You probably encounter it on a daily basis. Though you're not always aware. Your actions help it grow. Yet you rarely give it a second thought. Artificial intelligence is in your pocket. It's in your car, at the doctor's office, at your kid's school.

We comb through search results and social feeds on our screens. We rely on our GPS systems to suggest the best route. We make buying decisions based on recommendations by savvy algorithms that track our browsing habits. We make inquiries of our personal assistants dutifully standing by in our kitchens and dens, or at the ready on our phones. Alexa, what is AI?

Speech recognition, facial recognition, intelligent search: it's all AI. Whether we consider it helpful or intrusive, empowering or manipulative, the technology is at our disposal. How we use it is our choice.

RIA sought out notable voices in AI to help us better understand the sometimes elusive nature of artificial intelligence. These are researchers and entrepreneurs with decades of experience working in the AI and robotics fields. They help us understand why artificial intelligence won't take over the world (or us puny humans) anytime soon. But its rise is worth watching.

AI Still in Its Infancy, Performance Is Not Competence
The AI space is fraught with hype, fear and misconceptions. The experts say we need less hubris and more humility.

"I think the biggest misconception is how far along it is," says Rodney Brooks. "We've been working on AI, calling it AI since 1956 (when the Father of AI, John McCarthy, coined the term "artificial intelligence"), so roughly 62 years. But it's much more complicated than physics, and physics took a very long time. I think we're still in the infancy of AI."

Rodney Brooks says AI is still in its infancy and we should be careful not to mistake its performance for competence. Now and in the foreseeable future, there's no competition between machine intelligence and human intelligence. Humans remain smarter. (Photo courtesy of Rethink Robotics)

Brooks is Chairman and CTO of Rethink Robotics, which he cofounded with the goal of bringing smart, affordable, and easy-to-use collaborative robots to manufacturing. He is also a Cofounder of iRobot and the former Director of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). Among his many affiliations and accomplishments in the fields of computer vision, robotics and AI, Brooks is a Founding Fellow of the Association for the Advancement of Artificial Intelligence (AAAI).

He is so concerned about the misinformation surrounding AI and robotics that Brooks started a blog to provide some perspective. One of his latest posts provides Dated Predictions on current technological trends, including self-driving cars, space travel, robotics and deep learning.

Brooks believes much of the AI hype comes from recent press covering jaw-dropping demonstrations of anthropomorphic and animal-inspired robots, or spectator sports pitting AI systems against humans playing chess, Jeopardy!, ping-pong, and Go. Yes, AI is here. But in baby steps.

Some of the misunderstanding stems from equating machine performance with competence. When we see a human perform a certain task, we can assume a general competence – skills and talent – the person must possess in order to perform that task. It's not the same with AI.

"An AI system can play chess fantastically, but it doesn't even know that it's playing a game," says Brooks. "We mistake the performance of machines for their competence. When you see how a program learned something that a human can learn, you make the mistake of thinking it has the richness of understanding that you would have."

Take the Atlas robot by Boston Dynamics (now owned by SoftBank). A video of Atlas doing a backflip went viral, whipping the web into a feverish crescendo warning of an imminent robot ninja invasion. Not so, say our AI experts.

Brooks reminds us that these types of demonstrations are carefully scripted: "It had to do a lot of computations very fast, but that was a very careful setup. It didn't know it was doing a backflip. It didn't know where it was. It didn't know all sorts of things that a person doing a backflip would know, like 'Wow, I was just upside down!' The robot doesn't know what upside down is!

"It has some math equations, and the forces and vectors, but it has no way of reasoning about them," adds Brooks. "It's very different from us."

No Context, No Contest
An important distinction between human intelligence and machine intelligence is context. As humans, we have a greater understanding of the world around us. AI does not.

"We've been working on context in AI for 60 years and we're nowhere near there," says Brooks. "That's why I'm not worried that we're going to have super intelligent AI.

"We've been successful in some very narrow ways and that's the revolution right now, those narrow ways," continues Brooks. "Certainly speech understanding is radically different from what we had a decade ago. I used to make the joke that speech understanding systems were set up so that you press or say '2' for frustration. That's no longer true."

He cites Amazon's Alexa as an example. Google's Assistant and Apple's Siri are two more.

"You say something to Alexa and it pretty much understands it, even when music is playing, even when other people in the room are talking," says Brooks. "It's amazing how good it is, and that came from deep learning. So some of these narrow fields have gotten way better. And we will use those narrow pieces to the best advantage we can to make better products.

"When I started Rethink Robotics, we looked at all the commercial speech understanding systems. We decided at that point it was ludicrous to have any speech recognition in robots in factories. I think that's changed now. It may make sense. It didn't in 2008."

Speech recognition produces the right word strings. Brooks says accurate word strings are good enough to do a lot of things, but the system isn't as smart as a person.

"That's the difference," he says. "Getting the word strings is a narrow capability. And we're a long way from it being not so narrow."

These narrow capabilities have become the basis for many wildly optimistic AI predictions that are overly pessimistic about our role as humans in that future.

AI Predictions? Consider the Source
Some highly regarded personalities in science, technology and business warn of AI's impending doom for humankind. But those in the know make an important point. Consider the source.

"We can't take their word that robots and AI are suddenly going to take over the world," says Ken Goldberg. "These are smart people and so everyone assumes they know what they are talking about. But people who actually work with robotics realize that although this technology is making great progress, we are far from the humanlike robots portrayed in movies and, lately, in the press."

Ken Goldberg suggests rather than worrying about AI and robots surpassing human intelligence, we should focus on Multiplicity, where diverse combinations of people and machines work together to solve problems and innovate. (Photo courtesy of Ken Goldberg, Copyright Kathrin Miller)

Goldberg is a Professor and Distinguished Chair in industrial engineering and operations research at the University of California, Berkeley, where he is also Director of the CITRIS "People and Robots" Initiative and the Laboratory for Automation Science and Engineering (AUTOLAB). He holds eight patents and is widely published on the topics of algorithms for robotics, automation, and social information filtering. Among his other accolades and appointments, Goldberg was awarded RIA's prestigious Engelberger Robotics Award in 2000 for excellence in education.

Fear and Exaggeration
Both Goldberg and Brooks emphatically disagree with the propagators of AI exaggeration. They warn us to be especially wary of Chicken Little anxiety. The kind that warns of an AI apocalypse like rampant job losses or an army of super intelligent killer robots destined for world domination.

"People have a long history of fearing robots," says Goldberg. "It goes back to the ancient Greeks, or even further when you think of fears about technology running amok."

From Prometheus, to Frankenstein, to the Terminator, he cites a recurring theme that is deeply rooted in the human psyche. We fear those that are unfamiliar to us. We fear what we don't understand.

"AI is just the latest manifestation of the same story that's been told over and over again," says Goldberg.

Our experts note that most of the fear mongering comes from people who are not working in the AI field. Brooks and Goldberg echo what many automation and robotics insiders already know: robotics is far more complex than the headlines suggest.

"There are many tasks, even repetitive tasks, that are very subtle and require far more sophistication than current robots are capable of," says Goldberg. "While I think robots are getting better and we're making a lot of progress, I think it's important to temper these exaggerated expectations so we don't end up repeating the AI winter of the 1970s and '80s when there were huge expectations and then robots couldn't deliver.

"At the same time, we don't want to shoot ourselves in the foot by saying there's no robotics revolution here," continues Goldberg. "Because we do think there will be many more applications and uses for robotics, but not at the level of what people are talking about, where robots are on the verge of stealing half of our jobs."

Multiplicity and Diversity vs. Singularity
Goldberg says that much of the fear stems from the Singularity, a hypothetical point in time when AI and robots surpass human intelligence. He suggests that rather than worrying about a hypothetical that is either far off or improbable, we should focus on Multiplicity, where diverse combinations of people and machines work together to solve problems and innovate.

Multiplicity is already happening on the back end of search engines, social media platforms, and the many apps for moviegoers, shoppers, and vacationers. When we interact with these AI-supported services, every click or view sends a signal about our interests, preferences and intentions. The reward? Better results aligned with our preferences and better predictions of what we might want to do next. It's an interdependent relationship. Each needs the other to improve. And the more diverse the interactions, the more well-rounded they (we) become.

From Research to the Real World
Diversity is important as we move from the lab to AI's applications in the real world. Another of our experts, who is working to bring AI to the industrial world, also stresses the importance of humans and machines working together.

"That's part of the challenge," says Pieter Abbeel. "How are humans able to use this technology and take advantage of it to make themselves smarter, rather than just have these machines be something separate from us? When the machines are part of our daily lives, what we can leverage to make ourselves more productive, that's when it gets really exciting." (In 2010, everyone got really excited when Abbeel's research team posted a video showing a robot folding laundry.)

Pieter Abbeel is transitioning breakthrough research in machine learning into real world industrial applications for robots that can learn new skills on their own. (Photo courtesy of Embodied Intelligence)

Abbeel is a pioneer in deep reinforcement learning for robotics at UC Berkeley, where he is a Professor in the Department of Electrical Engineering and Computer Sciences and Director of the Robot Learning Lab. In 2011, he was named one of MIT Technology Review's 35 Innovators Under 35, among his other accomplishments. Abbeel is President and Chief Scientist of Embodied Intelligence, a startup he recently cofounded in Emeryville, California, which is developing AI software that will allow robots to learn new skills on their own.

He is also excited about AI's prospects but thinks some caution is warranted.

"I think there is a lot of progress, and as a consequence, a lot of excitement about AI," says Abbeel. "In terms of fear, I think it's good to keep in mind that the most prominent progress like speech recognition, machine translation, and recognizing what's in an image are examples of what's called supervised learning."

Abbeel says it's important to understand the different types of AI being built. In machine learning, there are three main types of learning: supervised learning, unsupervised learning and reinforcement learning.

"Supervised learning is just pattern recognition," he explains. "It's a very difficult pattern to recognize when going from speech to text, or from one language to another language, but that AI doesn't really have any goal or any purpose. Give it something in English and it will tell you what it is in Chinese. Give it a spoken sentence and it will transcribe it into a sequence of letters. It's just pattern matching. You feed it data – images and labels – and it's supposed to learn the pattern of how you go from an image to a label.

"Unsupervised learning is when you feed it just the images, no labels," continues Abbeel. "You hope that from just seeing a lot of images that it starts to understand what the world tends to look like and then by building up that understanding, maybe in the future it can learn something else more quickly. Unsupervised learning doesn't have a task. Just feed it a lot of data (as Google did with lots of cats).

"Then there's reinforcement learning, which is very different and more interesting, but much harder. (Reinforcement learning is credited for advancements in self-driving car technology.) It's when you give your system a goal. The goal could be a high score in a video game, or win a game of chess, or assemble two parts. That's where some of that fear can be justified. If AI has the wrong goal, what can happen? What should the goals be?"
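Abbeel's distinction can be sketched in a few lines of toy Python. The functions, the memorization "model," and the 0.1 learning rate below are illustrative inventions, not any real library: supervised learning fits a mapping from inputs to labels, while reinforcement learning has no labels at all, only a reward signal for each attempted action.

```python
import random

# Supervised learning: fit a mapping from inputs to labels (pattern matching).
def train_supervised(examples):
    # Toy "model": simply memorize exact input -> label pairs.
    return dict(examples)

# Reinforcement learning: no labels, just a reward for each chosen action.
def train_reinforcement(actions, reward_fn, episodes=200):
    random.seed(0)                               # deterministic toy run
    values = {a: 0.0 for a in actions}           # estimated value of each action
    for _ in range(episodes):
        action = random.choice(actions)          # explore by trial and error
        reward = reward_fn(action)               # the environment scores the attempt
        values[action] += 0.1 * (reward - values[action])  # running estimate
    return max(values, key=values.get)           # best action found so far

model = train_supervised([("hola", "hello"), ("adios", "goodbye")])
best = train_reinforcement(["slow", "fast"], lambda a: 1.0 if a == "fast" else 0.2)
```

The supervised learner can only reproduce patterns it was shown; the reinforcement learner discovers for itself which action earns the higher reward, which is exactly where Abbeel's question about choosing the right goal comes in.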

This is why it's important that humans and artificial intelligence don't evolve in isolation from each other. As we build smarter and smarter machines, our capabilities as humans will be augmented.

"What makes me very excited about what we're doing right now at Embodied Intelligence is that recent advances in artificial intelligence have given AI the ability to understand what it is seeing in pictures," says Abbeel. "Not human-level understanding, but pretty good. If a computer can really understand what's in an image, then maybe it can pick up two objects and assemble them. Or maybe it can sort through packages. Or pick things from shelves. Where I see a big change in the near future are tasks that rely on understanding what a camera feed is giving you."

More later on what Embodied Intelligence is doing with camera feeds as we explore AI's transition from the lab to the real world.

What AI Is and Isn't
AI has become a marketing buzzword. Like "robot" before it, now everything is seemingly AI-powered. What is and isn't artificial intelligence is sometimes difficult to pinpoint. Even the experts hesitate when it comes to identifying, definitively, what is and isn't AI. As Brooks notes, what was considered AI in the 1960s is now taught in the very first course on computer programming. But it's not called AI.

"It's called AI at some point," says Brooks. "Then later it just becomes computer science."

Machine learning, and all of its variations, including deep learning, reinforcement learning and imitation learning, are subsets of AI.

"AI was a very narrow field for a while. Some people saw it very specifically around a set of search-based techniques," explains Goldberg. "Now AI is widely seen as an umbrella term over robotics and machine learning, so now it's being embraced as a whole range of subfields."

Advanced computer vision is a form of AI.

"If you're just inspecting whether a screw is in the right place, we've had that since the '60s. It would be a stretch to call that AI," explains Goldberg. "But at the same time, a computer vision system that can recognize the faces of workers, we generally do think of that as AI. That's a much more sophisticated challenge."

Deep Learning for Robot Grasping
Goldberg's AUTOLAB has been focused on AI for over a decade, applying it to projects in cloud robotics, deep reinforcement learning, learning from demonstrations, and robust robot grasping and manipulation for warehouse logistics, home robotics, and surgical robotics.

A robot manipulates objects it has never encountered before after researchers teach a neural network how to recognize objects from millions of 3D models and images. (Photo courtesy of University of California, Berkeley)

The lab's Dexterity Network (Dex-Net) project has shown that AI can help robots learn to grasp objects of different sizes and shapes by feeding millions of 3D object models, images, and the metrics of how to grasp them to a deep-learning neural network. Previously, robots learned how to grasp and manipulate objects by practicing with different objects over and over, a time-consuming process. By using synthetic point clouds instead of physical objects to train the neural network to recognize robust grasps, the latest iterations of Dex-Net are much more efficient, achieving a 99 percent precision grasping rate.
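The selection step can be sketched in Python. In the real Dex-Net pipeline a trained network scores candidate grasps against a point cloud; in this toy sketch a crude center-of-mass heuristic stands in for the learned quality metric, and all names and numbers are illustrative:

```python
# Sketch of Dex-Net-style grasp selection: score candidate grasps on a
# point cloud and pick the most robust one. The learned quality network
# is replaced here by a stand-in heuristic.
def grasp_quality(grasp, point_cloud):
    # Placeholder for a learned metric: prefer grasps near the
    # object's centroid (a crude robustness proxy).
    cx = sum(p[0] for p in point_cloud) / len(point_cloud)
    cy = sum(p[1] for p in point_cloud) / len(point_cloud)
    gx, gy = grasp
    return 1.0 / (1.0 + (gx - cx) ** 2 + (gy - cy) ** 2)

def best_grasp(candidates, point_cloud):
    return max(candidates, key=lambda g: grasp_quality(g, point_cloud))

cloud = [(0, 0), (2, 0), (0, 2), (2, 2)]            # toy "point cloud"
print(best_grasp([(0, 0), (1, 1), (2, 2)], cloud))  # the centered grasp wins
```

The key design idea carries over from the toy to the real system: because quality is predicted from the point cloud rather than looked up per object, the same scorer generalizes to objects the robot has never seen.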

Watch ABB's YuMi robot, aided by Dex-Net 2.0, manipulate a variety of objects, including some it hadn't seen before. The neural network learned how to grasp new objects based on prior experiences with similarly shaped objects.

In the long term, Goldberg hopes to develop highly reliable robot grasping across a wide variety of rigid objects such as tools, household items, packaged goods, and industrial parts. He's also very interested in algorithms that can work across different robot types. The lab's research is sponsored by some heavy hitters, including Google, Amazon, Toyota, Intel, Autodesk, Cisco and Siemens.

Big Data Has Game
The National Football League is using AI technology. Anyone who's watched a major NFL event or the pre-game and post-game shows has probably seen it. With the 2018 Super Bowl around the corner, Brooks offers an example of AI that many Sunday couch potatoes may find relatable.

In the late '90s, Takeo Kanade, a world-renowned researcher in robotics and computer vision at Carnegie Mellon University, co-developed a system of robotic cameras and advanced algorithms that allows the playing field to be shot from multiple angles around the arena and then seamlessly integrated into a dynamic 3D panorama. By compiling the separate shots into a 3D reconstruction, the system produces an immersive 360-degree rendering of a play. Since debuting at Super Bowl XXXV in 2001, the technology has progressed significantly, to the point where EyeVision 360 was the talk of Super Bowl 50.

"They've patched it all together in real time and built up a complete three-dimensional model of all the players, so you can zoom in and look around in virtual reality to see where everyone was on the field," says Brooks. "That was a hot topic in artificial intelligence 10 years ago. How do we get three-dimensional reconstruction? Now it's something you see on the TV."

The technology has continued to advance and is now used in varied sporting arenas. Slick algorithms crunch a lot of data to bring Free Dimensional Video to life before our eyes.

Predictive Analytics on the Factory Floor
In the industrial arena, AI technology is used by robot manufacturer FANUC in its FIELD System (FANUC Intelligent Edge Link and Drive System). By creating an interactive web of connected machinery and equipment, the FIELD System is able to exploit immense amounts of data and draw intelligent conclusions, such as predict machine behavior or potential failures. Customers like General Motors are using FIELD to ready their factories for Industry 4.0.

Deep-Learning Cobots
Rethink Robotics' Intera 5 software gives the Baxter and Sawyer collaborative robots their smarts. Brooks says there's a lot of AI in the robots' vision and training capabilities. Watch Sawyer tend a CNC lathe at this custom injection molding company, where they plan to eventually repurpose the robot for other tasks.

Collaborative robot with integrated artificial intelligence tends a CNC lathe at a custom injection molder. Automating the process improved product quality and production efficiency, and saved operators from repetitive tasks. (Photo courtesy of Rethink Robotics)

"Traditional industrial robots don't have much intelligence," says Brooks. "But going forward, that's what we're doing. We're putting deep learning into the robots. We're trying to deal with variation because we think that's where 90 percent of manufacturing is with (robots) working in the same space as humans."

Sawyer and Baxter robots have a train-by-demonstration feature that puts AI to work.

"When you're training it by demonstration, you show it a few things by moving its arm around and it infers a program called a behavior tree," explains Brooks. "It writes a program for itself to run. You don't have to write a program."

Intera 5 is a graphical programming language. Brooks says you can view the behavior tree, modify it, or, if you want, write a behavior-tree program yourself, bypassing the automatic inference.

"That means someone working on the factory floor who is not a programmer can get the robot to do something new," explains Brooks. "It infers what they are asking it to do and then writes its own program."
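Rethink hasn't published Intera's internal representation, but the idea of inferring a replayable program from demonstrated arm poses can be sketched in minimal Python. The node types and dictionary structure here are assumptions for illustration only:

```python
# Sketch of train-by-demonstration: recorded arm poses become a behavior
# tree (here just a root sequence of "move" leaf nodes) the robot replays.
def infer_behavior_tree(demonstrated_poses):
    return {"type": "sequence",
            "children": [{"type": "move", "target": p} for p in demonstrated_poses]}

def run(tree, robot_log):
    if tree["type"] == "sequence":
        for child in tree["children"]:           # execute children in order
            run(child, robot_log)
    elif tree["type"] == "move":
        robot_log.append(tree["target"])         # stand-in for a motion command

tree = infer_behavior_tree([(0.1, 0.2), (0.4, 0.2), (0.4, 0.5)])
log = []
run(tree, log)
```

The point of the tree structure, as opposed to a flat script, is that a person can later inspect it, reorder branches, or add conditions graphically without writing code.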

AI Shifting the Paradigm for Robot Programming
Artificial intelligence is changing the way robots are programmed. Abbeel and his team at Embodied Intelligence are harnessing the power of AI to help industrial robots learn new, complex skills.

Their work evolved from the cofounders' research at UC Berkeley where they had a major breakthrough in using imitation learning and deep reinforcement learning to teach robots to manipulate objects. This video courtesy of UC Berkeley demonstrates the lab's breakthrough technology that led to the spin-off.

The startup uses a combination of sensing and control to teleoperate a robot. For sensing, an operator wears a virtual reality headset that shows the robot's view through its camera. On the control side, VR systems such as Oculus Rift and HTC Vive come with handheld controllers. As the operator moves the controllers, that motion is tracked, and the tracked coordinates and orientations are fed to a computer that drives the robot. That way the operator has direct control, like a puppeteer, over the motions of the robot's grippers.

"We allow the human to embed themselves inside the robot," says Abbeel. "So now the human can see through the robot's eyes and control the robot's hands."

He says that humans are so dexterous that there's no comparison between robot grippers and our hands. By working through the VR system, the operator is forced to follow the robot's constraints.

"You teach the essence of the skill to the robot by giving demonstrations," explains Abbeel. "It doesn't mean that it will be robotically fast at that point. It will do it at human pace, which is slow for most robots. That's the first phase (imitation learning). You teach the robot through demonstrations.

"Then in phase two the robot will run reinforcement learning, where it learns from its own trial and error," continues Abbeel. "The beauty here is that the robot has already learned the essence of the task. Now the robot only has to learn how to speed it up. That's something it can learn relatively quickly through reinforcement learning."
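The two phases Abbeel describes can be sketched in toy Python. Here the imitation phase just copies the demonstrated waypoints, and an assumed `succeeds_at` check stands in for real trial-and-error on hardware; the reinforcement phase searches for the fastest playback speed that still succeeds:

```python
import random

# Sketch of the two-phase approach: imitation gives the trajectory,
# reinforcement learning then searches for a faster reliable execution.
def imitation_phase(demonstration):
    return list(demonstration)                  # phase 1: copy the human's waypoints

def reinforcement_phase(trajectory, succeeds_at, trials=100):
    random.seed(1)                              # deterministic toy run
    best_speed = 1.0                            # start at human pace
    for _ in range(trials):
        speed = random.uniform(1.0, 5.0)        # try a faster playback speed
        if succeeds_at(speed) and speed > best_speed:
            best_speed = speed                  # keep the fastest speed that works
    return trajectory, best_speed

demo = [(0, 0), (1, 0), (1, 1)]
# Assume (for the toy) that the task succeeds at up to 3x human pace.
traj, speed = reinforcement_phase(imitation_phase(demo), lambda s: s <= 3.0)
```

The beauty Abbeel points to shows up even in the toy: phase two never has to rediscover the task itself, only a single parameter (speed), which is a vastly smaller search than learning the skill from scratch.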

Operator wearing a virtual reality headset and holding motion-tracking devices teleoperates a robot, showing it how to grasp and manipulate objects so that it can eventually learn how to perform new skills on its own using reinforcement learning. (Photo courtesy of Embodied Intelligence)

Abbeel says their technology is particularly suited for challenging vision and manipulation tasks that are currently too complex for traditional software programming techniques. Applications include working with deformable objects that change shape during handling, such as wires, cables and textiles. Bin picking is another potential application.

Embodied Intelligence raised $7 million in seed funding last fall. Abbeel says they've been in talks with over 100 companies to understand their needs and determine if there's a good fit for the technology, noting that the software is agnostic to the type of robot.

"We replicate their setup at our office and then start collecting demonstrations and writing the code for the robot to learn from those demonstrations," says Abbeel. "Then we coordinate with the partner company to ensure that we're teaching the robot to their specs."

He says the types of partner companies include manufacturers of cars, electronics or clothing, also contract manufacturers, warehouse and logistics operations, and companies in pharmaceuticals, agriculture, and construction.

"We're changing the way robots are programmed," says Abbeel. "We write code for imitation learning and we write code for reinforcement learning. Once that code is in place, when you want a new deployment, we don't write new code. Instead, we collect new data. So the paradigm shifts from new software development for a new deployment, to new data collection for new deployment. Data collection is generally easier. It's a lower bar than new software engineering."

Eventually, Embodied Intelligence will let other people use this software to reprogram their robots by doing their own demonstrations. This will allow any company, large or small, to quickly redeploy their robots for different tasks.

AI's Brain in the Cloud
Disruptive technologies and emerging technological trends like Industry 4.0 and the Smart Home are increasingly becoming interdependent. Advances in deep learning for image classification and speech recognition have relied heavily on huge datasets with millions of examples. AI requires vast amounts of data, more than can reside on most local systems. Enter cloud robotics, a vital enabler for today's AI-powered robots.

Cloud robotics enables information sharing so that intelligence, basically learned skills, can be collectively shared across all the robots in a connected environment. It also allows for collaboration so two or more remote robots, or human-robot teams, can work together to perform a task, even when miles apart.
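Conceptually, this sharing works like a common registry of learned skills. The class names and the string-returning "skill" in this minimal Python sketch are invented for illustration; the point is that one robot's upload becomes immediately usable by every connected robot:

```python
# Sketch of cloud skill sharing: skills learned once are stored centrally
# and every connected robot can call them.
class SkillCloud:
    def __init__(self):
        self.skills = {}                         # shared skill store

    def upload(self, name, skill_fn):
        self.skills[name] = skill_fn             # one robot's learning...

class Robot:
    def __init__(self, cloud):
        self.cloud = cloud                       # every robot shares one cloud

    def perform(self, name, *args):
        return self.cloud.skills[name](*args)    # ...is every robot's skill

cloud = SkillCloud()
robot_a, robot_b = Robot(cloud), Robot(cloud)
cloud.upload("greet", lambda who: f"hello, {who}")
print(robot_b.perform("greet", "world"))         # robot_b never "learned" greet
```

The design choice mirrors the cloud robotics argument: keeping skills in the shared store rather than on each robot means the fleet improves collectively, at the cost of requiring connectivity.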

As more AI-powered robots enter the market, they will need to be connected to a common platform. CloudMinds Technology, a new startup founded in 2015, wants to be that common platform, branding itself the world's first cloud robot operator.

Robert Zhang says AI-powered robots will require the cloud to make them smarter and more capable. (Photo courtesy of CloudMinds Technology)

"The reason we wanted to create CloudMinds is because we feel this is an opportunity to apply our telecommunications background to robotics and AI," says Robert Zhang, Cofounder and President. "We want to be the operators for robots."

Zhang, who holds a PhD in computational mechanics, hails from consumer electronics giants such as Samsung, Microsoft and Apple. His cofounder, serial entrepreneur and CEO Bill Huang, was formerly the General Manager of the China Mobile Research Institute, where he led the development of the first carrier Android project, among other firsts. CloudMinds has attracted some prominent investors, including SoftBank and Foxconn.

The startup will provide a cloud platform and related services to companies that design and sell robot hardware, much like a mobile network operator provides wireless communications services to its customers. While Zhang says their platform will work equally well for industrial robots and consumer robots, they are focusing on making consumer robots smarter.

"For a robot to be able to do tasks in the home environment, the AI needed is very powerful," says Zhang. "For a robot to prepare a meal or fold your laundry, these are very unstructured tasks. You can't realistically put AI capability on the robot. It's commercially impossible, because there are so many different tasks. That's why AI should come from the cloud.

"No matter what the robot is designed for originally, you can always have the cloud to make the robot more capable and more intelligent," he adds.

We're still a "few" years away from anything close to The Jetsons' Rosie. Yet some of the AI-powered technologies that may be integral to a robot housekeeper, like computer vision, manipulation capabilities, speech recognition, and mapping and navigation, have already emerged from the lab into the real world.

CloudMinds plans to be at the forefront when the new wave of mobile manipulators goes online. The cloud robot operator invests in many of the prominent AI and robotics university programs around the country, including those at Stanford, Carnegie Mellon, UC Berkeley, and Harvard. They've also partnered with startups like Agility Robotics, maker of Cassie, the bipedal marvel we profiled in our story on robotics clusters. And let's not forget investor partner SoftBank's Pepper humanoid robot.

They envision a community of millions of robots sharing what they've learned in the cloud so they can take better care of us.

Our Collective Potential
Cloud robotics, machine learning, computer vision, speech recognition – all the facets of AI are making progress, and at times remarkable strides in specific areas. But still artificial intelligence has nothing on us.

Even if robots with the help of AI and human engineering are someday able to approach our dexterity, they may never truly grasp the world around them in all of its fragility and potential. Context and ingenuity will remain in the realm of humans.

Technology is neither bad nor good. It's how we use it. With AI and robotics, we humans have tremendous potential for good. In the coming months, we will take a closer look at those partnerships between humans and robots, and how we can collaborate to enhance each other's capabilities as we evolve together.