Like it or not, we’re surrounded by robots. Thousands of Americans ride to work these days in cars that pretty much drive themselves. Vacuum cleaners scoot around our living rooms on their own. Quadcopter drones automatically zip over farm fields, taking aerial surveys that help farmers grow their crops. Even scary-looking humanoid robots, ones that can jump and run like us, may be commercially available in the near future.
Robotic devices are getting pretty good at moving around our world without any intervention from us. But despite these newfound skills, they come with a major weakness: The most talented of the bunch can still be stopped in their tracks by a simple doorknob.
The issue, says Matt Mason, a roboticist at Carnegie Mellon University, is that for all of robots’ existing abilities to move around the world autonomously, they can’t yet physically interact with objects in a meaningful way once they get there.
“What have we learned from robotics? The number one lesson is that manipulation is hard. This is contrary to our individual experience, since almost every human is a skilled manipulator,” writes Mason in a recent review article.
It’s a fair point. We humans manipulate the world around us without thinking. We grab, poke, twist, chop and prod objects almost unconsciously, thanks in part to our incredibly dexterous hands. As a result, we’ve built our worlds with those appendages in mind. All the cellphones, keyboards, radios and other tools we’ve handled throughout our lifetime have been designed explicitly to fit into our fingers and palms.
Not so for existing robots. At the moment, one of the most widely used robotic hand designs, called a “gripper,” is more or less identical to ones imagined on TV in the 1960s: a device made of two stiff metal fingers that pinch objects between them.
In a controlled environment like an assembly line, devices like these work just fine. If a robot knows that every time it reaches for a specific part, it’ll be in the same place and orientation, then grasping it is trivial. “It’s clear what kind of part is going to come down the conveyor belt, which makes sensing and perception relatively easy for a robot,” notes Jeannette Bohg, a roboticist at Stanford University.
The real world, on the other hand, is messy and full of unknowns. Just think of your kitchen: There may be piles of dishes drying next to the sink, soft and fragile vegetables lining the fridge, and multiple utensils stuffed into narrow drawers. From a robot’s perspective, Bohg says, identifying and manipulating that vast array of objects would be utter chaos.
“This is in a way the Holy Grail, right? Very often, you want to manipulate a wide range of objects that people commonly manipulate, and have been made to be manipulated by people,” says Matei Ciocarlie, a robotics researcher and mechanical engineer at Columbia University. “We can build manipulators for specific objects in specific situations. That’s not a problem. It’s versatility that’s the difficulty.”
To deal with the huge number of unique shapes and physical properties of those materials — whether they’re rigid, like a knife, or deformable, like a piece of plastic wrap — an ideal robotic appendage would need to resemble what’s at the end of our arms. Even with rigid bones, our hands bend and flex as we grasp items, so if a robot’s hand could do the same, it could “cage” objects inside its grasp and move them around on a surface by raking at them, much as an infant rakes at her toys.
Engineering that versatility is no small feat. When engineers at iRobot — the same company that brought you the Roomba vacuum cleaner — developed a flexible, three-fingered “hand” several years ago, it was hailed as a major advance. Today, roboticists continue to turn away from faithful replicas of the human hand, looking instead to squishy materials and to better computational tools, like machine learning, to control them.
The quest for soft, flexible “hands”
“Humanlike grippers tend to be much more delicate and much more expensive, because you have a lot more motors and they’re packed into a small space,” says Dmitry Berenson, who studies autonomous robotic manipulation at the University of Michigan. “Really, you’ve got to have a lot of engineering to make it work, and a lot of maintenance, usually.” Because of those limitations, he says, existing humanlike hands aren’t widely used by industry.
For a robotic hand to be practical and even come close to a human’s in ability, it would have to be firm but flexible; be able to sense cold, heat and touch at high resolutions; and be gentle enough to pick up fragile objects but robust enough to withstand a beating. Oh, and on top of all that, it would have to be cheap.
To get around this problem, some researchers are looking to create a happy medium: hands that mimic some of the traits of our own but are far simpler to design and build. Each of these hands uses soft latex “fingers” driven by tendon-like cables that pull them open and closed. The advantage of these designs is their literal flexibility — when they encounter an object, they can squish around it, conform to its complex shape, and scoop it up neatly.
Such squishy “hands” offer a major improvement over a hard metal gripper. But they only begin to solve the issue. Although a rubbery finger works great for picking up all sorts of objects, it will struggle with the fine motor skills needed for simple tasks like placing a coin into a slot — which involves not just holding the coin, but also feeling for the slot, avoiding its edges, and sliding the coin inside. For that reason, says Ciocarlie, creating sensors that tell robots more about the objects they touch is an equally important part of the puzzle.
Our own fingertips have thousands of individual touch receptors embedded within the skin. “We don’t really know how to build those kinds of sensors, and even if we did, we would have a very hard time wiring them and getting that info back out,” Ciocarlie says.
The sheer number of sensors required would raise a second, even knottier issue: what to do with all that information once you have it. Computational methods that let a robot use huge amounts of sensory data to plan its next move are starting to emerge, says Berenson. But getting those abilities up to where they need to be may trump all other challenges researchers face in achieving autonomous manipulation. Building a robot that can use its “hands” quickly and seamlessly — even in completely novel situations — may not be possible unless engineers can endow it with a form of complex intelligence.
That brainpower is something many of us humans take for granted. To pick up a pencil on our desk, we simply reach out and grab it. When eating dinner, we use tongs, forks, and chopsticks to grab our food with grace and precision. Even amputees who have lost upper limbs can learn to use prosthetic hooks for tasks that require fine motor skills.
“They can tie their shoes, they can make a sandwich, they can get dressed — all with the simplest mechanism. So we know it’s possible if you have the right intelligence behind it,” Berenson says.
Teaching the machine
Getting to that level of intelligence may require a leap beyond the methods researchers currently use to control robots, says Bohg. Until recently, most manipulation software has involved building detailed mathematical models of real-world situations, then letting the robot use those models to plan its motion. One recently built robot tasked with assembling an Ikea chair, for example, uses a software model that can recognize each individual piece, understand how it fits together with its neighbors, and compare the assembly to what the final product should look like. It can finish the job in about 20 minutes. Ask it to assemble a different Ikea product, though, and it’ll be completely flummoxed.
Humans develop skills very differently. Instead of building deep knowledge of a single narrow topic, we pick up skills on the fly through examples and practice, reinforcing attempts that work and dismissing ones that don’t. Think back to the first time you learned to chop an onion — once you figured out how to hold the knife and slice a few times, you likely didn’t have to start from scratch when you encountered a potato. So how does one get a robot to do that?
Bohg thinks the answer may lie in “machine learning,” a sort of iterative process that allows a robot to understand which manipulation attempts are successful and which aren’t — and enables it to use that information to maneuver in situations it’s never encountered.
“Before machine learning entered the field of robotics, it was all about modeling the physics of manipulation — coming up with mathematical descriptions of an object and its environment,” she says. “Machine learning lets us give a robot a bunch of examples of objects that someone has annotated, showing it, ‘Here is a good place to grab.’” A robot can then use those past examples to look at an entirely new object and work out how to grasp it.
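To make that idea concrete, here is a minimal, hypothetical sketch of the kind of learning Bohg describes: a simple classifier trained on human-annotated grasp examples, which then scores candidate grasp points on an object it has never seen. The feature names, numbers, and labels are invented for illustration; a real system would extract them from camera or depth data.

```python
# Hypothetical sketch: learn grasp quality from annotated examples, then
# score candidate grasp points on a new object. All numbers are toy data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row describes one candidate grasp point with made-up geometric
# features (say: local surface curvature, gap between finger contacts,
# height above the table). A person has labeled each one good (1) or bad (0).
annotated_grasps = np.array([
    [0.10, 0.04, 0.02],
    [0.90, 0.12, 0.30],
    [0.20, 0.05, 0.05],
    [0.85, 0.15, 0.25],
])
labels = np.array([1, 0, 1, 0])  # "Here is a good place to grab" vs. not

model = LogisticRegression().fit(annotated_grasps, labels)

# Candidate grasp points measured on an object the robot has never seen.
new_object_candidates = np.array([
    [0.15, 0.05, 0.03],
    [0.80, 0.20, 0.28],
])
scores = model.predict_proba(new_object_candidates)[:, 1]
print("Grasp-quality scores:", scores)
print("Best candidate:", new_object_candidates[int(np.argmax(scores))])
```

A real grasp-learning system would swap these hand-picked features for images or point clouds and the logistic model for a deep network, but the shape of the pipeline is the same: annotated examples in, grasp scores for unseen objects out.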
This method represents a major change from previous modeling techniques, but it may be a while before it’s sophisticated enough to let robots learn entirely on their own, says Berenson. Many existing machine-learning algorithms need to be fed vast amounts of data about possible outcomes — like all the potential moves in a chess game — before they can start to work out the best possible plan of attack. In other cases, they may need hundreds, if not thousands, of attempts to manipulate a given object before they stumble across a strategy that works.
That will have to change if a robot is to move and interact with the world as quickly as people can. Rather than needing thousands of tries, Berenson says, an ideal robot should be able to develop new skills through just a few rounds of trial and error, or extrapolate new actions from a single example.
“The big question to overcome is, how do we update a robot’s models not with 10 million examples, but one?” he says. “To get it to a point where it says, ‘OK, this didn’t work, so what do I do next?’ That’s the real learning question I see.”
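As a toy illustration of that one-example update (and not any lab’s actual algorithm), the sketch below pretends an object only yields to grasps near one wrist angle; after each single failure the robot adjusts its next attempt immediately, rather than retraining on millions of samples. The simulated_grasp function is a hypothetical stand-in for feedback from real hardware.

```python
# Toy illustration of adjusting after individual failures; not a real method.
def simulated_grasp(angle_deg: float) -> bool:
    """Stand-in for hardware feedback: this object only yields near 40 degrees."""
    return abs(angle_deg - 40.0) < 8.0

def grasp_with_retries(start_angle: float = 0.0, step: float = 10.0,
                       max_tries: int = 20) -> tuple[int, float]:
    """Try a grasp; after each failure, make one adjustment and try again."""
    angle = start_angle
    for attempt in range(1, max_tries + 1):
        if simulated_grasp(angle):
            return attempt, angle
        angle += step  # one failure, one immediate update to the plan
    raise RuntimeError(f"Gave up after {max_tries} attempts")

attempts, angle = grasp_with_retries()
print(f"Succeeded on attempt {attempts} at roughly {angle:.0f} degrees")
```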
Mason, the roboticist from Carnegie Mellon, agrees. The challenge of programming robots to do what we do mindlessly, he says, is summed up by something called Moravec’s paradox (named after the robotics pioneer Hans Moravec, who also teaches at Carnegie Mellon). It states, in short, that what’s hard for humans to do is often handled with ease by robots, but what’s second nature for us is incredibly hard to program. A computer can play chess better than any person, for instance — but getting it to recognize and pick up a chess piece on its own has proved to be staggeringly difficult.
For Mason, that still rings true. Despite the gradual progress that researchers are making on robotic control systems, he says, the basic concept of autonomous manipulation may be one of the toughest nuts the field has yet to crack.
“Rational, conscious thinking is a relatively recent development in evolution,” he says. “We have all this other mental machinery that over hundreds of millions of years developed the ability to do amazing things, like locomotion, manipulation, perception. Yet all those things are happening below the conscious level.
“Maybe the stuff we think of as higher cognitive function, like being able to play chess or do algebra — maybe that stuff is dead trivial compared to the mechanics of manipulation.”