This is an episode in the “What Makes Us Human?” podcast from Cornell University’s College of Arts & Sciences, showcasing the newest thinking from across the disciplines about what it means to be human in the twenty-first century. Featuring audio essays written and recorded by Cornell faculty, the series releases a new episode each Tuesday through the fall.
In the growing debate around artificial “superintelligence,” I frequently hear worries that humans will become obsolete. Will robots eventually “take over” from humans and begin to “act on their own”?
Many roboticists, myself included, don’t see the future as the classic narrative of “humans against machines.” Instead, we envision a world in which humans and machines work together.
For example, picture a factory worker assembling a part shoulder-to-shoulder with a robot, or a human nurse supervising a crew of medicine-delivery robots. Imagine a school teacher with a robotic assistant that helps students when they get stuck, or an office worker scheduling meetings with a robot’s assistance.
At home, too, robots can work with us: helping with cooking or cleaning, entertaining us, and encouraging us to exercise. Robots can assist people with carpentry and help children with music lessons.
All of this is part of the research field called “Human-Robot Interaction,” an exciting interdisciplinary field that investigates the interface between machines and people. My work, and that of my colleagues, is inspired by the prospect of robots engaging people in long-term, tightly-coupled, and personal relationships.
It turns out that we are highly influenced by how robots behave around us. For example, in one experiment, we found that a robot that seems to enjoy music, dancing to it and tapping its foot, leads people to rate a song higher than when they listen to the same song with a robot whose moves are unrelated to the music. In another example, when people talk to a robot about an emotionally difficult personal event, they feel more positive when the robot responds with subtle body gestures and short phrases suggesting that it cares about their experience.
When I started this research, I expected that people would want robots to be as precise and predictable as possible and just fulfill human commands. Instead, I found that there are times when giving up control to a machine actually enhances people’s experience.
My research shows that people prefer a robot that predicts what they want and acts a little ahead of time, even when that means the robot will sometimes make mistakes by guessing wrong about what the person will do. We expect the machine to take initiative and to make its own judgment about the situation, instead of just waiting for our commands. I believe this is an inherent part of what we expect from artificial intelligence.
Giving up some control will make robots better teammates and better companions, and will free us up to make other, more meaningful, decisions. But it will also bring new dangers and new questions.
For example, we will need to consider legal questions of robots’ responsibility for their actions, cultural questions about racial profiling and gender, and economic questions about the future labor market in an increasingly automated society.
In the end, the question of control between human and machine is not a binary one. It’s a gradient. This means we will need to think deeply and decide — for each particular case of artificial intelligence and robotics — what we want to retain control of, and how much. In my opinion, this is not a technical question for engineers and computer scientists, but one for society at large.