Michael Littman is a professor of Computer Science at Brown University and co-director of the university's Humanity-Centered Robotics Initiative. His research focuses on understanding how robots can work well with people, for people's benefit. Previously, he was at Rutgers University and, before that, Duke. He has also worked as a consultant on the technical teams at AT&T Labs Research and Bellcore. Follow Michael on Twitter at @mlittmancs.
Michael and his fellow researchers study how humans react to robots, with the goal of designing robots that benefit society.
One thing they've found is that people often perceive robots as living entities with their own goals and aspirations, especially when the robots take a human-like form. With that in mind, Michael suggests it's best to treat robots with the same respect we would show another human being.
For instance, some companies showcase the strength of their robots by subjecting them to physical abuse: kicking them, pushing them down, closing doors on them. Many people are uncomfortable with this rough treatment, feeling that the behavior is mean or even abusive, which makes such demonstrations counterproductive.
Whether a robot is socially acceptable comes down to whether people are comfortable with how it looks and acts in a specific social setting.
The HCRI studies how people react to robots of different shapes and sizes to better understand how physical design informs the way people perceive them. Researchers point to different robots and ask questions like: Do you think this robot would be good at having a conversation with you? Do you think this robot would be good at fetching coffee for you? Do you think this robot would be good at taking care of your elderly father?
The lab maps out people's reactions to inform future design. The process aims to ensure that a robot triggers the right kind of reaction from people, one that's in line with its function.
Reinforcement learning is a form of machine learning in which the machine learns through an ongoing process of being rated, scored, or evaluated, rather than being told exactly how it should behave. Think of training a dog: you tell the dog (or machine, in this case), "Good job," or "That was terrible." The machine then searches through possible behaviors to figure out which would earn the highest possible reward and adjusts accordingly.
This approach is particularly effective for learning social norms, because it's so similar to the way we as people learn socially acceptable behavior. People don't typically give explicit feedback in social situations. Instead, they send and pick up on signals, like a smile or a frown, that indicate how well someone performed against their peers' expectations.
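To make the idea concrete, here is a minimal sketch of this kind of reward-driven learning in Python. The behavior names, the epsilon-greedy choice rule, and the +1/-1 signal standing in for a smile or a frown are all illustrative assumptions, not details of any system discussed in the interview.

```python
import random

# A toy sketch of learning from social feedback rather than explicit instructions.
# The behaviors and the +1/-1 "smile or frown" signal are made up for illustration.

BEHAVIORS = ["greet", "interrupt", "wait_quietly"]

values = {b: 0.0 for b in BEHAVIORS}  # estimated reward for each behavior
counts = {b: 0 for b in BEHAVIORS}    # how often each behavior has been tried

def choose_behavior(epsilon=0.1):
    """Mostly pick the behavior with the highest estimated value,
    but occasionally explore a random one."""
    if random.random() < epsilon:
        return random.choice(BEHAVIORS)
    return max(BEHAVIORS, key=lambda b: values[b])

def social_feedback(behavior):
    """Stand-in for a human reaction: +1 for a smile, -1 for a frown.
    Here 'interrupt' is (arbitrarily) the socially unacceptable choice."""
    return -1.0 if behavior == "interrupt" else 1.0

for step in range(1000):
    b = choose_behavior()
    reward = social_feedback(b)
    counts[b] += 1
    # Incremental average: nudge the estimate toward the observed reward.
    values[b] += (reward - values[b]) / counts[b]

print(values)  # 'interrupt' ends up with a low estimate; the others score high
```

After enough interactions, the agent settles on the behaviors that earn approval, without ever being told explicitly what the rule is, which is the core of the analogy above.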
Today, explainability is a key focus for researchers. Until recently, robots were designed to accomplish a specific task autonomously; now, robots are almost always connected to people in some way, and they must be able to offer evidence and reasoning for their decisions so they can reach a solution together with a human.
For instance, machines could be used to make medical diagnoses from chest x-rays, pointing doctors to areas of concern. However, doctors need to be able to question the diagnosis and understand the machine's reasoning in order to judge whether it is accurate and credible.
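As a hypothetical illustration of what that kind of questioning could look like, here is a toy sketch in which a scoring model returns not just a flag but the per-feature evidence behind it. The feature names, weights, and threshold are invented for the example and do not come from any real diagnostic system.

```python
# A minimal sketch of "explaining" a decision by reporting the evidence behind it.
# Feature names, weights, and the decision threshold are illustrative only.

FEATURE_WEIGHTS = {"opacity_score": 0.8, "nodule_size_mm": 0.05, "symmetry": -0.4}

def diagnose(findings):
    """Return a yes/no flag plus the per-feature contributions that produced it,
    so a doctor can see, and question, which findings drove the decision."""
    contributions = {name: FEATURE_WEIGHTS[name] * findings[name]
                     for name in FEATURE_WEIGHTS}
    score = sum(contributions.values())
    return score > 1.0, score, contributions

flagged, score, evidence = diagnose(
    {"opacity_score": 2.1, "nodule_size_mm": 12.0, "symmetry": 1.5}
)
print(flagged, round(score, 2))
for name, value in sorted(evidence.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name}: {value:+.2f}")  # largest contributors listed first
```

The point of the sketch is the interface, not the model: a decision accompanied by its supporting evidence gives the human something concrete to interrogate.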
In the future, as social robots become increasingly present in human life, people will not only buy a machine to accomplish a specific task but also educate it through social interaction so that it understands their needs, demands, and lifestyle. Michael notes that the student-teacher relationship is a very rich part of human-to-human contact, and he expects the human-machine teaching relationship to become similarly rich in the years to come.