The ConversAItion: Season 2 Episode 9

Michael Littman: Humanity-Centered Robotics

Jim speaks with Dr. Michael Littman about how robots can work best with humans. They discuss everything from robotics design to how humans can play a role in teaching robots socially acceptable behavior, informed by Michael’s work as a co-director and researcher at Brown University’s Humanity-Centered Robotics Initiative.

“People have visceral reactions to the form and function of robots, and we want to map that out more accurately so we can feed it into the design process. Ideally, the robots that are created actually trigger the right kinds of reactions in people.”
Michael Littman

About Michael Littman

Michael Littman is a professor of Computer Science at Brown University and co-director of the university’s Humanity-Centered Robotics Initiative. His research focuses on understanding how robots can work well with people, for the benefit of people. Previously, he was at Rutgers University and, before that, Duke. He has also worked as a consultant on the technical teams at AT&T Labs Research and Bellcore. Follow Michael on Twitter at @mlittmancs.

Short on time? Here are 5 quick takeaways:

  1. Michael’s work with Brown’s Humanity-Centered Robotics Initiative is focused on researching how robots can work well with people, for the benefit of people.

    Michael and his fellow researchers study how humans react to robots, with the goal of designing robots that benefit society.

    One thing they’ve found is that people often perceive robots as living entities with their own goals and aspirations—especially when the robots take a human-like form. With that in mind, Michael suggests it’s best to treat robots with the same respect we would another human being.

    For instance, some companies showcase the strength of their robots by subjecting them to physical harm—kicking them, pushing them down, closing doors on them—but many people are uncomfortable with this rough treatment, feeling that the behavior is mean or even abusive. Michael sees these demonstrations as unproductive.

  2. Humans make assumptions about the function and purpose of a robot simply based on its physical shape—robots should be designed with this human reaction in mind.

    A robot’s social acceptability is measured by whether humans are comfortable with how it looks and acts in specific social settings.

    The HCRI studies how people react to robots of different shapes and sizes to better understand how physical design informs the way people perceive them. Researchers point to different robots and ask questions like: Do you think this robot would be good at having a conversation with you? Do you think this robot would be good at fetching coffee for you? Do you think this robot would be good at taking care of your elderly father?

    The lab maps out people’s reactions to inform future design. The process aims to ensure that a robot triggers the right kind of reaction from people, one that’s in line with its function.

  3. Reinforcement learning – the process of teaching machines to learn from the consequences of their actions – plays a key role in training robots to adhere to social norms.

    Reinforcement learning is a form of machine learning in which the machine learns through an ongoing process of being rated, scored or evaluated, rather than being told exactly how it should behave. Think of training a dog: you tell the dog (or machine, in this case), “Good job” or “That was terrible.” The machine then searches through possible behaviors to figure out which would earn the highest possible reward and adjusts accordingly. (A minimal sketch of this feedback loop appears after this list.)

    This approach is particularly effective for teaching social norms because it’s so similar to the way we as people learn socially acceptable behavior. People don’t typically give explicit feedback in social situations. Instead, they send and pick up on signals – like a smile or frown – that indicate how a person performed relative to their peers’ expectations.

  4. An important next step for robots is to achieve explainability, meaning they can answer questions about and provide reasoning for their decisions and behaviors.

    Today, explainability is a key focus for researchers. Until recently, robots were designed to accomplish a specific task autonomously; now, robots are almost always connected to people in one way or another, and must be able to offer evidence and reasoning for their decisions in order to reach a solution together with a human.

    For instance, machines could be used to make medical diagnoses based on chest x-rays, pointing doctors to areas of concern. However, doctors need to be able to question the machine to fully understand why its diagnosis is accurate and credible.

  5. Looking ahead, humans will play a key role in training machines so they integrate into everyday life in a socially acceptable way.

    In the future, as social robots become increasingly present in human life, people will not only buy a machine to accomplish a specific task, but also educate it through social interaction so it understands their needs, demands and lifestyle. Michael notes that the student-teacher relationship is very rich in human-to-human contact, and he expects the human-machine teaching relationship to become similarly rich in the years to come.
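
Below is a minimal Python sketch of the reward-driven loop described in takeaway 3. Everything in it is illustrative rather than from the episode: the candidate behaviors, the human_feedback function standing in for an actual human rater, and the simple bandit-style learner that estimates each behavior’s average reward, mostly exploits the best estimate and occasionally explores.

```python
import random

# Candidate behaviors the robot can try in a social situation (all illustrative).
BEHAVIORS = ["greet_verbally", "wave", "keep_distance", "interrupt_conversation"]

def human_feedback(behavior):
    """Stand-in for a human rater: +1 for 'good job', -1 for 'that was terrible'.
    A real system would collect this signal from an actual person."""
    return -1.0 if behavior == "interrupt_conversation" else 1.0

def train(episodes=500, epsilon=0.1):
    """Estimate each behavior's average reward from evaluative feedback alone."""
    value = {b: 0.0 for b in BEHAVIORS}  # running reward estimates
    count = {b: 0 for b in BEHAVIORS}    # times each behavior was tried

    for _ in range(episodes):
        if random.random() < epsilon:
            behavior = random.choice(BEHAVIORS)       # explore other behaviors
        else:
            behavior = max(BEHAVIORS, key=value.get)  # exploit the best-rated one

        reward = human_feedback(behavior)
        count[behavior] += 1
        # Incremental average: nudge the estimate toward the observed reward.
        value[behavior] += (reward - value[behavior]) / count[behavior]

    return value

if __name__ == "__main__":
    for behavior, score in sorted(train().items(), key=lambda kv: -kv[1]):
        print(f"{behavior:24s} estimated reward: {score:+.2f}")
```

Run it and interrupt_conversation ends up with the lowest estimated reward: with nothing but “good job” and “that was terrible” signals, and no explicit instructions, the learner comes to avoid the socially unacceptable behavior.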

Check out more episodes of The ConversAItion.