The ConversAItion: Season 2, Episode 9

Michael Littman: Humanity-Centered Robotics

Jim speaks with Dr. Michael Littman about how robots can work best with humans. They discuss everything from robotics design to how humans can play a role in teaching robots socially acceptable behavior, informed by Michael’s work as a co-director and researcher at Brown University’s Humanity-Centered Robotics Initiative.
“People have visceral reactions to the form and function of robots, and we want to map that out more accurately so we can feed it into the design process. Ideally, the robots that are created actually trigger the right kinds of reactions in people.”
Michael Littman

About Michael Littman

Michael Littman is a professor of Computer Science at Brown University, and co-director of the college’s Humanity-Centered Robotics Initiative. His research is focused on understanding how robots can work well with people, for the benefit of people. Previously, he was at Rutgers University and, before that, Duke. He has also worked as a consultant on the technical teams at AT&T Labs Research and Bellcore. Follow Michael on Twitter at @mlittmancs.

Short on time? Here are 5 quick takeaways:

  1. Michael’s work with Brown’s Humanity-Centered Robotics Initiative is focused on researching how robots can work well with people, for the benefit of people.

    Michael and his fellow researchers study how humans react to robots. Their goal is to optimize the creation of robots for the benefit of society. 

    One thing they’ve found is that people often perceive robots as living entities with their own goals and aspirations—especially when the robots take a human-like form. With that in mind, Michael suggests it’s best to treat robots with the same respect we would another human being.

    For instance, some companies showcase the strength of their robots by having them undergo physical harm—kicking them, pushing them down, closing doors on them—but many people are uncomfortable with this rough treatment of robots, feeling that the behavior is mean or even abusive, which makes such demonstrations counterproductive.

  2. Humans make assumptions about the function and purpose of a robot simply based on its physical shape—robots should be designed with this human reaction in mind.

    A robot's social acceptability is measured by whether humans are comfortable with how it looks and acts in specific social settings.

    The HCRI studies how people react to robots of different shapes and sizes to better understand how physical design informs the way people perceive them. Researchers point to different robots and ask questions like: Do you think this robot would be good at having a conversation with you? Do you think this robot would be good at fetching coffee for you? Do you think this robot would be good at taking care of your elderly father?

    The lab maps out people’s reactions to inform future design. The process aims to ensure that a robot triggers the right kind of reaction from people, one that’s in line with its function.

  3. Reinforcement learning – the process of teaching machines to learn from the consequences of their actions – plays a key role in training robots to adhere to social norms.

    Reinforcement learning is a form of machine learning in which the machine learns through an ongoing process of being rated, scored, or evaluated, rather than being told exactly how it should behave. Think of training a dog: you tell the dog (or machine, in this case), “Good job,” or “That was terrible.” The machine then searches through possible behaviors to figure out what would obtain the highest possible reward and adjusts accordingly.

    This approach is particularly effective for social norms because it closely mirrors the way people learn socially acceptable behavior. People don’t typically give explicit feedback in social situations. Instead, they send and pick up on signals – like a smile or frown – that indicate how someone performed against their peers’ expectations. (A minimal code sketch of this kind of reward-driven learning appears after this list.)

  4. An important next step for robots is to achieve explainability, meaning they can answer questions about, and provide reasoning for, their decisions and behaviors.

    Today, explainability is a key focus for researchers. Until recently, robots were designed to accomplish a specific task autonomously; now, robots are almost always connected to people in some way or another, and they must be able to offer evidence and reasoning for their decisions so they can reach a solution together with a human.

    For instance, machines could be used to make medical diagnoses based on chest X-rays, pointing doctors to areas of concern. But doctors need to be able to question that diagnosis and understand the machine’s reasoning before they can judge whether it is accurate and credible.

  5. Looking ahead, humans will play a key role in training machines so they integrate into everyday life in a socially acceptable way.

    In the future, as social robots become increasingly present in human life, people will not only buy a machine to accomplish a specific task, but also educate it through social interaction so it understands their needs, demands, and lifestyle. Michael notes that the student-teacher relationship is very rich in human-human contact, and he expects we will see an enrichment of the human-machine teaching relationship in the years to come.
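To make the reinforcement learning idea in takeaway 3 concrete, here is a minimal sketch in Python of a learner that receives only “good job” / “that was terrible” style scores. It is a toy, bandit-style illustration under assumed conditions, not code from Michael’s lab or the HCRI; the behavior names, the human_feedback function, and the hyperparameters are all hypothetical.

```python
import random

# Toy sketch of reward-only ("reinforcement") learning of a social norm.
# The behaviors, the feedback function, and the hyperparameters below are
# invented for illustration; this is not code from the HCRI lab.

BEHAVIORS = ["speak quietly", "speak loudly", "stand close", "keep distance"]

def human_feedback(behavior):
    """Stand-in for a person's reaction: +1 ("good job") or -1 ("that was terrible")."""
    socially_acceptable = {"speak quietly", "keep distance"}
    return 1.0 if behavior in socially_acceptable else -1.0

def train(episodes=500, epsilon=0.1, learning_rate=0.1):
    """Learn how well each behavior is received, using only scalar ratings."""
    value = {b: 0.0 for b in BEHAVIORS}  # estimated reward per behavior
    for _ in range(episodes):
        # Mostly pick the best-looking behavior, occasionally explore others.
        if random.random() < epsilon:
            behavior = random.choice(BEHAVIORS)
        else:
            behavior = max(value, key=value.get)
        reward = human_feedback(behavior)  # the only signal the learner gets
        value[behavior] += learning_rate * (reward - value[behavior])
    return value

if __name__ == "__main__":
    for behavior, score in sorted(train().items(), key=lambda kv: -kv[1]):
        print(f"{behavior:15s} estimated reward {score:+.2f}")
```

The key point, matching the distinction Michael draws in the interview, is that the learner is never shown the correct behavior; it only sees how each choice was rated, and it has to search the space of behaviors for whatever earns high reward.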

Read the transcript

EPISODE 09: MICHAEL LITTMAN

Creating Socially Acceptable Robots

Jim Freeze Hi Michael, this is Jim Freeze—you picked up too quickly. We were just loving that Skype music. 

Michael Littman Boop, boop, boop [singing].  

Jim Freeze [Laughter.] Yeah, exactly. You know it. Oh, how are you today?

Michael Littman I’m fine, thanks. How are you?

Jim Freeze I am terrific, I’m terrific. This is Jim Freeze and this is The ConversAItion, a podcast airing viewpoints on the impact of artificial intelligence on business and society. The ConversAItion is presented by Interactions, a conversational AI company that builds intelligent virtual assistants capable of human-level communication and understanding.

In this episode we’ll discuss the importance of training AI systems to learn and adhere to social norms. We’ll answer questions like, what does harmonious human-machine living look like? And how important is it for us to extend social norms to machines?

We’re joined by Dr. Michael Littman, co-director of Brown University’s Humanity-Centered Robotics Initiative and a professor in the school’s Computer Science Department. Michael has over 20 years of experience working in machine learning, with a particular focus on reinforcement learning. Michael, welcome to The ConversAItion.

Michael Littman Thanks a lot for having me.

Jim Freeze Michael, you co-direct Brown’s Humanity-Centered Robotics Initiative, which focuses on developing robotic systems that co-exist harmoniously with humans. Can you talk a little bit more about the lab’s mission and what drew you to that area of research?

Michael Littman Yeah, absolutely. So we started off with the name human, because Human-Centered Design is a concept that’s actually really important; it’s about thinking about how people interact with artifacts, like robots or computers or shovels. You want to design those objects so that they work well with a person. We ended up shifting the name to Humanity-Centered because we also care about the impact of the technologies that we make on society as a whole, on humanity. And so we think of the Humanity-Centered Robotics Initiative as being about creating robots that work with people, for the benefit of people.

Jim Freeze So that’s actually a really subtle but great pivot. And actually that leads into my next question. What does a socially acceptable robot look like and what are the challenges associated with achieving a socially acceptable robot?

Michael Littman Yeah, so one of the things that we’re focusing on is this idea that people have reactions to things, and those reactions are colored by the culture that they live in and their own personal experiences. Here’s a concrete example that I was just telling someone about recently. So do you know about Boston Dynamics, the robotics company in Boston?

Jim Freeze I sure do.

Michael Littman Yeah. And they’ve put out some really compelling videos where their robots do amazing things like back flips and opening doors, and doing little dances, and walking outside, and running super fast. And people really enjoy these videos, they’re very compelling. 

In some of the videos, the engineers who create the robots will intervene on the robot to show how robust they are, and the interventions can involve things like kicking it, knocking it down, hitting it with a hockey stick, pushing the doorknob that the robot is trying to open, closing the door really abruptly or pushing the robot out of the way.

And what they’re showing is that in spite of these various interventions, the robots are able to perform quite well. But what people feel is that, well these robots are agents, they’re entities, they’re things in the world that have goals and by kicking them and knocking them down and forcing them, you’re actually doing harm.

The engineering perspective is typically, “No, they’re not alive, right? They’re just machines, and we’re just showing how robust the machines are,” but people still perceive agency and they still perceive kind of a living creature there. And in fact, much about the way the robots are designed activates those thoughts in people’s heads, right? They have legs, they have a directionality to their bodies.

So when you talk about the idea of socially acceptable robots, it really has a lot to do with how people perceive them, and whether people are comfortable with the way that the robots are behaving and the way that they appear. Do they accurately convey their own limitations and capabilities? Right. So, pre-computer, if you heard something that talked to you, there was every reason to think that you could talk back to it, right? Things don’t talk that you can’t talk back to, because the things that talk are all people. But now we have machines that can talk but not listen, and that’s weird, and in a sense, socially unacceptable.

Jim Freeze Well, it’s interesting. As you were describing how they were demonstrating the robustness of the robot, I was actually kind of having a visceral reaction thinking, “Wow, that’s just really mean.” So I totally get the point that you were making, which is, I think it’s a normal human reaction: when you see something that is so human-like, you don’t want to be rude to it, and you have kind of a human reaction. Right?

Michael Littman Exactly, so one of the things that we study, and a lot of this happens in the lab of Bertram Malle, who’s my co-director, is this: he does experiments where he creates almost like a whole rainbow, a whole diverse set of possible robot physical bodies, and then shows them to people and asks them questions like, do you think this robot would be good at having a conversation with you? Do you think this robot would be good for fetching coffee for you? Do you think this robot would be good for taking care of your elderly father?

People have these visceral reactions, as you say, to the form and the function of the robots, and we want to map that out more accurately so that we can feed that into the design process, so that the robots that get created actually trigger the right kinds of reactions in people.

Jim Freeze I’m curious, are there specific social situations that you or your fellow researchers are focused on right now? I know there’s some work being done in elderly care, how do you determine what you want to focus on from various social settings?

Michael Littman Right, right. It’s an interplay between what capabilities we have in current machines, what we think we could get to with a little bit of engineering, and what the real problems are in the world. And a lot of times our projects can flounder because the problem that we thought was the problem is not the problem that people actually have.

Jim Freeze I believe your research in particular is specifically focused on something I talked a little bit about in the introduction, which is reinforcement learning. Can you explain in layman’s terms what reinforcement learning is?

Michael Littman Sure. So reinforcement learning, and machine learning in general, is about getting machines to improve their behavior, improve their performance in some way, based on feedback. And the classical kind of machine learning that’s become very visible, I think, in society at large since maybe 2015 or so is mainly supervised learning. So supervised machine learning is where the feedback that you give the system is: in this situation, here’s exactly what you should do. Just copy this or generalize it, but just do what I’m telling you to do.

Reinforcement learning is a different problem. Reinforcement learning is the problem of trying to get the machine to do what it’s supposed to do, where the only feedback it gets is ratings, scores, evaluations. So you tell the machine, “Good job,” or you tell the machine, “That was terrible,” but you don’t tell it what it should have done instead. And part of its learning process is to kind of search through the space of possible behaviors to figure out something that will be seen as acceptable, that will actually obtain high reward.

I’ve been interested in reinforcement learning for my entire career, but the social norm part is kind of new to me and it does seem like a really nice fit, because when we’re learning how to behave appropriately in groups, we’re not getting a lot of very explicit feedback. We’re not being told moment to moment, “Oh, you know what? You should stand a little bit further from this person.” Or, “You should not raise your voice quite this much in the presence of this other person.” Or, “When you’re alone, that’s okay, but when you’re with people that’s not okay.”

You tend not to get that very explicit supervised feedback. Instead, as you say, you get nudges, you get signals of acceptability, right? Like people will frown at you or people will smile at you and there’s information in that signal, but it’s not very direct. So reinforcement learning algorithms that can actually learn how to behave based on this kind of overall sense of, “How am I doing?” The Ed Koch kind of version of that. “How are things going right now and how can I use that? And internally, how can I use that to actually be doing a better job?” That’s kind of the essence of the field of reinforcement learning.

Jim Freeze I’m old enough to know who Ed Koch is.

That’s a test for our listeners. Go figure out who … Google it.

Michael Littman How am I doing?

Jim Freeze Yeah, yeah, exactly.

But I think some of your research also is tied to the notion of explainability. Can you talk a little bit about explainability and why it’s important to human robot interactions?

Michael Littman A lot of systems are just supposed to be autonomous; they’re supposed to just do what they do. But the idea of a fully autonomous system that’s completely on its own rarely holds up in practice. In reality, people are almost always connected with the system in some way or another.

So for example, you have maybe a new system that’s going to be diagnosing chest X-rays or something like that, right? So it’s a nice computational problem to be able to take something like a chest X-ray that’s an image and to be able to do an analysis on it, and spit out something that says, “Hey, you should look really closely in this area because I’m seeing signs of cancer there.” So this would be a really powerful and valuable thing.

The problem at the moment is that the system isn’t completely autonomous. It’s delivering its diagnoses to people and people need to be able to query it. They need to be able to push back and say, “I don’t understand what you’re seeing here. Why would you call that malignant? To me it looks completely benign.” And current machine learning systems would just say, “I gave you the number. I have nothing more to say about this other than I’m pretty sure that this is a tumor.”

What we think is going to be really important moving forward is the ability for a system to be able to say, “Here’s what I think is going on,” and when the person pushes back to say, “Okay, here’s why I think that’s true. Here’s evidence from other examples. Here’s kind of a high-level description of the situation that maybe we can both agree is a strong signal about what we’re really looking at.” And this is not entirely without controversy. There’s a big Twitter battle going on on exactly this question of, “Should we prefer, for example, a less accurate diagnosis system that can explain itself to one that’s more accurate, but you have absolutely no idea how it does what it does?”

But the fact of the matter is these systems are not perfect. So when they make a mistake, and they’re often making mistakes, if you really have absolutely no idea why it said what it said, you can’t even distinguish a case where it’s made a mistake and it’s completely confident from one where it’s correct. Right? You just can’t tell. So I think people’s reaction to that is this feeling of discomfort, and it sort of colors all of the diagnoses that you see at that point.

Jim Freeze Well that kind of actually leads into my last question, which is as you look forward, how do you envision social robots developing over the next five to 10 years?

Michael Littman So social robots, I think the frontier there is more and more interaction with people throughout their life cycle, right? So it’s not just a matter of, you buy the robot and you bring it into your house, and it does what it does. Instead, you buy it, you bring it home and you have to interact with it. You have to teach it, essentially; you have to educate it about your problems and your objects, your house, your family. And that teaching relationship, that student-teacher relationship, is one that is very rich in human-human contact, and I think what we’re going to see in the years to come is an enrichment of the human-machine teaching relationship.

Jim Freeze Interesting. As you’re talking about that, I’m envisioning the robot and Will Robinson. Are you old enough to remember that? It’s from Lost in Space.

Michael Littman Lost in Space. They had a remake movie, so more people would have heard of that.

Jim Freeze Oh good, good, good. Yeah, I’ll have to watch the remake. Hey Michael, this has been fascinating. I really, really appreciate your time. Thank you so much. 

Michael Littman Well great. Thanks a lot for having me.

Jim Freeze Thank you very much.

On the next episode of The ConversAItion, join us for a discussion on how AI technology has impacted the home, with IoT thought leader Alexandra Deschamps-Sonsino.

This episode of The ConversAItion podcast was recorded at the PRX Podcast Garage in Boston, Massachusetts, and produced by Interactions, a Boston-area conversational AI company. 

That brings us to the end of this ConversAItion. I’m Jim Freeze, thanks for listening, we’ll see you next time.

[UPBEAT MUSIC]

Check out more episodes of The ConversAItion.