The ConversAItion: Season 1 Episode 6

Ethical Human-Robot Interactions

Jim explores ethical robot design with Thomas Arnold of the Tufts University Human-Robot Interaction Lab. Recorded in front of a live audience, they discuss the research around moral, ethical and logistical considerations when designing for human-robot interaction.
“Ethics are not ornate toppings to consider at the end of the design process, like sprinkles on a sundae. Ethics need to start at the very beginning, when thinking about what an automated system is and what it’s being designed to do.”

About Thomas Arnold

Thomas Arnold is a researcher in the Tufts Human-Robot Interaction Lab. With a master’s in theological studies, and working towards his PhD in Religion at Harvard, Thomas is always thinking about the ethical and philosophical questions of how social robotic systems should be designed. His research focuses on the ethics of human-robot interaction, and the morality of artificial agents and autonomous systems. Follow Thomas on Twitter at @Thomas__Arnold.

 

Short on time? Here are 5 quick takeaways:

  1. The Tufts University HRI Laboratory is an interdisciplinary robotics lab focused on all things human-robot interaction.

    At the Human-Robot Interaction Lab at Tufts, experts in diverse fields come together to study robot design from numerous perspectives. HRI brings together thought leaders not only in computer science and robotics, but also in religion, linguistics and psychology.

     

    The issue of welcoming robots into our society is so complex that no one person is going to have the full picture. Collaboration is the key to productive dialogue around robotics design that’s as responsible and beneficial as possible. 

    Ultimately, the goal of the HRI Lab is a roundtable that considers the realities of how people interact with robots, taking into account social norms, best design practices and the ethical, philosophical and legal questions of what exactly robots should be doing. You can read more about the HRI Lab in this article: “AI is smart. Can we make it kind?”

  2. HRI researchers are worried about robots tripping on carpet, not taking over the world.

    Despite popular Terminator-like depictions of robots, researchers today are more concerned with a robot’s mundane actions, like moving successfully through a room or knowing when it’s appropriate to offer to take a guest’s coat. 

     

    Human interaction is highly nuanced. Think of the subtleties of communicating with a waiter at a restaurant, for instance—pointing to an item on the menu to order, putting your hand over your wine glass to prevent a second pour, signaling for the check. Teaching robotic systems the intricacy of socially appropriate communication is a focus area in responsible design and will be for years to come.

  3. Explainability is key to responsible design; an AI system’s internal workings shouldn’t be a complete mystery.

    Researchers at Tufts HRI Lab consider explainability a fundamental attribute of responsibly designed robotic systems. 

     

    Where explainability is the ideal, black-box systems are exactly what we should avoid: it should never be unclear how or why a robot does what it does.

  4. There is growing recognition of the need for ethics in artificial intelligence and robotics, but fully responsible design will never be 100% achieved.

    In general, researchers are feeling optimistic about the direction conversations are headed. There’s growing recognition of the importance of carefully selecting the attributes of an AI system, like its personality, voice and gender. 

     

    That said, responsible design is, and always will be, a horizon. We will never create a robot that reflects completely and totally responsible design. It’s something to work towards, but there will always be room for improvement.

  5. Ethics must be considered from day one—ethical design will never work as an afterthought.

    As Thomas said, “Ethics are not ornate toppings to consider at the end of the design process like sprinkles on a sundae.”

     

    Ethics conversations need to start at the very beginning of designing robotic systems. Ethics should be a fundamental consideration when thinking about what purpose a robot will serve and what it is being designed to do.

Read the transcript

Jim Freeze Hi, this is Jim Freeze. I’m the chief marketing officer at Interactions, a conversational artificial intelligence company, and this is The ConversAItion, a podcast airing viewpoints on the impact of AI on business and society. 

 

[UPBEAT MUSIC]

 

Today we’re doing something a little different. We’re in the basement of a restaurant and all of our podcasts so far have been recorded in a studio. We’re doing this one live.

 

We had a willing victim to join me—

 

[LAUGHTER]

 

Thomas Arnold. He’s a researcher at the Tufts University Human-Robot Interaction Laboratory and a lecturer in the university’s computer science department. Thomas is an expert in ethical robot design with a background in theological studies and also the Classics. Thomas, welcome to The ConversAItion.

 

Thomas Arnold Thank you. Thank you for having me. 

 

Jim Freeze You approach AI from a very unique perspective. You have a master’s in theological studies and you’re working towards your PhD in the study of Religion—both at Harvard. First, can you explain what the Tufts Human-Robot Interaction Lab is and how you got involved with it? 

 

Thomas Arnold Sure. So the Human-Robot Interaction Lab is a kind of intersection between computer science, cognitive science, psychology, linguistics, policy, philosophy and ethics. It really is a roundtable where designing social robotic systems attempts to integrate considerations of social norms, of best design practices and the realities of how we as people interact with robots. 

 

And then we have considerations of ethics and philosophical questions about what robots should be doing. How should they be designed? And how does that link up with issues in law, policy and so on? It really is this kind of intersection of a number of fields. 

 

Jim Freeze It’s quite an eclectic group that you work with. Can you talk a little bit about the importance of having so many different backgrounds, of having that eclectic mix of individuals working in this robotics lab?

 

Thomas Arnold Yes, so this is really the exciting point for me, coming from a religious studies background. If you had told me seven years ago that I’d be working at a robotics lab, I have no idea what I would have thought or how I would have projected that. But the concrete nature of what robots should do quickly gets you into a situation where you realize you need multiple perspectives. 

 

For me, coming from religious studies, it was evident that there was a role to play: to think about norms, what people value, what people hold sacred, what kinds of things would be a violation on the part of a robot, and what things should be safeguarded. That makes for a seat at the table alongside people who are thinking about motion planning, how a robot arm should best move to pick something up. All those things can come together in this really charged way, where you have to have a certain amount of intellectual humility and realize there’s no one person that’s going to have the whole picture, the comprehensive view of everything. We really have to be collaborative and have a productive dialogue around it. 

 

Jim Freeze One of the things I’ve learned, having worked in AI for a while now, is that there are a lot of misperceptions about AI. It’s difficult to explain to people who are not familiar with artificial intelligence what I do and what the company does, and I always try to explain it in the context of something they know. I’m curious, when you tell people what you do, what is their initial reaction?

 

Thomas Arnold Yeah, so I kind of made a real turn at the cocktail party scene, from dry philosophical texts to: “Ooh, you work with robots and Asimov and Blade Runner and Terminator.” So it’s become kind of a vocational duty to re-watch, or watch for the first time, some of the movies, just to have some of the lingo and to be able to connect the ordinary—what I would consider pretty ordinary—realities of where robots are, in a lot of cases, with some of the fantastic projections. I find myself usually having to ground things, saying, “No, I have to worry about a robot tripping on carpet, not taking over the world at this point.” 

 

While there are some definite large-scale AI issues, we can’t forget some of the almost comical failures that could happen, and that needs to be kept in perspective, especially because of the sensitive and intimate nature that some of these systems are going to have. As with Interactions: just the ordinary conversation, just the subtle details of even what I’m doing now, saying “uh” or “um,” you know, what that represents about us as people with bodies. 

 

Jim Freeze Yeah, or just motions with your hands and your face and all of that—human conversation is quite complex. 

 

So, I am curious—you’ve spoken a lot, and I’ve read some articles. One here, which I think is really interesting—I would recommend it—is in Tufts Magazine: “AI is smart. Can we make it kind?” 

 

But one of the things I’m curious about is that you’ve spoken a lot about responsible design and instilling ethical standards. Can you talk a little bit about that? In particular, what does responsible design in robotics look like? 

 

Thomas Arnold Well, I think, you know, responsible design is a horizon, in the sense that it’s something that will never be fully achieved; we’ll never be able to say, “We’ve got it! We’ve completely, utterly responsibly designed something.” I think for our lab’s work, one of the big issues that we really try to work toward in terms of a standard is systems that are explainable, systems that can actually give an accurate, accountable explanation of what it is they are doing. 

 

Jim Freeze So I’m intrigued by this notion, something I read about called transparent “thought processes”—I’ll ask it this way: how important is it that the design be done in a way that humans understand why the robot is doing what it’s doing? Which drafts off, I think, the concept you were just talking about.

 

Thomas Arnold The types of systems that we’re working on in the lab are ones that are going to be interactive in the real world. And that means that they are not just going to be making a decision for a particular problem where human beings can observe from the outside, as we saw with the game of Go. 

 

When AlphaGo from DeepMind defeated the human world champion, it made some moves that were so unexpected that even Go experts were wondering: “What? Why did it make that move? What was going on?” And in that case, within the confines of a game, AlphaGo didn’t need to explain. The only criterion was: did it win? 

 

But in social contexts, where we’re coordinating our behavior with one another—think of a street, or even this restaurant that we’re speaking at today—you need to be able to understand and coordinate with others what it is you’re doing, being able to communicate what it is you’re doing, so that they in turn can react. The Go board doesn’t involve people that could be hit or need a glass of water, but in our social, real-life, interactive settings, that’s a higher standard. 

 

And so being transparent is not about the robot being all-knowing or having superhuman abilities. It needs to be able to coordinate and assist and cooperate in a way that is accessible. 

 

Jim Freeze Something you said a few minutes ago about designing robots and the subtleties of human communication. You know, if I’m sitting in a restaurant and a waiter comes by to pour more wine in my glass and I put my hand over the glass—anybody who knows me knows that would never happen, but assuming I did—the waiter would know, “No, I don’t want any more wine.” How do you design for that kind of human interaction, that kind of subtlety?

 

Thomas Arnold I promise this was not planned beforehand. Jim, you put your finger on exactly the scenario that we’re working on: the situation of a server at a table and the issue of consent. And these very subtle ways in which—maybe you put your silverware down: “I’m not done yet.” That little give-and-take that we have with the server: “Why do you want to take away my food? I’m not done yet,” or maybe you are waiting for them to take it. 

 

This is the type of subtlety that we’re really interested in, in terms of what it would mean for a robotic system to function well. And on the serving of a drink, we just wrote a paper some months ago on consent, which has kind of been an under-represented issue in human-robot interaction. 

 

Where do we consent to a robot starting to interact, starting to ask us questions, starting to do things in our personal space? That’s already really fraught, and so that’s really a research area for us: how, in a certain context, can a system understand that this is okay, this is expected?

 

You know, the physical act of taking your coat would be difficult for a robot, but that’s the type of gesture that, in a certain context, yes, it’s natural that you would want to take my coat and then, you know, hang it somewhere. But outside—maybe even 10 feet outside of a restaurant or what have you—it would be highly inappropriate and a very weird thing to say: “Can I take your coat?” So this is exactly the type of subtlety that we’re interested in representing, and really asking, how should a robot system fit into that situation?

 

Jim Freeze Are there any companies, or any research or otherwise you’ve seen, that you’re really excited about because you think: wow, these guys are really getting it relative to how to think about robotic design in an ethical fashion?

 

Thomas Arnold I think, you know, there have been. I won’t cite any companies right away, but I will say that in the companion robot space, I do think, in my conversations, there is a growing recognition of subtleties in terms of what kind of personality, what type of voice, what issues of gender and gender roles need to be incorporated. 

 

I was talking with a colleague of mine in human-robot interaction, Julie Carpenter, who’s in San Francisco and who’s worked with a Danish team on a genderless voice assistant, really attending to the character of a voice and not necessarily typecasting an assistant as a typically female voice. So I think the field in general is starting to understand some of the subtle dynamics and realizing that this isn’t something ornate to put on at the end, or to consider ethics as a kind of sprinkle on the sundae; ethics needs to start from the very beginning of what a system is and what it’s being designed to do. So, I think in general I’m encouraged in terms of where things are headed. 

 

Jim Freeze Well, yeah, one of the things that struck me when I was listening to you talk about the notion of consent—we deal with developing intelligent virtual assistants, and we deal with trying to advise customers on ways to do that that are respectful of their consumers. So we help them design a persona that’s consistent with their brand, but there are often questions about, are you stepping over a line sometimes? 

 

So as an example, when we do the introduction, we always advise our clients to have the intelligent virtual assistant say: “I am an intelligent virtual assistant. I understand complete sentences. You can speak to me like you would speak to another person.” We advise our customers to say it’s an intelligent virtual assistant because we’ve seen research, and our own primary research has indicated, that consumers don’t want to be fooled into thinking they’re talking to a human, notwithstanding that sometimes they do think they’re talking to a human. I’m wondering if there are general lines, or general pieces of advice you provide, regarding “that’s going too far, you’re stepping over the line.” It’s obviously probably situational, but I’m curious as to your thoughts on that?

 

Thomas Arnold I mean, that’s really interesting. I would applaud that kind of consideration around giving a kind of primary notification that it’s an intelligent virtual assistant. I would say, from our lab experiments and the things that we see, one piece of advice is that that initial identification or clarification will not prevent ongoing attributions and ongoing reactions between the person and the IVA. We’ve had studies where you can tell someone: this robot only understands three commands. But we have a series of studies where the robot says “no” to a certain command, and once the robot says “no” to someone, you will have participants immediately start negotiating, immediately start using language that you’ve told them, again and again, it does not understand. 

 

So I think part of the importance of HRI is to say that how we think, in the abstract, we will interact with a robot is not in fact how we do interact. We have way too many interpersonal language instincts, social instincts, that we’re just not going to be able to suppress. And so one thing I would say is, the longer a conversation goes on, the more and more likely it is that a person will slip back into attributing things to the IVA, implicitly assuming that the IVA will understand things even without necessarily being conscious of it. There may be attributions or expectations that will be harder to fulfill. 

 

Jim Freeze Well, it’s interesting you say that, because even though we once again advise customers—and they do, they generally say: “This is an intelligent virtual assistant, I understand complete sentences”—there are recordings of consumers interacting with it and asking, towards the end of the transaction, “Is this a human? Are you a live person?” 

 

And we’ll occasionally get, at the end of a successful transaction: “God bless you.” You know, so it’s not uncommon at all that it initially starts off perhaps in a more robotic fashion in terms of responses, because unfortunately consumers have been trained to act like robots when they’re dealing with technology. 

 

How can companies infuse ethical best practices into the work they’re doing as they leverage this technology in their designs?

 

Thomas Arnold I think there are a number of ways to go about it, but you know, one approach is really to look at cases. In the case of Alexa, there’s a great article in Wired called “The Terrible Joy of Yelling at Alexa.” It’s about a person who, along with their spouse, would yell at Alexa, curse at Alexa, vent after a long day of work. The problem came when their two-year-old started doing the same thing, using the cuss words, using the same language, because the two-year-old saw what their parents were doing. And so all of a sudden the ethical issue was at the center: “Oh, how are we treating Alexa? What do we call Alexa? How do we repair this?” 

 

So it’s not a problem that was necessarily foreseen, but it’s a good case of: let’s really have the moral imagination to flesh out what the interaction actually is. Thinking about costs is going to be there. Thinking about liabilities is going to be there, from legal. Thinking about corporate branding is there. All of those issues are in the mix. 

 

But really, I think you need to think of it as part of the engineering task: the imagination of the interaction. Really fleshing it out, and using ethics as a conversation to help do that in a better way. And I mean better in a number of different senses: you’re going to have a better product because you really fleshed it out and imagined things in a more robust fashion, and when you do that, the ethics become much clearer and much more evident earlier on.

 

Jim Freeze This has been fantastic, Thomas, I really appreciate it. You know, one of the things that I love about what we’re trying to do with The ConversAItion is to bring together people who have a very unique perspective, bring that perspective to the intersection of artificial intelligence and society, and talk about things like we did today around ethics and design. Really appreciate it, Thomas. Thank you very much. 

 

Thomas Arnold Thank you so much, Jim, and thank you to Interactions for having me. I appreciate it. 

 

Jim Freeze And that’s a wrap. 

 

[UPBEAT MUSIC]

Check out more episodes of The ConversAItion.