Thomas Arnold is a researcher in the Tufts Human-Robot Interaction Lab. With a master’s in theological studies and a PhD in Religion in progress at Harvard, Thomas is always thinking about the ethical and philosophical questions of how social robotic systems should be designed. His research focuses on the ethics of human-robot interaction and the morality of artificial agents and autonomous systems. Follow Thomas on Twitter @Thomas__Arnold.
At the Human-Robot Interaction Lab at Tufts, experts in diverse fields come together to study robot design from numerous perspectives. HRI brings together thought leaders not only in computer science and robotics but also in religion, linguistics and psychology.
The issue of welcoming robots into our society is so complex that no one person is going to have the full picture. Collaboration is the key to productive dialogue around robotics design that’s as responsible and beneficial as possible.
Ultimately, the goal of the HRI Lab is a roundtable that considers the realities of how people interact with robots, taking into account social norms, best design practices and the ethical, philosophical and legal questions of what exactly robots should be doing. You can read more about the HRI Lab in this article: “AI is smart. Can we make it kind?”
Despite popular Terminator-like depictions of robots, researchers today are more concerned with a robot’s mundane actions, like moving successfully through a room or knowing when it’s appropriate to ask a guest to take their coat.
Human interaction is highly nuanced. Think of the subtleties of communicating with a waiter at a restaurant, for instance—pointing to an item on the menu to order, putting your hand over your wine glass to prevent a second pour, signaling for the check. Teaching robotic systems the intricacy of socially appropriate communication is a focus area in responsible design and will be for years to come.
Researchers at Tufts HRI Lab consider explainability a fundamental attribute of responsibly designed robotic systems.
If explainability is the ideal, black-box systems are exactly what we should avoid. It should never be unclear how or why a robot does what it does.
In general, researchers are optimistic about the direction these conversations are headed. There’s a growing recognition of the importance of carefully selecting the attributes of an AI system, like its personality, voice and gender.
That said, responsible design is, and always will be, a horizon. We will never create a robot that reflects completely responsible design. It’s something to work towards, but there will always be room for improvement.
As Thomas said, “Ethics are not ornate toppings to consider at the end of the design process like sprinkles on a sundae.”
Ethics conversations need to start at the very beginning of designing robotic systems. Ethics should be a fundamental consideration when thinking about what purpose a robot will serve and what it is being designed to do.