The ConversAItion: Season 4 Episode 21

Leveraging Big Data for Good

Renowned AI and big data thought leader and analyst Susan Etlinger kicks off Season 4 of The ConversAItion with a discussion on creating human-centric, responsible AI. She’ll share insight into her work at Altimeter and the Centre for International Governance Innovation, and offer her take on why we should think of AI more like a dishwasher, why government oversight matters, and why we should be optimistic about AI’s future. You can find Susan on Twitter @setlinger.
“You wouldn’t say, we have cars and ethical cars, or dishwashers and responsible dishwashers—we just build them. But we don’t have norms for technology around ethics the same way that we do in medicine or in the law. So we’re at this point where these technologies have so much power, yet we haven’t come to an agreement around how we can and should be using them.”
Susan Etlinger

About Susan Etlinger

Susan Etlinger is an industry analyst at Altimeter, focused on AI ethics and big data. She also serves as a Senior Global Policy Fellow at the Centre for International Governance Innovation, an independent, non-partisan think tank based in Canada, where she researches the role and importance of AI and big data governance. Susan holds a Bachelor of Arts in Rhetoric from the University of California, Berkeley. She can be found on Twitter @setlinger.

Short on time? Here are 5 quick takeaways:

  1. Susan’s research focuses on how to make technology more human-centric.

    Susan’s time at Altimeter began in 2010 by looking at social media platforms—and the troves of data they produce—as they became increasingly central to our lives. As an analyst unaffiliated with any particular technology company, she was able to do an objective deep dive on how new technologies affect people, and begin envisioning what a truly human-centric approach to innovation could look like.

    She continues this work today as a Global Policy Fellow at the Centre for International Governance Innovation, investigating how decisions about technology manifest themselves in our daily lives—where and how we live, the products we use and more. She also studies organizational norms around data, leveraging her expertise to guide businesses in the ethical use of AI to ensure their technology serves its intended purpose.

  2. If data isn’t inclusive by nature, it’s exclusive by default.

    Susan believes that if we don’t actively work to make data inclusive, it will ultimately discriminate against society’s most vulnerable groups. Bias is often built into algorithms, from those that power social media platforms to those that inform our criminal justice system. And in some cases, it can be impossible to trace the origins of the bias that’s built into AI—whether along racial, societal or gendered lines—which is why it’s so important to understand biases in large language models. Since data has important consequences for business and society, we need to be sure that we’re creating meaning from data—not letting data create meaning for us.

    While the study of the intersection between technology and society certainly isn’t new, it has recently focused on examining potentially biased algorithmic decision-making. Susan cites key research on this topic, including Dr. Safiya Noble’s book, “Algorithms of Oppression,” Cathy O’Neil’s “Weapons of Math Destruction,” and the work of Timnit Gebru and MIT’s Joy Buolamwini. Along with her colleagues at the Centre for International Governance Innovation, Susan also recently published her own research about the women who inspire technology governance.

  3. As a society, we need to come to an agreement on how AI should be designed and used.

    Susan believes all technology companies should commit to responsible AI, but she acknowledges that this label can be a difficult pill for many companies to swallow—both because they are resistant to being told what they can do with their data, and because there are lingering questions around what “responsible” truly means. However, she maintains that any function powered by AI should require government oversight and a set of ethical principles and processes to ensure that the systems work the way they’re supposed to. 

    She’s a fan of the analogy that compares AI to daily household items. We don’t have “cars” and “ethical cars,” or “dishwashers” and “responsible dishwashers”—we just expect that they’re designed properly. The same should be true for AI. Technology like AI is incredibly powerful today, but as a society, we haven’t quite come to an agreement around how we can and should be using it, nor have we decided on the best nomenclature for it. Susan asserts that we need to agree on a shared framework for understanding AI—both how it works and how it can function correctly and responsibly—much like we have for earlier technologies. 

  4. Susan is an optimist when it comes to the future of transparency in AI.

    Susan has high hopes for a more transparent and ethical future of AI and data. Though consumer protection legislation moves slowly, we’re already seeing the beginnings of a foundation for more human-centric technology with laws like GDPR, which at its core is centered on translating people’s fundamental human rights into digital rights.

    In the same way that some yogurt companies now put the names of the cows the milk came from on the side of the container, Susan thinks more companies should be upfront about the development of their product and their ecosystem of vendors, partners and more. And while those yogurt companies are setting a high bar, she believes all executives should be striving for the same transparency when it comes to their technology. Susan doesn’t necessarily think that issues like bias can ever be fully eliminated—first, we’d have to eliminate them in society—but she firmly believes that there should be a set of principles that govern the way that data is used, and is optimistic about that future when it comes to AI.

  5. Additional Resources

    Check out some of the resources Susan mentions in the episode:

    Dr. Safiya Noble, “Algorithms of Oppression”
    Cathy O’Neil, “Weapons of Math Destruction”
    The Gender Shades Project, by Joy Buolamwini and Timnit Gebru
    Susan’s recent piece for the Centre for International Governance Innovation on the women who inspire technology governance

    For more from Susan, follow her on Twitter @setlinger.


TRANSCRIPT

EPISODE 21: SUSAN ETLINGER

Jim Freeze Hi! And welcome. I’m Jim Freeze, and this is The ConversAItion, a podcast airing viewpoints on the impact of artificial intelligence on business and society. 

[UPBEAT MUSIC]

It’s safe to say a lot has changed in the last year. But amid all the uncertainty we’ve experienced, the growing impact of AI on business and society has remained constant. We’re excited to be back for a fourth season of The ConversAItion to explore the most exciting applications of AI today, and some of the industry’s most pressing issues.

From conversational mental health bots to AI-powered personal stylists, we’ll be diving deep into both topical and practical uses of AI, and the implications of this technology for society at large. I’ll be speaking with AI leaders who are pushing the boundaries of AI’s capabilities and finding new ways to make an impact. 

You’ll likely recognize many of the names in our lineup—and we’re excited to introduce you to the ones you don’t already know. Each guest is doing important work to advance our understanding of AI, and the value it unlocks. Whether you’re out driving, cooking, or listening from your home office, it’s great to have you tuned in. 

In today’s episode, we’re joined by Susan Etlinger, a renowned thought leader in AI and big data. Susan currently works as an analyst at Altimeter, where she is focused on researching the ethical use of AI. She also serves as a Senior Global Policy Fellow at the Centre for International Governance Innovation, a think tank addressing global issues at the intersection of technology and policy. 

Susan, we’re so excited to have you on the show. Welcome! 

Susan Etlinger Thank you. Thank you so much.

Jim Freeze So Susan, you’ve had a long career in tech, but you actually used to work in tech communications. What prompted the transition to research?

Susan Etlinger Well, it’s funny. I had worked in tech communications mostly because I graduated college with a degree in rhetoric and I had no idea what to do for a living. And living in San Francisco, in the Bay Area, tech is all around us.

Jim Freeze A degree in rhetoric?

Susan Etlinger Yeah. Yeah. I studied rhetoric. I was really interested in classical rhetoric. I was interested in how arguments are made, how people persuade other people to do things, to think things, to change their frame of reference. And the thing I particularly loved about rhetoric was that it wasn’t confined to a single field of study. So I could look at the rhetoric of philosophy, of history, of language, of law, pretty much any topic; any area of the humanities has its own rhetorical grounding. So I was really intrigued by that, and that’s what I studied. As you can imagine, there are not that many jobs for professional rhetoricians. So when I finished, I thought, “Okay, what am I suited to do?” And that was a very quick crash back to earth. And I started working in tech as a tech writer.

Jim Freeze That makes total sense. Well, I love career pivots, so I think that’s interesting. And I think your educational background is fascinating. I think you’re the first person I’ve ever met who has that educational background. But the more I think about it, the more I think it’s very relevant to our times, and relevant to the conversation we’re about to have. So, at Altimeter, what is your current research focus?

Susan Etlinger Yeah, so it’s funny. At Altimeter, one of the things I’ve really enjoyed about working here is that there’s a fair amount of leeway for me to explore things that are untested. And so I started actually by looking at data, and particularly social media data. As social media platforms became more and more common and used for more and more things in the mid-2000s, I became really interested in the idea of, what could we learn about people, about business, about trends from posts, from likes and shares and all of those behaviors? And I became particularly interested in how business systems make sense of the unstructured stuff: the stuff that doesn’t fit easily into databases the way likes and shares do, but the actual language that people use to explain things, and later, images and videos and GIFs and things like that. So that led me into big data more broadly, because of course, social media data doesn’t exist in a vacuum—looking at other kinds of data that were being created, user-generated content and all that sort of stuff. 

And then from there, I started thinking a lot about the norms that organizations—and frankly, app providers and what we think of now as social media platforms—were building around how to make sense of all of that. And I felt that it required some study, especially as we use that data to draw conclusions around business, to draw conclusions around society, and especially as it affects everyday people in their lives.

Jim Freeze Yeah. That’s very interesting. There’s a part of your research I’d love to dig into, which is your work around a concept called responsible AI. Until this point, there’s been a lot of conversation around responsible AI, and you just touched on it in the context of social media shortcomings, but I think you’ve argued that all large tech companies should bake it into their business. Could you talk a little bit about the importance of responsible AI in all tech initiatives?

Susan Etlinger Yeah, sure. So this has been a really interesting journey that we’ve all been on. I started thinking about data privacy, and more than data privacy, responsible data use, right? Or ethical data use, back in, I want to say, 2013 or 2014, around that time. I had actually gone to South by Southwest and I was sitting in the audience for a speech by danah boyd from Data & Society, one of the founders of Data & Society. And she was talking about Facebook, and about how Facebook was making these changes to the platform that were really changing the expectation that people had around how their data was being used. And she told a story about a young girl and her mother who had been victims of domestic violence, and who had to change their names and move away and essentially live this very quiet life. And this happened when the girl was very, very young. And then she hit 13 and wanted a Facebook account. And the mother was sort of panicked about it, but said, “Okay, well, if we lock down the account, make it friends-only, and use this new name and everything else,” then this would probably be okay to do. And then that was one of the times when Facebook sort of, without any announcement, all of a sudden made all profiles public.

So then there was a period of like two or three weeks, I think, where this girl’s profile was public and they didn’t know. And nothing ended up happening, but the idea of the panic that they must have felt really hit me. And I started thinking about, who decides all this stuff?

Jim Freeze Yeah. It’s a great question.

Susan Etlinger So as we get into responsible AI or ethical AI or whatever we want to call it, the nomenclature is really a problem for a lot of different reasons. One is that the word ethical is just a difficult pill, I think, for a lot of companies to swallow: it’s immediately associated with compliance and legal issues, and it sort of has a very second-grade-teacher kind of feel to it, right? That we’re telling you how to use your AI. Responsible isn’t much better. Actually, neither of them is that great, right? Because on the one hand you have technology, and then you have responsible technology.

I heard somebody say this not so long ago, and I thought, this is really actually a brilliant point. You wouldn’t say, we have cars and then we have ethical cars, or dishwashers and responsible dishwashers. We just build them.

Jim Freeze That’s a great analogy.

Susan Etlinger But we don’t have norms for technology around ethics the same way that we do, for example, in medicine-

Jim Freeze Mm-hmm.

Susan Etlinger … or in the law. And so as a result, we’re at this point now where technology has so much power, and yet we haven’t really come to an agreement, whether internally or from a legislative point of view, around how we can and should be using these technologies.

Jim Freeze That’s really fascinating, and it connects to something even more interesting in your research. I recently watched your 2014 TED Talk, where you discuss the responsibility that falls on all of us to create meaning from data rather than allowing data to create meaning for us. I think that’s a really important concept. One thing you noted that stood out to me is that if data is not inclusive by nature, it’s exclusive by default. Would you share a little bit about that philosophy, and how we can use that approach to ensure that technology is serving its intended purpose, with data in particular?

Susan Etlinger Yeah, sure. And to do this, I really have to be scrupulous around making it clear that these issues around the impact of data and technology on society are not new questions. I mean, these are questions that scholars in STS, which stands for science, technology, and society—historians of technology, philosophers, anthropologists, sociologists of tech—have been studying for, at minimum, 30 or 40 years. And arguably, if you look at tech going way back in history, hundreds of years. And so, this idea of how humans and technology interact with each other is something that industry is starting to pick up on, but there’s a foundation of scholarship by people whom everybody should be reading, frankly, around how, for example, algorithmic decision-making expresses itself in search engine results and can be dehumanizing, particularly to Black people, people of color, non-binary people, and any sort of marginalized or historically vulnerable group.

And so one of the great books on this topic is by Dr. Safiya Noble; it’s called Algorithms of Oppression. There’s also a phenomenal book called Weapons of Math Destruction by Cathy O’Neil, who actually has a company that does algorithmic audits.

Jim Freeze Yeah.

Susan Etlinger There’s the work of Joy Buolamwini from MIT and Timnit Gebru, until recently at Google (which is a whole other story): a project they did several years ago called the Gender Shades Project, where they looked at the way that facial recognition technology performs so much less effectively on Black people and people with darker skin tones, and the implications of all of that. And so any bias in our society—and I’m not talking about statistical bias, because of course you need bias to build a model so that you don’t include everything and you have a model that’s functional and does what you need—but any sort of social bias (societal, racial, gender, ethnic, political, et cetera) is going to be built into the algorithms in such a way that, because of the nature of machine learning and artificial intelligence, it’s kind of impossible, in many cases, to understand where it came from.

And so the paper that actually precipitated the firing of Timnit Gebru (although Google called it a resignation) and her colleague Margaret Mitchell was about the dangers of large language models and how really intractable it is to try to pull all the bias out of, or understand all the bias in, these massive, massive language models. And as a result, that bias ends up finding its way into things like the apps that we use, the search that we use, the audience segmentation that we see, the way that our grades and our likelihood of getting a loan are determined, and so on and so forth. Criminal justice systems too, certainly, around parole and sentencing. So it’s very pervasive, and there’s so much incredible scholarship on it, primarily by women of color and Black women.

Jim Freeze That’s fantastic. I’m glad you mentioned some of that research and some of those books. When this episode goes live, we’ll make sure in the show notes that we provide some links to some of those books that you’ve highlighted, which I think will be very helpful to listeners. 

I was laughing a little bit when you said “Math Destruction,” because I was a math major and it’s just so true. I mean, I certainly studied statistics, and we see it all the time, where people use data and statistics to advocate a point of view. But you can take the same data and statistics and represent completely different points of view based on how you represent that data. So the concept you’re talking about very much resonates with me. I just think it’s a fantastic area of research that you’re focused on. Speaking of which, you also serve as a Global Policy Fellow at the Centre for International Governance Innovation. I’d love to hear a little bit more about what that role entails and the impact of your work as a Fellow.

Susan Etlinger Yeah. It’s been just a lovely experience for me because I came straight out of the business world. I’m not an academic, I’m not a policy person, I’m not a Canadian, or any of those things. And when they approached me, I was just shocked. I thought, “Okay, number one, you have to understand, I don’t have a PhD and I’m not Canadian.” But the thing that I had been doing over the course of the past, I don’t know, six or seven or eight years is really trying to understand the way that technology decisions are manifesting themselves in terms of people’s daily lives, consumer products, essentially the way we live, where we live, how we live, all those types of things. And so really what they were looking for from me, and what they have been looking for from me, is a bit of an industry perspective.

And because I work for a consulting and research firm, I’m not a Google employee, I’m not a Microsoft employee, I’m not a Facebook employee. So I don’t have skin in the game in quite the same way that they do. And as a result, I’m in a fortunate position to be able to look across these various models and try to help figure out how we might approach some of these questions, helping to make technology more human-centric in a positive way from a business and technology standpoint.

So I’m a bit of a translator in some ways for the Centre for International Governance Innovation, a bit of an odd duck, but they’ve just been so lovely and welcoming in terms of bringing me in, and I’ve learned so much. Actually, we just published a piece today (I didn’t even know it was going to go live today), by two of the other Policy Fellows and me, about the women who inspire us in terms of technology governance. Maybe we could link to it in the show notes. In fact, I named a lot of the women that I named earlier, and maybe a couple of others as well.

Jim Freeze Oh, fantastic. Hey, have you personally seen some meaningful developments in AI policy or regulation in the last few years?

Susan Etlinger Well, I was hopeful when GDPR passed. I mean, GDPR is of course a bit of a nightmare from an execution standpoint, but the heart of GDPR is really around people’s fundamental human rights and how those human rights translate into digital rights.

Jim Freeze Yeah.

Susan Etlinger And so things around privacy and justice, all of these topics that the United Nations and, of course, the European Union (or, in the UK’s case, the artist formerly known as the European Union, since it has exited) thought of in the context of World War II and the Universal Declaration of Human Rights. And so I really loved that framework in a lot of ways, because I feel that it grounds itself in what’s essentially important. Legislation and regulation, they’re slow. And here in the United States, our government has been, let’s just say, distracted for the last four or five years. But these issues have been percolating up, and there certainly have been privacy laws passed, like the CCPA and various others.

And so I have high hopes. I’m going to say high hopes, as an optimist. High hopes that we do start to pass more consumer protection type laws that make explicit, and make explicitly unacceptable, the kinds of things that have been happening, particularly to vulnerable and marginalized populations. And at the same time, we also see technology companies that are building, for example, systems to try to detect and remediate bias. I don’t think you can ever truly eliminate bias because, of course, there’s no perfect ideal scenario. And of course, data continues to feed these algorithms. And so until you fix it in humanity, you can’t fix it completely in the algorithm, so it’s a process. They’re looking at interpretability, the ability to understand how an algorithm came to a conclusion, because sometimes that’s not entirely clear; most times it’s not entirely clear.

And they’re putting processes in place around impact assessments, things like that. For example, when you do industrial research around AI and you build a model, there’s the idea that you would have a data card with the provenance of your data. Very much the way that we have provenance and we have information on a cereal box. There’s even a yogurt that tells you the name of the cows, those types of things. And those are really important, right? Because if you’re a global company, a public company, you have vendors, you have suppliers, you have partners, and it’s not only important what you do as your own company, but what that ecosystem does as well. And so if you say, “Well, you can’t use face recognition for, let’s say, criminal justice applications,” but other people are scraping that data and using it in a secondhand way, that’s clearly not the spirit of the law. So there’s a lot to be done, and it’s a pretty fraught time right now.

Jim Freeze Yeah, it is. It’s interesting. You kind of rolled right into my last question, which was really around, from your perspective, how can the industry as a whole do better? And I think what you just articulated was fantastic. It was, I think, a great punch list of how the tech industry as a whole can do better. It’s been fascinating talking to you. I very much appreciate your time, Susan. And thank you for the work you do.

Susan Etlinger Oh, no, you’re very welcome. And I really hope, well, we’re in a bit of a crisis mode right now, especially with what’s been happening at Google. So I hope this is at least a reckoning point for people to understand that technology is just a tool, however sophisticated it is. It’s really the impact on people, and how we treat the people who raise these issues, that is critically important. So I’ll-

Jim Freeze Amen.

Susan Etlinger So I’ll get off my soapbox now.

Jim Freeze That’s okay. I’m glad you were on your soapbox. Thanks for being on The ConversAItion today.

Susan Etlinger My pleasure. Thank you.

Jim Freeze That’s all for this episode of The ConversAItion. Join us next time when we speak with Kendra Gaunt, AI and Data Product Manager at The Trevor Project, the world’s largest suicide prevention and crisis intervention organization for the LGBTQ+ community. We’ll discuss how the organization leverages AI to train counselors and the growing role of technology in the mental health space.

This episode of The ConversAItion podcast was produced by Interactions, a Boston-area conversational AI company. I’m Jim Freeze, and we’ll see you next time. 

[UPBEAT MUSIC]

###

Check out more episodes of The ConversAItion.