Karen Hao is the Senior AI Reporter at MIT Technology Review, where she demystifies AI and explores its complex ethics and social impact. Previously, she was a tech reporter and data scientist at Quartz, and before that, an application engineer at the first startup to spin out of Google X. Follow Karen on Twitter @_KarenHao.
Karen studied mechanical engineering at MIT and worked as an application engineer in Silicon Valley before becoming a reporter. Drawn to the far-reaching impact of AI on society, she turned to journalism to explore how the technology shapes everyday lives and to uncover its long-term implications in her writing.
Karen points out that tech media serves as the bridge between technologists and the public, and AI reporters have the important task of translating technical concepts for the average reader. With this in mind, accuracy in AI reporting is of utmost importance.
With the current hype around AI, however, more and more reporters are pursuing the beat, and some writers are under pressure to produce content on the topic before they fully understand it. This can lead to inaccuracies, misinformation and general misunderstanding.
This is complicated, too, by the emergence of seemingly credible, but inconsistently informed, public figures who claim authority over AI topics. For example, Karen says Elon Musk makes statements about AI that seem accurate to the everyday reader, when researchers in the field would never actually turn to him for direction.
The news cycle was already fast. Right now, it’s even faster. But Karen says this quickened pace hasn’t changed her primary goal as an AI reporter: to educate the public on how AI really works and how it’s shaping our lives.
Karen has always focused on understanding artificial intelligence as deeply and accurately as possible so she can gauge the significance of developments and news in the space. When she started out as a reporter, she dug into AI research and spoke directly with researchers about their work. She leans on that base of knowledge to evaluate the numerous PR pitches she receives daily from companies hoping she will write about their AI initiatives and products, and today she applies it to the many pitches she gets about AI in the context of the coronavirus.
Karen frequently encounters the assumption that AI offers a comprehensive solution to a given problem, which, she points out, it cannot. In the context of the coronavirus, for example, she says AI will not “solve” the pandemic, and we can’t expect it to, because the problems at the heart of the pandemic’s severity aren’t technology problems.
Karen notes that the biggest issues in the US right now are the lack of testing and severe shortages of personal protective equipment (PPE), problems that AI can’t really solve. You could use AI to optimize PPE manufacturing, for instance, but that isn’t the crux of the shortage.
Another example: Karen believes AI could play a role in vaccine development, as researchers increasingly apply machine learning to rapidly identify vaccine candidates. But this would be only one piece of a far more complex solution. The labs doing this work would also need funding, and there would need to be an entire pipeline for production and distribution, logistics that ultimately have little to do with AI itself.
With readers hyper-focused on the coronavirus crisis, reporters are working to produce content that addresses relevant questions and concerns, but they don’t want to burn readers out with coverage of a single topic (and a grim one at that).
Karen notes that as much as journalists respond to reader interest, they shape reader interest, too. Today, news organizations are trying to establish a balance between pandemic and non-pandemic content—it’s something the entire industry is grappling with.