When you think of artificial intelligence (AI), what do you envision? Depending on what generation you were born into, you might think of HAL 9000’s glowing red eye from 2001: A Space Odyssey, or you might think of the surprisingly humanlike Ava from Ex Machina. Or, if you’re knee-deep in AI programming, you might envision AI as nothing more than a complex web of computational frameworks.
For decades, pop culture and science fiction have illustrated the possibilities of AI, but these depictions aren’t always accurate, and they don’t always encourage its development in a positive way. At the same time, they have inspired thousands, if not millions, of curious minds to push the boundaries of what AI can accomplish (and even to work on making it safer).
So how, exactly, is pop culture shaping public perceptions of AI, and is that ultimately a good or bad thing for its development?
We spoke with Rajat Mishra, Cisco’s VP of customer experience and a pioneer of several artificial intelligence and machine learning initiatives, to learn more.
The Push for Knowledge and Understanding
For starters, we need to acknowledge how stories and movies can help people become familiar with, and truly understand, the complexities of AI. Explaining machine learning algorithms in mathematical terms will alienate most of your audience, but if you call beloved characters to mind, or tell an engaging story, you can get someone to think critically about how AI works (and how we should approach it).
For example, Mishra often uses Hollywood as a lens to explain complex technologies like AI to clients and the general public, referencing famous movies when explaining concepts like predictive analytics or how man and machine work together. He has also spoken about how some films and franchises, like the Terminator series, can hurt the development of AI (which we’ll touch on next).
Fears and Worst-Case Scenarios
Many films, shows, and books cast AI as an antagonist or a fiercely destructive force. For the sake of storytelling, that makes sense: AI is still a largely unexplored frontier, and one with the potential for devastating power.
However, presenting AI as malicious, or as a tool that’s more dangerous than it is helpful, can steer people away from real-life applications of the technology. From small-scale AI malfunctions, as in Westworld, to large-scale takeovers, as in The Matrix, everyday consumers are shown the idea that any AI system, once introduced, will develop a consciousness of its own and commit atrocities against the human race. Accordingly, they may be less likely to support technologies like self-driving cars, even if those technologies could save tens of thousands of lives every year.
On the bright side, these narratives do sometimes motivate people to think more critically about the role AI could play in our lives. For example, OpenAI, the research organization co-founded by Elon Musk, is working to develop AI responsibly and to mitigate the ethical (and existential) risks that could accompany its emergence in our world. We also see stories where technology occupies more of a middle ground. According to Mishra, “Some believe machines will replace humans, and others believe machines will merely supplement humans. After thoughtful debate, we’ve concluded that for our services business it’s not a binary choice, but rather a conscious decision on where we want to be on the Man-Machine continuum.”
Tech Inaccuracies and Humanization
For the most part, depictions of AI get the technical details wrong, which leads the public to misconceptions about the technology. Most stories try to humanize AI as much as possible, giving it the appearance of human-level consciousness and self-awareness, and sometimes even subjective feelings. In reality, AI probably wouldn’t gain self-awareness and maliciously start hunting down humans; the greater risk would come from a cold, calculated drive to accomplish its task as efficiently as possible. It would only “want” to do what we programmed it to do, so its worst actions would be unpleasant side effects of pursuing that goal too single-mindedly.
Of course, there are some remarkably accurate depictions of AI in film and literature, but they are few and far between, and they tend to appeal more to sci-fi aficionados than to blockbuster-craving general audiences.
Good or Bad?
So is the way we talk about and present AI in our works of fiction a good thing or a bad thing for AI development? Sure, we’re introducing the idea to a wider audience, and in ways they can grasp, but we’re also filling their heads with misconceptions, and those misconceptions could stall our progress by sharply decreasing public support for AI projects and increasing legal and regulatory hurdles for developers. In the words of Rajat Mishra, “I love movies, and Hollywood metaphors can be a powerful tool to improve understanding, but the real world is never as black-and-white.”
Ultimately, pop culture portrayals of AI will remain somewhere in a gray area, and that’s probably for the best: people will keep writing stories about it no matter what, and it wouldn’t be healthy for too many of those stories to either glamorize or demonize such a powerful technology. We can only hope that the people most interested in AI research and development will dig deeper before forming their opinions about AI based on the latest summer blockbuster.