Technology that changes the way we live has always been met with suspicion, especially information technology. When new tech supplants less efficient ways of engaging with the world, it tends to attract doomsday proclamations about how it will harm humanity. Even the printing press was said to be a harbinger of our cultural demise. In the long term, though, many information technologies have contributed positively to the modern world.
Artificial intelligence is in its infancy, so in one sense, the distrust it has received is no different from that of any other game-changing technology. Over the next 25 to 50 years, AI is likely to mirror the trajectories of the printing press, early computers, and cars: Initial applications will become increasingly powerful. Pundits will herald its incredible potential while critics warn of catastrophe. Once the undulations smooth out, AI will likely contribute a net positive gain to society.
To be clear, that's the short-term view of narrow AI, which is good at specific tasks. Today's AI is still rudimentary and is generally only good at performing one thing at a time. I believe narrow AI's capacity over the next decade or so has been overestimated, but I do think we will see valuable benefits in the near future.
General AI is where things become much more interesting, and potentially perilous. That's when we'll see machine learning that is truly human-level or beyond: a program that can perform a range of cognitive tasks as well as or better than a human. Once general AI arrives, comparisons to previous productivity-enhancing technologies are no longer relevant, because we simply can't predict what will come from it.
Even though AI in the short term will be beneficial to society, it will quickly outgrow its obvious positive impact. We will face security, informational, and existential threats within our lifetime as AI becomes smarter unless we get serious about the risks as well as the benefits.
On the Near Horizon
To date, AI’s practical impact has primarily been felt in natural language and image processing, which are difficult for traditional computers to accomplish. Narrow AI has brought efficiency to tasks that would otherwise slow down processes or bore humans. When done correctly, this has an overall positive impact on business.
More importantly, AI will bring two major advances in the near future that will save millions of lives: self-driving cars and AI-driven medicine.
Every year, more than 1 million people die in auto accidents worldwide, the vast majority of which are due to human error, including intoxication. Self-driving cars could cut the mortality rate of driving by a factor of 10, from more than a million deaths a year to roughly 100,000. We've seen companies like Uber struggle to get road-ready self-driving cars off the ground despite billions invested, but even the most pessimistic projections put self-driving cars on the road for private use within the next 20 years.
The potential upside for AI-driven medicine is even more incredible. AI’s proven utility in medical triage aside, it will play a revolutionary role in the pharmaceutical industry. Currently, the cost to bring a drug to the U.S. market is well over a billion dollars. This forces drug companies to prioritize mass-market drugs and so-called drugs of desperation, which consumers will likely pay for by any means necessary. AI can predict which drugs are likely to be effective against a particular disease at much lower costs and risks to companies, and it could reduce the need for testing on humans and animals. Furthermore, AI-led DNA analysis might usher in an era of personalized drug treatments.
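As a toy illustration of how this kind of in-silico screening works, the sketch below trains a classifier to rank candidate compounds by predicted activity before any lab work begins. Everything here is an assumption for illustration: the features, labels, and data are synthetic stand-ins, not a real pharmaceutical pipeline.

```python
# Toy sketch of in-silico drug screening: rank candidate compounds by
# predicted activity so only the most promising reach the lab.
# All data below is synthetic; real pipelines use measured assay
# results and chemical fingerprints, not random numbers.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))            # stand-in molecular features
w = rng.normal(size=64)
y = (X @ w + rng.normal(size=1000)) > 0    # synthetic active/inactive labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Score a large library of untested candidates and surface the top few,
# shrinking the pool that ever reaches animal or human testing.
candidates = rng.normal(size=(10_000, 64))
scores = model.predict_proba(candidates)[:, 1]
print("most promising candidates:", np.argsort(scores)[::-1][:10])
```

The economics follow from the funnel: if a model like this can discard even half of the dead-end compounds before synthesis and testing, the cost and risk per successful drug drop accordingly.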
Resentment and a Crisis of Misinformation
As the technology develops, so will the challenges. The first sticking point will be a growing resentment against AI for taking jobs currently performed by humans, such as driving trucks or handling patient intake. Robots taking jobs from humans is, again, not a new concern. The level of blowback will be determined by society's ability to reallocate resources and adjust to job shifts.
The second challenge is a misinformation ecosystem too sophisticated for humans to judge reliably. As machines learn to create better and more nuanced information, humans will want digital verification that the information is somehow real, or at least not false. But it's almost certain to become a game of cat and mouse: As the verification gets better, so, too, will the fakery.
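To make "digital verification" concrete, here is a minimal sketch of one plausible approach: cryptographically signing media at the point of capture so that later tampering is detectable. The scenario is an assumption on my part (a camera with a built-in private key), and the sketch glosses over the hard parts of key distribution and trust.

```python
# Minimal sketch: sign media bytes at capture time so any later edit
# invalidates the signature. Assumes the `cryptography` package.
# The camera-held key is a hypothetical for illustration.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

camera_key = Ed25519PrivateKey.generate()   # hypothetically burned into a camera
public_key = camera_key.public_key()        # published for verifiers

photo = b"...raw image bytes..."            # stand-in for real media
signature = camera_key.sign(photo)          # shipped alongside the file as metadata

def looks_authentic(media: bytes, sig: bytes) -> bool:
    """Check that the bytes match what the camera originally signed."""
    try:
        public_key.verify(sig, media)
        return True
    except InvalidSignature:
        return False

print(looks_authentic(photo, signature))              # True: untouched
print(looks_authentic(photo + b"edited", signature))  # False: tampered
```

Note the limit, which is exactly the cat-and-mouse dynamic above: a signature proves only which key signed the bytes, not that the content is true. A convincing fake signed with a stolen or complicit key would pass this check.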
I believe there's potential for an AI-driven misinformation crisis in our lifetime. AI can already convincingly manipulate images and video. Actresses' faces have been superimposed onto pornographic photos and videos, and world leaders have been made to appear to say ridiculous or inflammatory things.
Fake news is just the half of it. Individuals will face risks to their reputations (fabricated naked photos sent to co-workers or fake revenge porn posted online), finances (forged bank documents that impact credit), and legal standing (phony audio, video, or other evidence of a crime). This is not a worst-case scenario; this will probably be the norm.
If people don't know what's real or what's fake, personal responsibility could go out the window. Even now, strategic people invoke AI-created media as a way to dodge the court of public opinion. If a person is caught on tape making sexist comments, for example, he can argue that the tape is fake. When the AI is good enough, it will be hard to prove otherwise. At a certain point, the simple existence of advanced technology will be enough to cast doubt on nearly any information.
Beyond the risks to individuals and their families, AI poses global security hazards. AI-created intelligence or media could be used to create a political firestorm, spark riots, or even start World War III.
Long-Term Look: Super AI
By far the biggest threat is posed by general AI. There are questions as to whether human-level AI (or beyond) is even possible. But unless we find evidence that human intelligence is driven by processes that humans simply cannot tap into, it’s only a matter of time before a super AI is developed. However, despite what Ray Kurzweil says, I don’t believe superhuman AI will be created in our lifetime.
A reasonable timeline for AI to achieve human-level or better performance in most tasks is about 250 years. Anything that historically required our intelligence — building machines, solving problems, making important decisions — will be handled more efficiently by machines. Perhaps there will be a few preeminent mathematicians working out the equations of the universe, but the rest of us will have little to offer society.
In this world, most folks will probably live on universal basic income. It’s possible that we would have the freedom to simply learn and enjoy time with ourselves, our friends, and our families. But I think it’s more likely that we become lazy, unmotivated, and irrational people — a shell of society, much like in “Brave New World.” Ongoing long-term studies on UBI might shed some light on how this would affect us.
I agree with the late Stephen Hawking, who believed the birth of AI could be “the worst event in the history of our civilization.” Because we simply don’t know the outcome of creating a super AI, the possibilities demand an abundance of caution for what could be one of the best or worst events in human history.
It remains to be seen whether the benefits of AI and machine learning will outweigh the negatives. Already, AI has opened the door to an era potentially ruled by misinformation. From forged bank statements to world leaders declaring false wars, AI-created media will cast doubt on what we read, see, and hear. And as we dig deeper into the possibilities of the technology, humans will be displaced by machines. These will be massive problems to solve, and the companies that solve them will be worth billions. I don't see any solution other than a technical one.
The most important thing we can do is start the conversation about how to deal with general AI now. We need to understand whether we can build effective safeguards — such as Asimov’s three laws of robotics — that could control any conceivable superintelligence.
I hope I’m wrong, but I’m skeptical.