THE SOCIAL IMPACT OF AI
In 2011, I watched on TV as the IBM Watson DeepQA computer played a challenge match against two previous Jeopardy! champions. Nerd that I am, I rooted for the machine. I was thrilled to see the computer answer correctly over and over again.
Even though this was a fantastic achievement, I strongly suspected that there was no real intelligence in the underlying IBM technology. I was able to confirm my suspicion when IBM published a series of detailed journal articles1 explaining that the technology was mostly a massive set of very clever tricks with no human-level intelligence.
IBM then decided to capitalize on the credibility produced by the Jeopardy! victory and began to rebrand itself around its artificial intelligence (AI) capabilities. IBM marketing claimed that “Watson can understand all forms of data, interact naturally with people, and learn and reason, at scale.”2
The ads made it sound as though technology had progressed to the point of being able to think and reason like people. While I appreciated the engineering achievements Watson demonstrated on Jeopardy!, even Watson’s creators at IBM knew these systems could not think or reason in any real sense.
Since then, AI has blasted its way into the public consciousness and our everyday lives. It is powering advances in medicine, weather prediction, factory automation, and self-driving cars. Even golf club manufacturers report that AI is now designing their clubs. Every day, people interact with AI. Google Translate helps us understand foreign-language webpages and talk to Uber drivers in foreign countries. Vendors have built speech recognition into many apps. We use personal assistants like Siri and Alexa daily to help us complete simple tasks. Face recognition apps automatically label our photos. And AI systems are beating expert game players at complex games like Go and Texas Hold ’Em. Factory robots are moving beyond repetitive motions and starting to stock shelves.
Each of these fantastic AI systems enhances the perception that computers can think and reason like people. Technology vendors reinforce this perception with marketing statements that give the impression their systems have human-level cognitive capabilities. For example, Microsoft and Alibaba announced AI systems that could read as well as people can. However, these systems had minimal skills and did not even understand what they were reading.
AI systems perform many tasks that seem to require intelligence. The rapid progress in AI has caused many to wonder where it will lead. Science fiction writers have pondered this question for decades. Some have imagined a future in which benevolent, beneficial robots are at our service. Everyone would like to have an automated housekeeper like Rosie the Robot from the popular 1960s cartoon TV series The Jetsons. We all love C-3PO from the Star Wars universe, who can converse in “over six million forms of communication,” and his self-aware, trashcan-shaped partner, R2-D2, who can reprogram enemy computer systems. And we were in awe of the capabilities of the sentient android Data in Star Trek: The Next Generation, who was third in command of the starship (although he famously lacked emotion and so had trouble understanding human behavior).
Others have portrayed AI characters as neither good nor evil but with human-like frailties and have explored the consequences of human–robot interactions. In Blade Runner, for example, Rachael the replicant did not know she was not human until she failed a test. Spike Jonze’s Her explores the consequences of a human falling in love with a disembodied humanoid virtual assistant. In Elysium, Matt Damon’s character must report to an android parole officer. In the TV series Humans and Westworld, humanoid robots gain consciousness and have emotions that cause them to rebel against their involuntary servitude.
Many futurists have foreseen evil robots and killer computers—AI systems that develop free will and turn against us. In the 1927 film Metropolis, a human named Maria is kidnapped and replaced by a robot who looks, talks, and acts like her and then proceeds to unleash chaos in the city. In the 1968 book-turned-movie 2001: A Space Odyssey, the spaceship has a sentient computer, HAL, that runs the spacecraft and has a human-like personality. It converses with the astronauts about a wide variety of topics. Concerned that HAL may have made an error, the astronauts agree to turn the computer off. However, HAL reads their lips, and, in an act of self-preservation, turns off the life-support systems of the other crew members. In the Terminator movie franchise, which first appeared in movie theaters in 1984, an AI defense system perceives all humans as a security threat and creates fearsome robots with one mission: eradicate humanity.
Speculation about the potential dangers of AI is not limited to the realm of science fiction. Many highly visible technologists have predicted that AI systems will become smarter and smarter and will eventually take over the world. Tesla founder Elon Musk says that AI is humanity’s “biggest existential threat”3 and that it poses a “fundamental risk to the existence of civilization.”4 The late renowned physicist Stephen Hawking said, “It could spell the end of the human race.” Philosopher Nick Bostrom, who is the founding director of the Future of Humanity Institute, argues that AI poses the greatest threat humanity has ever encountered—greater than nuclear weapons.5
This kind of fear-inducing hype overstates the capabilities of AI. AI systems are never going to become intelligent enough to exterminate us or turn us into pets. That said, AI causes many real and critical social issues that will not be solved until we set aside this existential fear.
FACT AND FICTION
The AI systems that these technologists and science fiction authors worry about are all examples of artificial general intelligence (AGI). AGI systems share with humans the ability to reason; to process visual, auditory, and other input; and to use that input to adapt to their environments in a wide variety of settings. These systems are as knowledgeable and communicative as humans about a wide range of human events and topics.6 They’re also complete fiction.
Today’s AI systems are miracles of modern engineering. Each of today’s AI systems performs a single task that previously required human intelligence. If we compare these systems with the AGI systems of science fiction lore and with human beings, there are two striking differences: First, each of today’s AI systems can perform only one narrowly defined task.7 A system that learns to name the people in photographs cannot do anything else. It cannot distinguish between a dog and an elephant. It cannot answer questions, retrieve information, or have conversations. Second, today’s AI systems have little or no commonsense8 knowledge of the world and therefore cannot reason based on that knowledge. For example, a facial recognition system can identify people’s names but knows nothing about those particular people or about people in general. It does not know that people use eyes to see and ears to hear. It does not know that people eat food, sleep at night, and work at jobs. It cannot commit crimes or fall in love. Today’s AI systems are all narrow AI systems, a term coined in 2005 by futurist Ray Kurzweil to describe just those differences: machines that can perform only one specific task. Although the performance of narrow AI systems can make them seem intelligent, they are not.
In contrast, humans and fictional AGI systems can perform large numbers of dissimilar tasks. We not only recognize faces, but we also read the paper, cook dinner, tie our shoes, discuss current events, and perform many, many other tasks. We also reason based on our commonsense knowledge of the world. We apply common sense, learned experience, and contextual knowledge to a wide variety of tasks. For example, we use our knowledge of gravity when we take a glass out of the cupboard. We know that if we do not grasp it tightly enough, it will fall. This is not conscious knowledge derived from a definition of gravity or a description in a mathematical equation; it’s unconscious knowledge derived from our lived experience of how the world works. And we use that kind of knowledge to perform dozens of other tasks every day.
The big question is whether today’s narrow AI systems will ever evolve into AGI systems that can use commonsense reasoning to perform many different tasks. As I will explain, the answer is no. We do not have to worry about AGI systems taking over the world, and we probably never will.
TOASTERS DON’T HAVE GHOSTS
The title of Arthur Koestler’s 1967 book The Ghost in the Machine9 alludes to the long-standing philosophical debate about whether humans have a “ghost”—a mind, a consciousness, that cannot be seen or measured—in addition to their physical machines. Koestler believed that people are just their physical machines, that there is no separate mind, and that we will someday be able to explain, for example, emotions like love as the interaction of neurons. I cannot tell you the answer to the philosophical question, and I have no idea if we will ever be able to explain love. However, I can confidently declare my belief that we will never develop computer systems or robots with human-level, commonsense reasoning capabilities. Said another way, there will never be a ghost in the machine.
Even though we do not need to worry about AGI systems dominating humanity, as narrow AI technology becomes more and more widely deployed, it brings with it many new social issues. The race to perfect self-driving vehicles is well underway, but there are safety issues that we must address before we deploy them on our city streets and highways. Autonomous weapons and other narrow AI advances threaten public safety. We may see a significant impact of narrow AI technology on employment. Facial recognition technology is being used for surveillance and threatens our privacy. There are significant issues around fairness and discrimination against minorities. Furthermore, deepfakes, fake news, and hackers are influencing real-world elections. We will need to address all these social issues.
One of the keys to finding solutions to AI-related issues is to make sure we do not overcomplicate them by conflating narrow AI and AGI. For example, if AGI capabilities were imminent, we would need laws that govern human interaction with intelligent robots. Do robots have rights? Can they go to jail? Can they be held financially responsible for an accident? We would also need laws to ensure that the manufacturing process does not create robots that can take over the world.
Fortunately, narrow AI systems will only ever be able to make autonomous decisions regarding specific tasks, so we do not need general AGI laws. We do not have to worry about the legal rights of robots. They can and should have no more legal standing than toasters. Instead, we can focus on laws for specific uses of narrow AI, such as autonomous vehicles.
THE FUTURE IS ALWAYS A MIXED BAG
We see warnings about AI in the popular press every single day. In December 2019 alone, The New York Times featured headlines with grave cautions: “Artificial Intelligence Is Too Important to Leave to Google and Facebook Alone,” “Many Facial-Recognition Systems Are Biased, Says U.S. Study,” and “A.I. Is Making It Easier to Kill (You). Here’s How.” A recent study showed that 60 percent of the people in the UK fear AI.10
Historically, new technology has brought great benefits to society. However, the positive impacts are often accompanied by some negatives. The invention of the automobile brought us greater mobility, but also introduced car accidents. The invention of the internet brought us connectivity beyond any level imagined previously, while it also led to hackers and spam and facilitated child exploitation.
Although even narrow AI may lead to many societal changes, such as changes in how we work, it’s no different from any other major technological advance. The steam engine and mass production led Western society away from an agrarian lifestyle and into factories, which brought with them increased pollution and wage disparity but ultimately gave rise to the middle class. Advances in transportation expanded the world from local communities into huge geographic regions of travel and trade. The internet expanded that world even further, changing how we do just about everything.
AI is just one more step forward. As with each of those other advances, AI can be dangerous when used for nefarious purposes or without proper regulation, but it’s a tool. Just like any kind of progress, although AI may involve some difficult societal and personal challenges in the short term, its overall effect on the world and on our lives will be largely positive.