What is AI, really—and how will we prepare for it?
Some say we should fear it. Others say worship it. But what’s real, what’s hype, and how will it actually impact our lives?
Whether you’re a total novice or simply want to round out your knowledge, this book is your one-stop crash course on AI. James Wang, former hedge fund investor at Bridgewater Associates, start-up entrepreneur, and now deep tech venture capitalist, cuts through the hype with clarity and insight. He explains both AI’s true superpowers and its hard limits.
By the end, you will understand how AI is already transforming our world—and what is still to come.
Computers don’t exist anymore. Undoubtedly, that sentence sounds baffling, especially in a book about AI. But that’s because the profession of “computer” has been so thoroughly wiped out by electronic computers that few even remember it was once a job title held by humans. During WWI and WWII, computers—mostly women, with men out on the war front—were critical for logistics, navigation, ballistics, and more. But by 1952, when the Association for Computing Machinery (ACM) started its now-famous journal, the job title “computer” had essentially disappeared.¹
Liberated from pencil, paper, and human frailty, computation has seen an exponential increase in both capacity and demand. Henry Ford’s Model T, introduced in 1908 and a triumph of industrial prowess, barely had anything electronic in it at all.² In 2022, during the pandemic, Ford lost over $3.1 billion in sales because it couldn’t manufacture enough cars. Why? Ford’s losses stemmed from a shortage of semiconductor chips, the digital brains that control almost everything in a modern car, from engine temperature to the oil change light.³ We have put computational power—now increasingly referred to with the noun “compute”—everywhere.
This hasn’t caused mass unemployment. On the contrary: in 1900, leading up to WWI and WWII, the United States employed barely 4 percent of its population in technical or scientific fields of any kind. By 1950, when the computer profession had been wiped out, that proportion had increased to 8 percent.⁴ By 2021, nearly a quarter of all jobs in the US were explicitly within science, technology, engineering, and mathematics (STEM).⁵ Beyond explicit STEM jobs, nearly everyone on earth has access to computational capacity. The proprietor of your local corner store likely tracks expenses in digital spreadsheets, with capabilities surpassing those of any war planner from the 1930s. Even some of the poorest farmers today, like those I met during my time working with a nonprofit in Ghana, have flip phones with far more computational power than what NASA used to put a man on the moon in 1969.
Our Modern Lives Are Made Possible by Technology
This is literally how all technology has always worked. In the introduction, we described a changed world between 1900 and 1930. The cause, of course, was the tail end of the Second Industrial Revolution, which varied by country but was roughly 1870–1914.⁶ Humankind started the period—outside of Britain, which transitioned somewhat earlier—as a species dedicating most of its population to farming to feed itself, with literal “cottage” industries and small-scale craftspeople.⁷ The period ended with a version of our modern world that was dirtier, soot-coated, and more dangerous but broadly recognizable, featuring cities and factories that we’d be familiar with today. Manufacturing technology was a flywheel, allowing mass production of innovations like the threshing machine, which further increased farm productivity and freed even more farmers to become industrial laborers.⁸
Most of us would not enjoy living in that world. Workers toiled fourteen to sixteen hours a day, six days a week, at backbreaking labor. Child labor was common. Factory life was smoky, sooty, and extraordinarily dangerous. Overcrowded cities grappled with outbreaks of diseases like typhus and smallpox. At the same time, people still flocked to cities. While early industrial conditions horrify us now, people of that era were not abandoning comfortable lives. They were leaving similarly backbreaking labor on farms that tied them to their land, barely earned them enough to survive, and offered meaner conditions than almost any developing country today. City life, and the Industrial Revolution itself, allowed for cheaper basic goods—clothing and household items—and some level of wealth accumulation. It provided the dream of a better life.⁹
Objectively, we’ve seen a remarkable rise in material wealth and quality of life driven primarily by technology. Between 1900 and now, the world has seen global production value skyrocket from $4.8 trillion to $166.7 trillion (in 2021 US dollars), infant mortality go from common (around one in six children in 1900) to rare (less than 3.7 percent even including the poorest countries in 2022), and lifespan double from thirty-two years to seventy-three years.¹⁰ It’s impossible to deny the staggering progress we’ve made. We are the beneficiaries of multiple technological revolutions in the last century and a half. The Industrial Revolution. The Green Revolution. And, finally, the rise of the Information and Digital Age.
I think it’s fair to say that the poorest in our society today enjoy better medicines, more varied foods, faster transit, more entertainment, and more basic comforts than the richest a century ago. The difference? Technology. Will the rise of the AI Age also herald incredible changes? We’ll be covering this in depth in later chapters, but our own technological transition is only just beginning.
Unfortunately, although “what comes next” has consistently been better than “what came before,” that doesn’t mean the transition won’t be painful in the short and medium term.
Luddites or Computers?
Luddite is now defined as “someone who is opposed or resistant to new technologies or technological change.”¹¹ However, the term originates from a real movement in England during the early 1800s. As previously mentioned, Britain was the first country to industrialize, doing so significantly earlier than the rest of the world and experiencing some of these transitional pains before others. The Luddites were skilled artisans in textiles. They protested against the mechanization of their industry, fearing its threat to their livelihoods and craft.¹² Their story did not have a happy ending.
In 1811, workers stormed a factory in Nottingham to destroy textile machines. The government deployed soldiers, who violently suppressed the uprising.¹³ By 1813, Parliament had passed laws making machine breaking punishable by death. Some Luddites were hanged. Perhaps worse, others were banished to Australia. The movement was dead.¹⁴
The Luddites were right, though. They knew their industry. And they knew what the technology did. As such, they knew it would degrade their wages, making it possible for their work to be done by unskilled workers and forcing them to take lower-paid work in factories.¹⁵
Technology may be a boon to overall welfare, but it isn’t always good for everyone in the moment. Society was better off with more access to cheaper clothing and other textile goods. But the hard-earned skills of the Luddites were suddenly made far less valuable. Why did this happen? As mentioned at the beginning of this chapter, electronic computers wiped out human computers. However, STEM as a whole flourished. Anyone with an analytical mind undoubtedly found more opportunities with the mechanization of calculation. What made the difference?
For the Luddites, technology down-skilled their trade. Mechanization allowed far more people with less training to do what they could—at least close enough for most consumers. For computers, technology elevated their trade. Suddenly, the same numerically minded individuals themselves could do far higher-level tasks and spend less time on rote calculation. The difference isn’t manual labor versus “knowledge work” either. Many manual professions—such as farming—have benefited enormously from technological progress. Workers ended up being able to do far more and improve their material well-being.
Farm work went from nonstop, backbreaking labor that often left families barely able to feed themselves in the early 1900s to an average forty-hour-a-week job with wages above fifteen dollars per hour in 2023.¹⁶ It’s not glamorous, comfortable, or particularly lucrative, but it’s a significant departure from the profession’s beginnings. Farms also generate multiple orders of magnitude more produce.¹⁷ Since it’s beyond the scope of this book to explore every instance of technological transition, readers will have to take my word for it that most such transitions look more like the computers’ story than the Luddites’. That’s scant comfort to the Luddites, though, and certainly no solace to those fearing job losses to AI. What if their field is closer to textile mechanization than computerization?
AI Is Already Causing Disruption
According to a survey by the Society of Authors, a quarter of illustrators and over a third of translators have already lost work due to AI. Furthermore, two-thirds of fiction writers and over half of nonfiction writers think AI will negatively impact their future earnings.¹⁸ These worries have been backed up empirically. In a study using data from a leading online jobs platform, researchers found that substitutable skills like writing and translation saw a 20 to 50 percent relative decrease in demand since the introduction of ChatGPT. Interestingly, however, certain categories that utilized AI but weren’t fully substituted, like “creative writing and explainer videos” and “lead generation,” saw significant increases.¹⁹ The story is obviously more complicated than simply writers becoming the new Luddites.
It makes sense that the emergence of Large Language Models (LLMs) would cause consternation, particularly among those operating in the same domain. Just as a mechanical loom produces cloth, LLMs produce words. Will all writers and translators lose their jobs? And what about programmers, whose profession also consists of composing words? At this point in the book, we don’t yet have the tools to fully explore the question, but we will by the end of it.
LLMs and AI in general have too much fear, hype, and mysticism attached to them. We’ll need far more information about the fundamental strengths and limits of AI, which will be the subject of the coming chapters. Only then can we answer the question—not only for writers, translators, and programmers but for everyone else too.
AI Is Already Spreading Rapidly
While writers worry about AI, other parts of the economy have enthusiastically adopted ChatGPT and other LLMs for everyday work tasks. According to a 2024 report by AIPRM, 75 percent of surveyed workers were using AI in the workplace.²⁰ Awareness is high as well: a Pew Research Center survey found that 68 percent of US workers have at least some familiarity with AI tools. However, while many are optimistic, a significant portion (52 percent) are at least somewhat apprehensive about AI’s impact on their jobs.²¹
Of course, this hesitation is completely understandable. We have already seen massive shifts in our economy, and this only adds to them. A study by the Brookings Institution found that more than 30 percent of all workers could see at least half of their occupation’s tasks affected by generative AI. This impact is expected to accelerate trends such as automation in various industries.²² While the outcome could mean workers transition to higher-paying roles, it’s not inconceivable that certain jobs will simply disappear and those workers will drop out of the workforce. Regardless of what happens, disruption is happening now.
Our Imagination Often Fails Us
So what comes next after disruption? Historically, we have been terrible at figuring this out.
In the late 1700s, the English economist Thomas Malthus observed a troubling trend. He saw England’s population explode alongside the rise in economic well-being. It seemed clear what would happen. He argued that population would inevitably grow exponentially, while food supplies would never be able to keep up. Human hopes for “social happiness” were in vain. Essentially, humanity was doomed to outstrip its resources, and only war and famine could bring the population back in line. Famously, he was extraordinarily wrong: improvements in agricultural yield (from technology) not only kept up with but easily outstripped population growth.²³ In November 2022, we exceeded eight billion people on Earth—a mind-bogglingly large number that Malthus could never have imagined. Maybe we can hope for “social happiness” after all.²⁴
This pessimism in the face of disruptive change occurs in every era, though. In his 1995 classic The End of Work, Jeremy Rifkin calls the idea that jobs destroyed by automation in the agricultural and manufacturing sectors would be replaced by ones in science, engineering, and professional services a “pipe dream.”²⁵ Of course, that is precisely what happened. STEM jobs are not neatly classified in US Bureau of Labor Statistics data, but “Professional and Business Services” skyrocketed (along with many other sectors) as manufacturing jobs declined precipitously after 2000.²⁶ This was really less about automation and much more about Chinese manufacturing making its debut on the world stage, but regardless, we saw Rifkin’s prophecy come true, with many jobs destroyed—just by other humans abroad (which is a tougher issue).²⁷ Nevertheless, the economy rapidly shifted and created many jobs in other sectors.
For better or worse, our complete inability to foresee the changes in our economies and societies in the face of disruption is a constant in human history. Malthus could see population growth, but he could not imagine how technology could increase material goods massively beyond anything he knew. Rifkin could see automation making jobs obsolete but could not see how the economy would retool itself.
Even just two years ago, I spoke with a successful businesswoman who channeled Malthus, chiding young people that they had to consume less, have fewer children, and live poorer lives because climate change was inevitable. She could see climate change but couldn’t see that its trajectory had already been materially shifted by improvements in solar power.²⁸ Technology causes disruption but often rides to the rescue as well.
Of course, today, we see the same worries in the previously cited surveys of authors, translators, and corporate employees. What comes next? Who would have predicted that employment websites would have hundreds of job listings in California alone for “self-driving car engineer”?²⁹ While it’s silly to try to predict exactly how our job landscape will shift, it’s inevitable that it will.
But even if the economy shifts more rapidly than people expect, we’ll still feel the pain of transition. As the study of the online jobs platform showed, in the short term some people win and some people lose. Sometimes people are computers. And sometimes people are Luddites.
In the case of the literal Luddites, as we discussed, vastly greater swaths of society enjoyed the benefits of greater material prosperity. Society wasn’t going to return to the past. And given the collective increase in welfare, we wouldn’t want it to. But the Luddites themselves were worse off, at least temporarily (not counting those executed or sent to Australia, whose fates were unfortunately more permanent).
It’s impossible to know whether you are a computer or a Luddite without understanding the technology itself. For the Luddites, how the mechanical loom worked—and the threat it posed—was clear. AI is more general purpose—and far harder to pin down. That’s why we need a clear-eyed look at what AI actually is—and isn’t.
How do you regard Artificial Intelligence (AI), the ubiquitous technology that’s quietly revolutionizing the world? Friend or foe? A threat to your job and/or lifestyle? A cold, anti-human know-it-all to be wary of, or a trustworthy assistant?
Truth be told, AI can be both beneficial and harmful. It all depends on how judiciously we use it after arming ourselves with the basics. And it’s here to stay whether we like it or not! That being the case, it helps enormously to gain perspective on AI’s beneficial roles and its pitfalls. James Wang’s What You Need to Know About AI: A Primer on Being Human in an Artificially Intelligent World aims to address just that need: to explain AI well enough that you know the monster (or perhaps the void!) inside it, and thus how to leverage it safely and beneficially while steering clear of its harmful defects and deficiencies.
The author is a renowned AI expert with over a decade of experience in the field. He’s a passionate researcher, activist, and advocate of AI. His background lends great credibility to this book.
This book comprises three parts. The first, “Understanding AI,” introduces the subject and brings you up to speed on the basics. The second, “AI in the Wild,” deals with the issues and challenges of rolling AI out for business and commercial use. The third and final part, “Putting It All Together,” builds on the first two to address common questions, popular misconceptions, and fears about AI.
I thoroughly enjoyed reading this book because it’s informative and engaging and because of its rich, up-to-date technical content. It provides an adequate background on AI: its historical timeline; the types of AI (symbolic, deep learning, and hybrids); a spotlight on Large Language Models (LLMs) and their awe-inspiring capabilities and frailties; viable, profitable applications of AI; current challenges in the field (massive power consumption, cooling requirements, hardware that supports parallel processing, etc.); the nuances of launching AI for commercial use versus traditional software; and so on. It also busts the myths and overhype that sci-fi and Hollywood have regrettably sown in people’s minds.
The author’s writing style reveals a tech researcher at heart—vigorous, eager, and thorough. You occasionally encounter challenging questions throughout the book that make the reading lively and interesting. However, though the book is a technical gem, I would be remiss if I failed to highlight the following downsides: (1) readability is a few notches below average—suitable changes in page formatting and styling, which I recommend, would help; (2) though they are few, the book isn’t free of English errors; and (3) despite what the title may suggest, this isn’t a book for everyone. At a minimum, you’ll need a college-level STEM background to understand and appreciate it. If your background is non-STEM, I strongly recommend you steer clear.
To check whether or not this book is for you, take a simple test: how thoroughly do you understand these words, terms, or phrases: perceptron, convolution, XOR, Turing test, matrix multiplication, and simulacrum? If you score three out of six or higher, it’s for you. Otherwise, it isn’t.
As for the recommended audience, I think the choice is abundantly clear: AI professionals and students, and, for those outside the AI community, readers with at least a college-level STEM background.