The promise of AI is captivating: faster decisions, sharper insights, and unmatched precision. But have we paused to consider what we might lose in our race to embrace the machine? Infailible examines how AI is reshaping customer relationships and human interaction in business.
The book explores the intersection of consumer behaviors and human intelligence, pairing them with AI capabilities to uncover strengths, limitations, and opportunities. It challenges the "AI Ideology," a belief system that elevates AI beyond its capabilities, tempting businesses to prioritize automation over empathy and efficiency over meaningful connection. Yet customers are not bots themselves; they are people with emotions, stories, and aspirations that extend beyond datasets.
Through real-world examples from brands like Nike, Starbucks, Taco Bell, and Netflix, Infailible illustrates AI's paradox: its potential versus its output. This is not a book about technology; it is about trust, humanity, and crafting experiences that balance innovation with genuine connection.
I should have written this book sooner.
Sitting at my desk surrounded by Post-it notes full of new ideas, I'm reflecting on a roundtable webinar I participated in about the convergence of Artificial Intelligence (AI) and Customer Experience (CX). By my estimate, around 99.9% of the 200 people who tuned in believe AI is the best tool ever for developing richer customer experiences. I'm confident I was the remaining 0.1%, the one person who believes AI's influence on customers leaves them feeling disconnected from your company.
During recent months, I have become increasingly surprised by how people interpret AI or, more specifically, understand and believe in AI. When I say "believe in AI," I don't mean simple trust in its accuracy or abilities. I'm referring to a deeper ideological belief in the transformative power of AI: a system of thought that elevates AI as a key force in redefining human potential, ethics, and progress, much like belief systems that ascribe infallibility or purpose to a higher power. This infallible belief exists despite the technical reality of what AI is actually capable of doing. The belief in what AI can achieve far exceeds its current technical capabilities, creating an approximate seven-year gap between perception and reality. There is too much Star Trek, not enough Columbo.
What's even more amusing is scrolling through countless articles praising the wonders of AI, only to realize they were written by AI. It's like a chatbot patting itself on the back for being "revolutionary." The irony practically writes itself. On a positive note, the hashtag #degenerativeAI has grown since the fall of 2024. Degenerative AI highlights the risks of systems trained on repetitive, low-quality, or biased data, leading to a gradual decline in creativity, accuracy, and meaningful insights over time. This trend reflects a skeptical shift in the public's perception of AI, a recognition that the relentless push to use it across industries and situations isn't without consequences. While people are just beginning to question the integrity and direction of some AI implementations, they're still captivated by the possibilities, unable or unwilling to step back and take a broader look at the ramifications.
Putting the AI hype aside, the parallels to customer relationships are hard to ignore. I'm increasingly shocked by the number of organizations that co-opt the label "customer first" but fail to practice that mindset. Instead of embracing how AI technology benefits their customers, they focus on its magical potential for the benefit of their teams. As I described in my 2023 book, Customer Transformation, to be obsessed with your customers means that technology always comes last.
I had my first encounter with AI in 1999. As most of my followers know, I began my career at a movie theater and in my spare time took a second job in technology building websites and software tools.
At the theater, I had the privilege of entering auditoriums before the lights dimmed to welcome guests. I'd share news about upcoming movies and featured marketing campaigns, then engage the audience with trivia questions, handing out movie posters or T-shirts for correct answers.
During this time, most theaters in the United States had on-screen advertisements prior to the film, similar to what you see today. However, in the 1990s they were 35mm slide-based, with canisters of images that looped automatically, making a distinctive sound as each slide dropped into place before being projected on the screen.
One day, as I walked into an auditorium to welcome the audience, a couple stopped me to chat and ask questions. They were Friday evening regulars (a trend that has died in the digital age), and I had gotten to know them over several months. Carl pointed to the screen and asked, "Do they ever update these slides?" I laughed and explained that a few are replaced each month. [As a side note for web enthusiasts: the image carousels used on websites today get their name from the vintage Kodak 35mm slide carousel patented in 1965. This is the same year Joseph Weizenbaum created ELIZA, the world's first chatbot, which pioneered natural language processing by mimicking a Rogerian psychotherapist.]
Finding an answer to Carl's question was more complicated than a "yes" or "no." The basic formula for these on-screen advertisement packages included local business advertisements, theater concessions promotions, upcoming movie announcements, and trivia questions, with the answers separated by additional advertisements. An average carousel held about 100 slides, and each slide appeared on the screen for about 15 seconds before automatically advancing to the next. On average, the entire presentation ran approximately 25 minutes.
For those regular patrons, arriving at the theater 30 minutes before showtime meant you'd see the same slides. And if you visited multiple times a month, it was likely that you'd memorized them. I often heard people shouting out the answers to trivia questions in the theater, not because they knew the answer immediately, but because they had already seen those slides on previous visits or even earlier the same evening. Other faults plagued the carousel system: it wasn't uncommon to find a slide flipped upside down or backward due to human error and the manual process for updating the presentation.
Reflecting on the conversation with Carl, I asked myself an important question: could I build a digital system that prevented these typical errors and updated content automatically? Better yet, could I make the system intelligent, so that no trivia question was asked more than once and the content adjusted based on who was in the audience? Could I also make it interactive?
After a few months of development, we premiered Digireel, the first-ever digital entertainment and advertising system, at the Cinemapolis movie theater in Anaheim Hills, California (it was convenient that I worked there). It leveraged digital projectors, an on-premise server, a connected database of advertisements, hundreds of trivia questions, and other forms of interactive entertainment and games. We introduced animations, sound, video, and an intelligent, interactive mechanism to personalize the content based on the movie being shown in a specific theater, its MPA rating, the time of day, and the audience demographic. Honestly, this system was a bit ahead of its time. However, after I obtained a patent, the product was acquired by National CineMedia, Inc., and became the foundation for the digital entertainment you see prior to movies today, albeit with more ads, less entertainment, and no intelligence.
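For readers who like to see the mechanics, here is a minimal, hypothetical sketch (in Python, with invented names) of the kind of rule-based selection logic described above. This is not the original Digireel code, just one way to express "filter slides by the film's rating and the time of day, and never repeat a trivia question":

```python
from dataclasses import dataclass, field
import random

RATING_ORDER = ["G", "PG", "PG-13", "R"]

@dataclass
class Slide:
    slide_id: str
    kind: str                    # "ad", "promo", "announcement", or "trivia"
    content_rating: str = "G"    # most mature audience the slide is written for
    dayparts: set = field(default_factory=lambda: {"matinee", "evening"})

def eligible(slide: Slide, film_rating: str, daypart: str) -> bool:
    # A slide qualifies if its content is no more mature than the film's rating
    # and it is flagged for the current time of day.
    return (RATING_ORDER.index(slide.content_rating) <= RATING_ORDER.index(film_rating)
            and daypart in slide.dayparts)

def build_playlist(slides, film_rating, daypart, asked_trivia, length=100):
    """Assemble a pre-show playlist without repeating a trivia question."""
    pool = [s for s in slides if eligible(s, film_rating, daypart)]
    random.shuffle(pool)
    playlist = []
    for slide in pool:
        if len(playlist) >= length:
            break
        if slide.kind == "trivia":
            if slide.slide_id in asked_trivia:
                continue                      # this audience may already have seen it
            asked_trivia.add(slide.slide_id)  # remember it for later screenings
        playlist.append(slide)
    return playlist

# Example: an evening screening of a PG-13 film.
catalog = [
    Slide("local-ad-1", "ad", "G"),
    Slide("trivia-42", "trivia", "PG"),
    Slide("concessions", "promo", "G"),
]
seen_trivia = set()
print([s.slide_id for s in build_playlist(catalog, "PG-13", "evening", seen_trivia)])
```

Even in this toy form, the point of the original question stands: the "intelligence" is mostly careful bookkeeping in service of the audience, not magic.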
Over the years, as mobile technology, data tools, and AI grew in accessibility and capability, I've frequently thought about rebuilding Digireel. Despite the drastic decline in movie theater attendance, which left that investment unrealized, I still imagine a powerful, upgraded experience with today's AI. And that's the moral of this story: despite the power and availability of AI, there is no new customer problem to solve and no consumer demand for an upgraded, AI-powered advertising platform at movie theaters. To build one would be a focus on the technology, not the customer.
The Bandwagon Effect
This phenomenon, not limited to the political arenas where it first took root, now infiltrates corporate boardrooms with potent force. Despite the marketing benefits of this cognitive bias, a crucial misstep is occurring in today's digital strategy, where AI has become a buzzword synonymous with innovation and forward thinking. Executives, captivated by the widespread availability of AI, are driving its integration not for its genuine benefits but because it's trendy, or perhaps, more cynically, as a means to enable complacency.
Suddenly, I witnessed hundreds of people become "AI Experts" and watched sales pitches flood my inbox with "one-of-a-kind" AI tools that could solve all my problems. This gold rush toward AI solutions has left the market saturated with questionable products and diluted the genuine understanding and potential applications of AI technology. Everywhere you look, there are bold claims about AI transforming businesses overnight, yet few address the nuanced challenges or the strategic integration necessary for these tools to deliver on those promises. This hasty jump onto the AI bandwagon has fostered a landscape where hype often overshadows substance, leading to disillusionment and skepticism among consumers who are promised revolution but experience little real change.
This gap between hype and reality is reflected in the diagram above. The largest circle on the left represents the overwhelming number of people talking about AI, driven by media buzz and speculative claims. Within that, a smaller circle shows those actively using AI, leveraging tools for tasks or integrating them into workflows. An even smaller circle depicts the people building AI: the researchers, developers, and engineers creating these systems. On the right, zooming in, a tiny dot within the builders represents those who truly understand AI. It's worth noting that some individuals who genuinely understand AI may not be actively building, using, or discussing it, for reasons explored in this book. These experts possess a profound understanding of AI's technical, ethical, and societal dimensions, enabling them to distinguish between realistic possibilities and inflated promises. The diagram highlights how widespread discussions about AI often outpace genuine comprehension, leading to misconceptions and misplaced expectations.
In a moment that feels more like satire than an actual advertisement, Salesforce released "Rain," an ad launched in December 2024 featuring Matthew McConaughey. It offers a hilariously bleak glimpse into the AI-driven future we supposedly need. McConaughey sits alone at an outdoor café in the pouring rain, lamenting how an AI-based app could have saved him from this damp inconvenience by automatically moving his reservation indoors. Across the street, his pal Woody Harrelson sits dry under an awning, beckoning him over. The implication? Without AI, we're hopelessly doomed to soggy dinners and missed opportunities.
But let's pause for a reality check. Are we now so lazy that we need AI to tell us it's raining? Are we so gullible as to believe that restaurants wouldn't simply pull their tables inside? Is this ad reinforcing a narrative that human judgment is inherently flawed and requires technological intervention to function? What does the ad say about the priorities of tech companies? This ad epitomizes the confection of "needs" that don't exist, manufactured to sell a vision of technology solving problems consumers don't have. It's a glaring illustration of how hype often overtakes substance, perpetuating an AI ideology that inflates the role of technology in trivial aspects of life.
As Isaac Asimov once observed, "The saddest aspect of life right now is that science gathers knowledge faster than society gathers wisdom." This quote rings as true now as it did in 1988, reflecting the gap between technological advancement and our ability to integrate it wisely. Salesforce isn't just trying to sell AI; the company is offering a commentary on how easily we can be seduced by slogans and sales mantras, strategically aligned with a fear of missing out, ultimately closing our eyes to common sense.
Lost Art Forms
In November 2024, I came across a LinkedIn announcement by Andrew Chen, General Partner at a16z, heralding an investment in Promise AI, a studio claiming to be powered by "the world's best GenAI artists and storytellers." As someone with deep roots in media, entertainment, and gaming, my immediate reaction was one of disbelief. I couldn't help but comment, "This has to be a joke, right? It's not April Fools' Day, is it?"
What struck me wasn't just the hyperbole, but also the disconnection from reality. The announcement praised AI-driven storytelling as revolutionary while overlooking the profound value of genuine human creativity, the legal issues with copyrights and trademarks, and a complete disregard for how consumers want to be entertained. The comments section exploded with similar sentiments, condemning the notion that generative AI could replace or rival authentic talent. One user aptly wrote, "AI artists are not real. There is little to no talent behind the models borrowed, stolen, or modified." Another, with biting honesty, remarked, "'World's best GenAI artists and storytellers' is quite a statement. Also, I vomited while reading this post."
The hype surrounding such investments illustrates a troubling trend not exclusive to entertainment and gaming: the pursuit of technological spectacle at the expense of quality. This reckless prioritization of novelty over substance explains the layoffs, uninspired content, and failing connections with audiences that plague the entertainment world today.
Coca-Cola's AI-generated holiday ads in 2024 illustrate the pitfalls of this approach. The campaign, intended to modernize the company's nostalgic "Holidays Are Coming" ads, used AI tools like Leonardo, Luma, and Kling to create visuals with distorted proportions, glossy faces, and peculiar motion. Social media users quickly mocked these uncanny elements, with one calling the technology "a powerful tool" for producing flawed content. The backlash underscores the growing divide between cost-cutting executives embracing AI and creative professionals who argue it undermines artistry and quality.
Critics highlight deeper harms. AI-generated ads can mislead consumers with artificial imagery, damage brand trust, and exploit stolen creative work. Megan Cruz of The Broad Perspective Pod wrote: "This is always what [AI] was going to be used for btw. It's not some great equalizer. It's a way for already massively wealthy execs to add a few more mil to their annual bonuses by cutting creative teams entirely & having a machine vomit up the most boring slop imaginable instead."
Rob Wrubel, founder and managing partner at Silverside AI, which worked with Coke on the new campaign, said, "People believe that AI means you automate everything: someone just puts one prompt in, and a video pops out. It is the farthest from the truth." But, he added, "The intensity of the creative process that used to happen over weeks and months can now happen every two hours."
While some studies suggest consumers may overlook AI flaws, such as Coca-Cola's nostalgic appeal masking errors, the broader implications remain troubling. AI's reliance on remixing preexisting content rather than creating genuine innovations erodes the integrity of brand storytelling. Brands risk alienating audiences and devaluing their identity when they sacrifice quality for efficiency. This growing reliance on AI raises critical questions about authenticity, the future of creative labor, and the ethics of AI in advertising.
This "AI for the sake of AI" belief manifests as a costly distraction. Across multiple industries, executives are asking, "How do we take advantage of AI?" when the real question should be, "Do our customers want to engage with us through AI?" The distinction between these queries is not just semantic but foundational to building beneficial rather than merely fashionable strategies.
A compelling study in June 2024 (Cicek et al., 2024) offers a sobering insight into this issue. Their research, titled "Adverse impacts of revealing the presence of Artificial Intelligence in product and service descriptions on purchase intentions: the mediating role of emotional trust and the moderating role of perceived risk," sheds light on a profound consequence of misusing AI in consumer communication. The study presented 1,000 individuals with various product descriptions, some with AI mentions and others without. The findings were stark: products associated with AI were often met with decreased emotional trust, leading to lowered purchase intentions. This effect was exacerbated in high-risk scenarios, such as purchasing expensive electronics or medical devices, where the mention of AI features generated additional apprehension.
The repercussions of this bandwagon mentality are multifaceted. Companies risk not only financial waste from unwarranted investments in AI but also, perhaps more critically, the erosion of customer confidence. The allure of AI might bring initial attention, but if its application does not genuinely enhance the customer experience or offer a tangible improvement in product or service delivery, disappointment is inevitable. The chase after AI as a trendy tool rather than a strategic asset could lead companies into a cycle of unfulfilled promises and detached customer relationships.
It is wise for businesses to step back and evaluate their approach to AI. It isn't enough to plaster "AI-powered!" across marketing campaigns without substantiating how it tangibly benefits the customer. The focus should remain steadfast on the product's actual value and how AI can enhance that value meaningfully, not just as an "artificial" selling point.
During a CNN interview in August 2024, Gursoy, one of the study's authors, elaborated on its findings: "We looked at vacuum cleaners, TVs, consumer services, health services, and in every single case, the intention to buy or use the product or service was significantly lower whenever we mentioned AI in the product description." This consistent trend across various industries underscores the depth of consumer skepticism toward AI-augmented products. Such views are further validated by recent studies, including a Pew Research survey of 11,000 people revealing a sharp decline in public trust toward AI: 52% of respondents in 2023 expressed more concern than excitement about AI technologies, a notable rise from 37% in the same research in 2021.
Is It Hype?
In June 2024, the research and advisory firm Gartner reported that the industry surrounding Generative AI (GenAI) has reached the peak of inflated expectations and is now descending into the trough of disillusionment. This hype cycle describes a common pattern in technological adoption, where initial enthusiasm gives way to a more sobering reality.
The "peak of inflated expectations" occurs when product usage surges amid significant hype, yet tangible evidence that the technology can meet user needs remains sparse. Following this, the "trough of disillusionment" sets in as the initial excitement fades, early adopters begin to encounter performance issues, and the return on investment (ROI) appears lower than anticipated.
My perspective on this cycle, particularly with AI, suggests a divergence from typical Gartner cycles. AI technology, including its generative forms, has been evolving over decades, with foundational developments emerging as early as the 1960s. This long history challenges the notion of a sudden emergence typical of Gartner's "innovation trigger."
Throughout 2024, I've heard countless defenses and rhetoric urging us to "get used to AI; it's here to stay." By comparison, Virtual Reality (VR) has been around just as long, but you don't hear the same demand for submission, mainly because its market is still too small. Yet GenAI usage has increased dramatically in less than two years, suggesting either a very slow disillusionment stage or an upcoming catastrophic crash akin to the dot-com bubble burst of the early 2000s. During that period, the NASDAQ composite stock market index soared by 800% between 1995 and its 2000 peak, only to plummet 78% by October 2002, erasing all of its gains.
The parallel drawn here with the dot-com era highlights the potential volatility and the discrepancy between the long-standing development of AI technologies and the recent explosive hype. This mismatch between historical development and contemporary expectations suggests we might be on the brink of a significant recalibration in how GenAI is perceived and utilized in the broader tech landscape.
Widder and Hicks (2024) highlight that the deflation of the generative AI hype bubble reveals a recurring issue: technologies marketed as inevitable often leave behind harmful legacies. They argue,
"Even as the generative AI hype bubble slowly deflates, its harmful effects will last: carbon can't be put back in the ground, workers continue to need to fend off AI's disciplining effects, and the poisonous effect on our information commons will be hard to undo."
Businesses rushing to adopt AI risk creating dependencies on flawed systems that fail to deliver on their promises. The authors warn that this hype-driven entrenchment damages customer trust, erodes the reliability of information ecosystems, and diverts resources from sustainable progress. They emphasize that, as AI's narrative shifts from inevitability to scrutiny, businesses must approach adoption cautiously, focusing on long-term utility rather than short-lived trends.
Both the hype cycle and the bandwagon effect of rushing into AI can have severe repercussions for businesses. Beyond the financial implications of misdirected investments, there's a substantial risk of eroding customer trust and loyalty. History shows us that those chasing hype lose, while those who see through it build progress. AI will be no different.
The Make-Believe Technology
The British novelist Arthur C. Clarke famously said, "Any sufficiently advanced technology is indistinguishable from magic." Today, AI has become the epitome of this illusion, or what I often call the make-believe technology.
In the 1970s, the Pet Rock became a cultural phenomenon. For $3.95, you could buy a rock in a cardboard box with breathing holes and a humorous training manual. It was absurd, it was viral, and it was ultimately empty. The Pet Rock didn't solve a problem or add lasting value; it was a gimmick packaged as a product.
AI's rapid rise in 2024 mirrors historical moments of overhyped promises, from the Pet Rock to California's gold rush in the 1850s, showing how those selling tools, like shovels then or AI infrastructure now, often reap more benefits than the users chasing quick success.
Let me be clear: AI itself is not the problem. The issue lies in the infallible ideology of AI, including rampant, misleading claims about its capabilities, harm caused by false promises, and reliance on stolen creative work to build these systems.
In 2023, the belief in AI was steeped in doom, with references to Terminator's Skynet and fears of AI spiraling out of control to threaten humanity. This narrative dominated headlines. But in 2024, the doom perspective was conveniently dismissed, in part due to an essay by Marc Andreessen, who declared, "The era of Artificial Intelligence is here, and boy are people freaking out. Fortunately, I am here to bring the good news: AI will not destroy the world, and in fact may save it."
Andreessen's essay repeatedly couches AI as a utopian solution, ignoring risks and challenges while emphasizing limitless potential. This optimistic narrative reframed AI from a looming threat to a potential savior, perfectly aligning with Big Tech's agenda of rapid adoption and minimal regulation. By downplaying real risks and amplifying techno-optimism, the industry ensured that profits soared alongside AI models, creating a modern version of snake oil that many organizations still buy.
A Clearer Perspective
Did you know there are seven main "types" of AI?
Reactive Machine AI: AI that reacts to external stimuli in real-time but lacks the ability to store information or learn from past experiences.
Limited Memory AI: AI that retains data temporarily to learn, adapt, and improve its performance for future tasks.
Narrow AI: AI programmed to perform specific tasks without the ability to independently learn beyond its initial programming.
Theory of Mind AI: AI designed to recognize and respond to human emotions while also performing functions similar to limited memory AI.
Artificial General Intelligence (AGI): AI designed to learn, reason, and perform tasks across various domains at a level comparable to human capabilities.
Self-Aware AI: AI that possesses self-awareness, understands human emotions, and demonstrates human-level intelligence, representing the highest stage of AI development.
Artificial Superintelligence (ASI): AI capable of exceeding human intelligence and outperforming humans in knowledge, problem-solving, and capabilities.
We can find between 20 and 30 sub-forms of AI categorized under each of these main types. Currently, the most well-known Generative AI and large language models fall under Narrow AI because they are designed to perform specific tasks, such as creating text, images, music, or other forms of content based on patterns in the data they have been trained on. Strip away the jargon, and their core function resembles a glorified search engine: prompt, response, repeat. Yet the public fascination persists, driven by the desire to see AI as something more profound than it truly is. The reality starkly contrasts with the Star Trek fantasies projected onto it. Instead of boldly venturing into uncharted territory, AI rehashes and regurgitates data. This book does not aim to marvel at the supposed "magic" of AI. Like the dystopian reality portrayed in Wall-E, it will explore how AI quietly erodes the depth of human intelligences, corrupting the qualities that make us uniquely capable of meaningful connection.
As we navigate the future, we see that aligning AI strategies with a nuanced understanding of customer behavior creates richer interactions and deeper engagement. But technology alone cannot replace the subtleties of human insight. So we will also examine various types of intelligence (artificial, human, emotional, behavioral, and aspirational), unpacking how each plays a unique role in shaping customer experiences.
To ground this topic, I'll explore the groundbreaking work of Howard Gardner, a psychologist and professor of education at Harvard University, who introduced his theory of multiple intelligences in 1983. This influential framework redefined our understanding of intelligence by challenging the traditional focus on IQ as a single, standardized measure. Instead, Gardner proposed that intelligence encompasses a set of distinct abilities, each representing a unique way of perceiving, understanding, and interacting with the world. His theory highlights diverse forms of intelligence, from linguistic and logical-mathematical to spatial and interpersonal, providing a lens through which to contrast human capabilities with the functions of artificial intelligence.
Throughout this exploration, we'll consider how understanding customers' broader life goals and values can guide more mindful uses of AI. When businesses incorporate an understanding of these deeper motivations, they create strategies that not only fulfill immediate needs but also resonate with customers' long-term aspirations, enhancing engagement and loyalty. Balancing artificial intelligence with a genuine appreciation for human intelligence requires thoughtful strategy. I'll examine the potential risks of relying too heavily on automation, risks that can disconnect businesses from the people they serve, ultimately diminishing trust and authenticity.
Artificial intelligence is impressive, scalable, tireless, and razor-focused. But for all its clever algorithms and data-crunching power, AI lacks the one element that human intelligence brings effortlessly to the table: understanding. AI can analyze patterns, predict preferences, and automate interactions, yet it can't truly feel what it's like to be a customer in need, an employee struggling, or a person seeking meaning. Human intelligence isn't just about calculation; it's about context, empathy, and intuition, the things that can't be measured in code. When we rely on AI alone, we might get precision, but we miss connection. And in business, connection is everything.
I've contemplated what I have to say about AI and its role in our lives, and whether what is in my head differs from the hundreds of books already available on the same topic. To be clear: I'm not against AI; it can most definitely have a supportive place in our customer strategies. However, I will always favor focusing on the customer first and the technology last. Whether this is simply another vehicle to share fun stories, a genuine analysis of artificial hype, or a cautionary tale about where society is headed, in hindsight, I should have written this book sooner.
Oh boy, the promise of Artificial Intelligence (AI) can be captivating to the unsuspecting person, but it can also spell trouble in the end. I wanted to read this book because it is the first book I have seen that challenges the AI ideology. While this might not be a popular thought, it should never be discounted. In Infailible: The Artificial Intelligence Ideology Reshaping Customer Behavior, Chris Hood discusses the connection between human intelligence, consumer behavior, and AI.
I am happy that someone has decided to take the bull by the horns and discuss the other side of AI. So many people think that AI is a panacea without considering the other side. I love technology and adore AI's capabilities, but I agree with the author that we should consider its strengths and limitations. Infailible: The Artificial Intelligence Ideology Reshaping Customer Behavior is one book I think every business leader should read before jumping headlong into AI.
Chris Hood reminds us not to elevate AI beyond its capabilities. The author clearly states that prioritizing automation over empathy and efficiency over meaningful connection will be costly mistakes. Infailible: The Artificial Intelligence Ideology Reshaping Customer Behavior is essential for those who must be reminded that customers are more than bots. This book does not focus heavily on abstract principles and technical processes. Instead, it takes a holistic approach that addresses client management's technological and human sides.
There are thirteen chapters in this three-hundred-and-twelve-page book. Do not be alarmed, however, because the chapters are short. Chris Hood is clear and precise in his writing, but there is also a tinge of dry humor and sarcasm. I smiled in several sections because he said exactly what I was thinking. Somehow, it seems like humans can do nothing without AI. While we are thankful that AI can save us a lot of time, especially for repetitive chores, the author rightfully warns us not to jump on the bandwagon without testing the wheels.
I like the examples provided, although some could have been more globally diverse to appeal to an international audience. Nevertheless, Infailible: The Artificial Intelligence Ideology Reshaping Customer Behavior is ideal for both novice and seasoned business owners. This book comes at a time when many people need to be reined in when it comes to idolizing AI. It is an invaluable addition to the arsenal of resources on client relationship management, but with a twist.