You Can’t Trust the AI Hype


My friend had recently gained a renewed appreciation for managing people instead of software. “Why build tech when we have interns?” he asked as we caught up. He had made an exit from a successful startup and was now applying his AI skills in a large non-tech industry. “I had an intern manually watch a bunch of footage today and label stuff, and called it AI. If our stakeholders really like it, then we can build the tech later.” It was an offhand comment, but it got me thinking: just how much of tech proper was doing the same thing?

You might interpret this kind of move as mere technological laziness in an established industry, but you would be wrong. My friend faithfully applied a strategy used by countless AI-branded companies around the world.

Google was able to market its Duplex service as a personal AI tool even while it required human assistance in roughly 40 percent of its calls a year after launch: 25 percent of the calls began with a human operator, and 15 percent of the rest required human intervention at some point. Having 60 percent of calls handled entirely by AI is still impressive, but more accurate branding would be something closer to a hybrid outsourcing service.

Across the pond, one French founder quipped that “Madagascar is the leader in French artificial intelligence,” since the AI-branded startups in his country tend to outsource their hidden human labor there. Other examples are legion.

The pattern reveals something quite strange about not just AI, but the broader tech industry too: being “in tech” or “in AI” describes branding and a particular social sphere as much as anything to do with how the work is done. You operate in the world of VCs and startup CEOs, use a geometric sans-serif logo, and are on Twitter.

AI has only intensified this trait—in fact, you might not have to create anything related to deep learning at all. Companies can present themselves as “AI companies” while using no machine learning—up to 40 percent of AI companies in Europe alone do so—and still reap the benefits of higher funding and excitement. The supermarkets and HVAC companies trying to posture as “technology leaders” by dropping AI strategies into investor reports are on even more dubious ground.

The main reason this works is hype. AI is the future, the future is now, and looking too closely at the details could mean losing the bag. What makes AI interesting is that, unlike cryptocurrency or the early internet, it all somehow feels expected. AI is unmatched in how long-standing and ready-made the narrative of its own inevitability is, and in the sense most people have that they understand its essence even if they can’t quite describe the technical guts.

Most consequentially, fully-developed AI is supposed to ultimately replace humans, both in specific domains of work and perhaps even as Earth’s apex life form—everybody “knows” that, somehow. From the perspective of this expectation, a tool that requires extensive human support is therefore early-stage at best and a failure at worst. In this setting, AI becomes something like a genre and a narrative de-linked from the capabilities of specific tools.

That narrative reflects a deeply held belief within tech itself: that the most significant problems are engineering problems, and more specifically technology problems. This attitude is sometimes criticized as “solutionism.” It is useful when it sees material potential where others may not, as with the early internet.

But it can also lead to destructive overfitting. Elon Musk famously overestimated the capabilities of the automated robotics systems Tesla attempted to base its factory on, which couldn’t adequately handle many judgment-laden details like the fine-tuned corrections to screws, bolts, and fasteners that human workers made naturally. Over-automation created disruptions in welding, battery pack assembly, and final assembly of vehicles. The episode left Tesla with a hefty redesign cost and left Musk with a more human-centric view of how to make cars.

To the degree that any problem is an engineering or technology problem, the tech industry gains an immediate advantage in solving it. If it’s a software or computer science problem, that advantage often becomes a monopoly. The incentive of the tech industry, then, is to convince people that all significant problems are engineering problems, narrowly defined as those categories of engineering at which the tech industry excels. By acting as an epistemic cartel in this way, the industry pumps up the value of its work and guarantees its future capital flows.

This ideological hype is the stuff from which bubbles are made. But an AI bubble in deep learning won’t kill off the underlying technology, just as the dot-com bubble didn’t kill off the internet. What matters more is that people are making decisions about technologies and their potential in an epistemic environment dominated by a hype narrative. That has inflated the share prices of AI-branded companies, but the scrambling of market signals also incurs huge opportunity costs at the level of both companies and the larger economy. Musk’s bet that 80 percent of Twitter could be fired without doing any real harm to the app itself seems to have paid off. The extremely high “tech” valuation of Twitter overfunded it to the point that it could get away with hiring five times too many people.

This colors how people understand the capabilities, function, and purpose of AI technologies. From the level of ordinary workers all the way to prominent founders, people are operating on ideology—either naively or cynically—instead of on the rigorous assessment of tools. When most beliefs and even market signals reflect ideological hype-mongering, you can no longer trust them as guides to the value of these tools or of their long-run technical potential. What they signal is successful large-scale conditioning into the tech industry’s preferred narratives about AI.

Any accurate assessment of the potential of deep learning tools requires a rigorous exclusion of hype narratives that is nearly impossible to achieve on the individual level. Investor bubbles may come and go, and while they attract retrospective criticism, they are a passing problem. The large-scale misallocation of resources, on the other hand, incurs compounding opportunity costs on a massive scale.

At its worst, such ideological hype causes a systematic over-investment in new technologies that do not actually yield much overall progress or development across society. Bubble-driven investment incurs large economic costs. Pursued in this way, AI may actually be an economic and industrial net negative. Misallocations can continue for generations, free-riding off the functional parts of the structure while the overall reality is one of decay and dissolution.

The Tech Industry’s High-Yield Ideology

A tool is what it does. Its nature is not determined by what people imagine it can do, think it should do, or even what its inventors originally intended it to do. Alfred Nobel first invented dynamite for use in mining, but its actual nature as an explosive made it useful in war. When its inherent capabilities are applied within some process or system, the tool gains a function within that system.

When the technology is new and the set of possible functions is large, figuring out how to implement such tools usefully is highly dependent on the worldview of the planner. What makes AI unique is the degree to which tech ideology and even science fiction inform that worldview across large swathes of society. Decades of storytelling about superior machine minds have primed society with a simple narrative: computers are smarter than you, lack your weaknesses, and will replace you in all tasks that matter as soon as the computing power involved grows large enough.

It’s the ultimate in tech ideological cartel thinking. In the solutionist apocalypse, frail and fallible mankind is the most fundamental problem and AI will eventually solve it once we hit artificial general intelligence. Whether this is utopian or extinctionist is a matter of taste. That the underlying technology has changed several times—Japan’s Fifth Generation computing project did not pan out in the 1980s—has never led to wavering in the basic story.

The intensity of tech ideology gets higher the closer you get to the tech industry itself, and specifically the closer you get to Silicon Valley. IBM is also a tech company, but an older New York tech company rather than a newer Silicon Valley one. That distance matters and the resulting tone is very different. In its public reports, IBM tends toward empiricism, dryly giving the numbers on who is using what.

Other distinct AI ecosystems also exist. Each tends to see AI as an issue through the lens of its own concerns and methods, and in a way that implies an expansion of its own power. In Washington D.C., AI tends to be a security question befitting the needs of the security and government complex. In China, AI ends up being part of the Communist Party’s attempt at constructing a population monitoring apparatus.

Each ecosystem is colored by the lens of those interests that make it up, and they all interact with each other to varying degrees. But the Silicon Valley ecosystem is unquestionably upstream of the rest; it is the hub of much of the advanced technical research and the place where the ideologies and philosophies around AI are at their most explicit, radical, and consciously promoted. It is where you will have long discussions about just how we might cross the line into agentic artificial general intelligence.

Day-to-day ideological promotion tends to happen at the hands of VC funds, startup CEOs, and the like. Recently, a16z cofounder Marc Andreessen and Y Combinator president Garry Tan have both pushed “e/acc” (effective accelerationism) to hype young entrepreneurs into entering the AI world and VC term sheets, playing on a defanged iteration of the “accelerationist” ideas of philosopher Nick Land. A few years ago, a16z benefited from the boosting of “web3,” a meme with a similar hyping function for cryptocurrency.

Players like this receive the direct payoffs of tech ideology, both in AI and other fields. They directly benefit by shifting attention to their preferred problems and their preferred solutions for them. VC funds hype young entrepreneurs into creating startups that they become good at evaluating for potential success. If an industry is successful, they end up with stakes and influence over its leading firms. Those who run and own the startups themselves, meanwhile, benefit from later-stage investors operating on the same hype and from convincing clients of the necessity of adopting their product. A startup that taps effectively into society-wide hype is a startup that can rapidly increase its valuation and then pass the bag to less sophisticated investors. The tech ideology gets skin in the game.

AI’s Hidden Human Hands

Silicon Valley is also a hub for more explicit and developed philosophies of AI. Even top AI pioneers interact with these ideas, ranging from the Kurzweilian Singularity to Yudkowsky’s call to halt all AI development to the point of air strikes on rogue data centers. It’s where people are confident that minds are fundamentally mechanical and thus engineerable, that an eventual hard trade-off between AI and human dominance exists, and that all this may manifest in AGI take-over within our lifetime. Even the contrarians who advocate a Dune-inspired “Butlerian jihad” against AI accept these basic claims.

When you think of AI, you likely don’t mean facial recognition programs or traffic monitoring software, even though these descend from recent AI research. Self-driving cars and LLMs are probably closer to the mark, at least for now, because they seem to operate with more autonomy and self-direction than we are used to.

But the acid test for “real” AI is independence from humans, up to the point of replacing them. Those creating deep learning tools are under pressure to show that their creations reach that standard. What happens in practice, though, is that companies simply hide the presence of human work and correction beneath the guise of a seamless experience. The human labor needed for everything from labeling data to editing AI outputs is massive and growing.

The San Francisco-based company Scale AI made its name, its $7.3 billion valuation, and its client list ranging from OpenAI to the U.S. military by providing precisely that human support. While the company’s branding is that of a tech company, it founded an in-house agency called Remotasks to manage its core operations, the nature of which is something more like “outsourcing services.” Via Remotasks, Scale AI hires about 240,000 workers for data labeling services, many of them in regions like Africa and Asia. The company maintains a strict brand separation between the two. While the company’s position is that this is for client privacy, it’s hard to ignore the reality that “outsourcing services” is a far less sexy investor hook than AI and is far easier for competitors—of which Scale AI has many—to disrupt.

At the scale of global investment, as well as among firms thinking in terms of massive economic transitions, the disparity between the ideology and the tool fuels the hype cycle. Employers think the tools are more advanced than they are, employees think their jobs are more at risk than they are, and investors think their gains are just around the corner. People ask which jobs they can eliminate, but not which ones they have to create.

In one recent case, scriptwriters have been worried about producers tricking them into editing scripts produced by ChatGPT instead of hiring them to write originals. However, conflicts over automation are not new in the film industry. The reality is that the elimination and merger of film jobs due to technology were occurring long before AI came on the scene. I asked a couple of friends who worked in Montreal’s huge film and music scenes what people were expecting. Their responses were surprisingly unfazed. “Honestly, the middle has already emptied out,” said Sarah, who does post-production work on feature films and series content. “The storyboard timelines that were 3-6 months are now a month or less, and so much has been combined that the same person might even be directing work.”

But what about artists’ worries about their images and music being repurposed as training data for their own replacements? “Repurposing content is so normal already and not all of it is legal Getty Images-type stuff,” said my other friend, a music producer. “The internet already gave people the ability to trade and share materials easily. AI isn’t fundamentally changing that, but it does change the compression effect. If you’re using prompts on a generative AI tool like Midjourney to create art, you often lose some control over the final product, at least right now.” Were they worried about tools getting good enough to actually outperform them? “I mean, what are they getting trained on?” he answered. “The mass production might be a problem in the end. It’s good to democratize a creative tool, but that doesn’t mean that the high-quality output will increase. What happens when half of images online are AI-produced and that’s what Midjourney is learning from? It’s probably not as divergent as the purely human work it’s learning on now. I don’t think it’s going to make for better art.”

Both reported that despite the growing use of AI tools, many of their colleagues didn’t have a strong understanding of the underlying technical basics of generative AI. Absent this, people are left to operate on the assumptions of ideological hype. The problem is that this hype has already been propagated across industries and mass culture, while the work of figuring out how best to use the tools is usually slow.

This is not unique to AI; industries tend to move in lockstep. Trends, best practices, and “industry standards” tend to dominate both new and established sectors and both large and small firms. Think of how every company you use started deciding that it needed an app; that happened despite the average user of the average app abandoning it after a couple of uses. CEOs are as much prey to the hype as anyone. A large organization that actually bucks the ideological alignments represented by “industry standards” is rare, and will usually come under immense pressure to conform.

You Can’t Trust the Signals

Decades ago, the economist Friedrich Hayek described markets as operating on price signals that reflected the actions people took on the basis of their private knowledge and beliefs. Hype changes the nature of these signals. Like in a standard Hayekian market, people in AI or broader tech markets are looking at price signals and the behavior of market participants as sources of information. But when a sufficiently large proportion of investors are operating on ideological tropes, those signals reflect collective hype instead of an aggregation of private bets based on diverse approximations of reality.

Nor is everyone necessarily fooled by the hype; for savvy investors, the bubble itself is the opportunity. George Soros once commented that “When I see a bubble forming, I rush in to buy, adding fuel to the fire.” The results are clear: Nvidia—whose chips are fundamental to the current wave of AI projects—added $700 billion in market capitalization in 2023 alone, traded at almost 20 times its expected revenues, and rocketed to the top of JP Morgan’s Hedge Fund VIP Basket.

AI is far from the only tech market operating on ideological hype. Cryptocurrency and vanity projects like the metaverse relied on the same pattern. Now that rising interest rates have stemmed the flow of easy money, the price of ideological misjudgment is increasing. Tech was disproportionately reliant on the low-interest-rate economy. The downturn hit tech fast but took several months to start impacting the rest of the economy.

The real story here is that these downturns didn’t affect much real value at all, either beyond the tech world or even within it. If Twitter is a good heuristic for just how much capital the systematic overvaluation of tech has soaked up, the misallocation constitutes a structural disorder in the global economy. These are not market signals that can be trusted. The reach of this hype is not equally distributed. The intensity gets higher as one gets closer to the epistemic cartel of the tech industry proper. In contrast, legacy sectors that run on personal relationships—like finance or law—value their core employees, and this affects how they use the technologies involved.

An investment manager I spoke to told me that experimentation with products like ChatGPT is still largely left to “vanguardists” like himself in his area of finance. In the broader finance industry, IBM estimates that about 21 percent of professionals have adopted AI tools. His main concern was freeing up time for the tasks that actually mattered. “I use ChatGPT if I’m transcribing conversations, pulling out insights from a paper, stuff like that. For drudgery. It’s like having a personal assistant.”

Did he sense fear about job losses? “At the middle and senior level, finance runs on weak relationships. You know lots of people somewhat well, and more importantly, you can figure out the unique deals that no one else sees,” he explained.

“At the lower rungs, it’s different. But there’s been gradual upskilling already for the last hundred years. It used to be that someone paid a secretary to handle their busy work, but now an assistant will handle multiple executives using Teams, Outlook, and other apps. I’d expect the same with lower-level analysts: maybe it’s a quarter of the number of people hired, employees make $120,000 instead of $60,000, and their job has become using machine learning to assist mid-to-senior level people.” The real opportunity, he thought, might be using AI as an excuse to remove low performers. “Large firms are full of dead wood. Machine learning might be the ideal rationale to do reductions without blaming any one person. Hey, it’s just ‘technological changes!’”

The legal field yielded similar results. I spoke with two lawyers, Zach and Shannon, both of whom work at mid-size offices. The AI tools they used were targeted at lawyers and mainly for editing documents and sorting specialized contracts that need public disclosure. “The real work for lawyers isn’t in sitting and drafting anyways,” Zach told me. “It’s in quarterbacking deals and doing syndication on the fly. Diligence work is basically a way to pay your dues, but no one really wants to do entry-level work like that.” While Zach thought that support staff might get slimmed down, Shannon hadn’t even seen that yet: “The research tasks are changing, but it’s not at the point yet where you can do it without an actual person.”

The insularity of law and finance probably made them overlook the potential of AI technologies in some ways. Deep learning tools might cause certain kinds of job loss, but these companies tended to evaluate them by how they increased the value of their best-performing people, whose real contributions were often their personal judgment and ability to cultivate relationships with clients and stakeholders. Distance from the social influence of the tech industry also gave them resistance to tech-centric ideologies about the nature of AI.

Developing independent assumptions and models about AI is a tall order, especially for a company justifying its decisions to boards and shareholders operating on the same hype as everyone else. Rival ecosystems to Silicon Valley, like Wall Street or Washington D.C., have their own operating ideologies through which they interpret the world in ways that serve their own industries. The task at hand for those trying to think about the most important problems in society—and the real potential or implications of deep learning—is to create an independent epistemic environment with both correct information and sane models of the world. This can’t be done in isolation, but it doesn’t take too many people either, at least at the start. The fundamentals of good analysis are good first principles and rigorous empirical assessment, and these usually require collaboration in small groups.

Presenting AI technologies as a dynamic of deep learning tools replacing humans violates both these fundamentals. Seeing it as a reorganization of certain kinds of labor and production, where human capital is reallocated to more useful roles, is more faithful to them. Nothing deep learning has yielded has even a theoretical pathway to altering the most fundamental operating norms of society or of institutions. Basic patterns like the growth and decay of institutions, principal-agent conflicts, or the tendency for bureaucracies to prioritize their own expansion, will all continue. However significant the changes that AI tools do make possible, they will operate downstream of these more fundamental patterns in society. Good models of AI technologies, or any other category of technology, are therefore downstream of good models of society overall.

With no immediate rival to the tech ideology’s dominant narrative around AI, we will likely see AI-branded technologies go through more than one bubble in the years to come. Their bursts will probably not undermine the propagation of the underlying technologies over the long run. But the real value of these technologies for material or social development is, on net, far lower than the hype suggests. In all likelihood, we will stop thinking of many deep learning tools through the lens of AI as their real functional niches become established and unremarkable.

But until the AI ideology faces displacement by something closer to reality, the systematic overvaluation of the tech industry as the vanguard of material progress is set to continue.

Ash Milton is Contributing Editor at Palladium Magazine. You can follow him at @miltonwrites.