The Threat of Automation Is a Self-Fulfilling Prophecy

The Tesla Model 3, an affordable and stylish electric car, was one of the most anticipated product launches in automotive history. Just a week after the car’s announcement in March 2016, future Tesla owners had placed 325,000 reservations. Internally, Tesla set production targets of 100,000 cars in 2017 and 400,000 in 2018. In reality, it took until October 2018 for the company to reach 100,000 deliveries globally. A relative of mine, who placed a reservation in Canada back in 2016, was told by Tesla to expect his car by the end of 2019.

The production difficulties of the Model 3 are well known, and they form part of Elon Musk’s reputation as a visionary who sets overly ambitious goals that nonetheless inch us into the future. Beyond the insanity of the targets themselves, however, came poor management decisions that stemmed from Musk’s overestimation of automation. It quickly became clear that much of the low productivity Tesla was experiencing stemmed from “over-automation”: the company spent far more than the cost of human workers to fill its factory with robotics that underperformed them. Musk later tweeted that “humans are underrated.”

Failed attempts at automation are commonplace, especially in the automotive industry, but they rarely stop investments in automation technologies or the scare stories about their future impact. Musk originally wanted to produce Teslas faster and cheaper than other car manufacturers by applying more aggressive automation to every step of the process. This approach actively undermined Tesla’s production goals, and the company faced stoppages as it was forced to re-adapt to a larger-than-anticipated human workforce. Rather than ushering in the future of manufacturing, the result echoed GM’s failed attempt at automation in the 1980s. Tesla’s failures may have left the company with a larger and less productive workforce, at least while it adapts, than it would have had if it had not bothered with automation in the first place.

Beyond corporate risks, bad implementations of automation have jeopardized lives, as evidenced by the two crashes of the Boeing 737 MAX 8: one thirteen minutes after takeoff from Jakarta, Indonesia on October 29, 2018, and the other six minutes after takeoff from Addis Ababa, Ethiopia on March 10, 2019. Design changes to increase fuel efficiency altered the shape of the plane’s engine housing, which caused the plane to angle upwards in certain conditions. To correct for this, Boeing introduced a new automated system, the Maneuvering Characteristics Augmentation System (MCAS), without describing it in the operations manual. During the Ethiopian crash, the system took control without the pilots knowing what was happening or how to turn it off. A total of 346 people died in the crashes before the plane was grounded. The tragedies show how even more restricted forms of automation can go disastrously wrong when they fail to consider the human element.

The automation adopted by Tesla and Boeing appears low-tech compared to the kinds of automation promised by narratives of artificial intelligence and technological unemployment. Musk himself has been a big proponent of these narratives, having prominently raised the alarm about the possible existential threat of a future general AI so powerful that it surpasses the collective economic, political, and military potential of humanity. In practice, the technology just isn’t there. From self-driving car crashes to failed workplace algorithms, many AI tools fail to perform simple tasks humans excel at, let alone surpass us in every way.

Faith in the power of AI nonetheless persists. Every failure is branded a short-term problem that will eventually be overcome. The prophecy of coming automation, as a result, hasn’t taken a serious hit in the cultural consciousness. This in turn makes it self-fulfilling: companies implement automation on the grounds that all the major companies are automating. Even where the power of automation goes unrealized, companies remain convinced that it will materialize at some point in the future. Managers are not encouraged to change their investment decisions in response to failure.

Part of this stems from the mystical marketing of AI and its automation potential. Everything is branded as AI, to the point that the term has become meaningless. The current paradigm of AI research is built around deep learning, and within that broader research domain most deployable algorithms rely on supervised learning, a technique that learns from labeled data to model complex statistical correlations and generate predictions. This approach, while extremely powerful in domains such as image and language processing, is nonetheless narrow compared to the entire suite of complex tasks humans can accomplish, and somewhat fragile even in its strong areas. Fancier algorithms, such as the reinforcement learning approach that powered DeepMind’s AlphaGo and its successors, have so far shown limited commercial potential. Nothing in existing deep learning research even begins to approach an intelligence of the scale Musk frets about, or one that could completely replace human workers on complex tasks, and given the limitations of deep learning, it probably never will. This conflation of what AI “may one day do” with the much more mundane “what software can do today” creates a powerful narrative around automation that accepts no refutation.
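
How mundane supervised learning actually is can be sketched in a few lines. Below is a minimal illustration in Python, using scikit-learn and invented toy data rather than any production system: the model fits statistical correlations between labeled examples, then predicts a label for a new input, and does nothing more.

```python
# Minimal sketch of supervised learning, with invented toy data.
from sklearn.linear_model import LogisticRegression

# Labeled training data: feature vectors paired with known labels.
X_train = [[5.1, 3.5], [4.9, 3.0], [6.2, 2.9], [6.7, 3.1]]
y_train = ["small", "small", "large", "large"]

model = LogisticRegression()
model.fit(X_train, y_train)  # learn correlations between features and labels

# Predict a label for an unseen input; "small" is likely, given the data.
print(model.predict([[5.0, 3.4]]))
```

Everything the model “knows” is contained in the labeled examples; ask it anything outside that mapping and it has nothing to say. That is the narrowness at issue.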

The Narrative Around Automation

The threat of automation has been present since the industrial age. David Ricardo worried about the labor-saving power of machines harming workers, while Karl Marx saw this trend as one of the contradictions of capitalism that would inevitably lead to its collapse. The Luddite movement arose in response to machines that made the skilled trades more accessible to unskilled workers, undermining the power of guilds by commodifying labor. Throughout the 20th century, there were anxieties around “technological unemployment,” a term coined by John Maynard Keynes to describe the condition of workers whose skills would be entirely substituted by the new machines born out of the increasing speed of innovation. Despite this, the number of jobs available to humans continued to increase, even as old ones disappeared.

As machines made certain tasks more efficient, increasing overall production, demand shifted to the complementary tasks that could not be automated. As technology has advanced, more and more work has become automatable, to a degree previously undreamed of, and yet the pattern of labor demand shifting to complementary tasks that require human skill and adaptability has held up.

But some argue that advances in artificial intelligence can now, or soon will, automate tasks beyond this constraint. This promise of AI has been around since the field’s inception in the 1950s. AI pioneer Marvin Minsky declared in 1961 that “[w]ithin our lifetime machines may surpass us in general intelligence.” His colleagues made similar statements. Early failures in AI research may have cautioned researchers about method, but never ambition. A cycle of hype and disappointment has characterized AI research from the start. And yet, in public discourse, the final breakthrough in artificial general intelligence that will at last displace human work has always been just around the corner. Deep learning, the paradigm that has led the latest resuscitation of AI from the grave, and whose leading thinkers received a Turing Award this year, is now touted as that breakthrough.

Few AI researchers would argue that deep learning is all that is needed for general intelligence, given the problems these algorithms have with abstract reasoning and identifying causation, but there is nonetheless a belief that a significant amount of labor can be displaced by what already exists. Economists have picked up on this promise and tried to predict exactly what the impact will be. In 2013, an Oxford Martin study predicted that 47% of U.S. jobs would be susceptible to automation from technologies like artificial intelligence. The same methodology found that 57% of jobs across the OECD were susceptible.

The approach used to produce these numbers is sorely wanting. In identifying tasks that AI can feasibly perform and then generalizing to the jobs that can be automated, the study falls into the same oversimplification that companies like Tesla are guilty of. The complex ways that tasks interact over a person’s workday are lost. In disaggregating the tasks, what is really valuable is obscured, as the toy calculation below illustrates.
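
Consider a deliberately naive sketch of the task-based move: not the study’s actual methodology, and with task names and scores invented for illustration, but the same basic procedure of scoring tasks individually and rolling the scores up into a job-level number.

```python
# Deliberately naive toy model of task-based automation estimates.
# NOT the Oxford Martin methodology; tasks and scores are invented.
job_tasks = {
    "count cash":       0.95,  # assumed probability a machine can do the task
    "update records":   0.90,
    "advise customers": 0.20,
    "resolve disputes": 0.15,
}

# Naive aggregation: treat the job as nothing but the sum of its parts.
naive_score = sum(job_tasks.values()) / len(job_tasks)
print(f"share of the job deemed automatable: {naive_score:.0%}")  # 55%
```

The headline number says half the job can vanish, but it says nothing about how the remaining tasks interact or absorb the freed-up time, which is exactly what the history of bank telling shows.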

A practical example of automation done right is bank telling. Bank tellers historically spent much of their day counting cash and updating bank books, a job that required speedy numerical skills. With the introduction of the automated teller machine (ATM) and other tools like money counters and banking software, one might have expected these jobs to disappear. Instead, bank telling grew rapidly as an occupation, its tasks shifting away from accounting and toward customer service, thanks to a dynamic in which ATMs vastly increased productivity while reducing the space required to operate a bank branch. The value of customer service to the bank was always there, but technical limitations had tied up tellers’ time in less efficient work. Rather than removing the source of value that tellers provided, automation freed them up to provide even more value. The relationship between the tasks someone spends the most time on and the tasks that produce the most value is not one-to-one.

Despite the flaws of this style of research, it has grabbed major media attention and persisted in the cultural discussion around automation. The anxiety these findings produce has started to find its way into political discourse, most notably with Andrew Yang, a candidate for the 2020 Democratic presidential nomination, who is running on the platform that America needs to prepare for a post-automation future, with his flagship Freedom Dividend proposal of $1,000 a month for every American. While his campaign also touches on issues such as stagnant wages, the oncoming threat of automation forms a central part of Yang’s rhetoric.

The narrative around automation persists, however, because it is so appealing to those who perpetuate it. Regardless of whether a technology can actually replace labor, the point is that that labor is theoretically replaceable. The narrative reinforces the beliefs of consultants, the automation industry, and executives who get sucked in by the AI hype cycle, to the point that they undertake major restructurings based on presumed AI capabilities far exceeding what AI can actually automate. There is also a belief among technologists, investors, journalists, and policy-makers that a large underclass exists without valuable skills, while they themselves are truly productive and vital to society. On this view, a Universal Basic Income must be discussed now: even if jobs are not yet disappearing due to automation and cognitive stratification, they inevitably will, and the unskilled of the world may become fed up with the system if elites fail to buy their silence.

Underneath this narrative, which reinforces a dichotomy between productive and unproductive classes, is a reality in which human labor is completely discounted, and so undervalued in future decision-making. The belief in automation leads people to act in ways that bring it about, even when doing so is irrational. This comes at significant cost.

The Reality Of Automation

Musk may have learned from his mistakes that “humans are underrated,” but most “innovative” enterprises don’t like discussing their reliance on human work, even for nominally automated tasks. Automated workforces are trendy. The little hiccup that they are inefficient is not worth sacrificing the cool of being cutting-edge. In Ghost Work: How to Stop Silicon Valley from Building a New Global Underclass, Mary L. Gray and Siddharth Suri of Microsoft Research expose the reality that a variety of fancy new AI tools rely on hidden human work, identifying the data laborers these companies need as the result of the “paradox of automation’s last mile”: when a system is nearly perfect, it raises demand for the service, which in turn raises demand for humans to correct its mistakes. Google has needed human callers to impersonate its Duplex system on up to a quarter of calls, and Uber needs crowd-sourced labor to keep its automated identification system fast, but admitting this makes them look less automated.

From the perspective of these businesses, their AIs will eventually reach the point of replacing human labor in these domains, so why wait until then to advertise them as such? Hiding the outsourced labor required to make something run today helps attract the investment needed to genuinely automate that labor tomorrow. The narrative that automation will inevitably come has encouraged companies to undervalue existing laborers. Thousands of data laborers work through crowd intelligence platforms such as Amazon’s Mechanical Turk and Microsoft’s Universal Human Relevance Service (UHRS), performing data tasks, such as image labeling, for pennies. Admitting that a product requires this labor makes it look less automated, and since these tasks are so disaggregated among large numbers of workers, it is simply easier to ignore their necessity.

The hype around AI has created an industry dedicated to pushing conventional businesses to pay attention to a technology that wasn’t on their radar only a few years ago. Companies across sectors, from banking to automotive to pharmaceuticals, have now started calling themselves “AI-driven” in the hope of a new lease on life. Even as the hype around deep learning research cools down, the hype over the industrialization of AI is just getting started.

Sometimes businesses don’t bother with AI at all beyond using the rhetoric as a marketing trick to pull in investors and media attention. London-based investment firm MMC Ventures found that of the 2,830 European startups it identified as “AI-focused,” 40% used no machine learning tools whatsoever. Given that AI firms in Europe attracted on average 15% more funding than traditional software or analytics firms, it’s no surprise that companies throw AI rhetoric around as a funding strategy to capitalize on investor ignorance. The effect is even more pronounced in the United States, where venture rounds far exceed those in Europe. The push to rebrand a company as an AI startup sometimes happens reluctantly, and sometimes investors knowingly play the game: they want more AI companies in their portfolios, and the media loves reporting on large AI acquisitions.

The speed at which this is occurring obscures the investments these new technologies require. It’s not enough to hire some talented PhDs and tell them to implement AI in a company’s workflow. Numerous companies have recruited data scientists prematurely, without the necessary complements in project management, paying for work that will never amount to much. Historical automation unfolded over long periods, as workplaces reorganized around new technologies, from electricity to industrial robotics to computers, none of which could simply be dropped into a company as-is and expected to perform well. It will take time to develop the AI management expertise needed to implement these algorithms well. While some have acknowledged this need, it has failed to slow down investment.

This exuberance over artificial intelligence is not so much automating work as making it more precarious. Beyond Mechanical Turk and UHRS, we also see the rise of the “gig economy,” which erodes the labor protections that once existed. Uber is far leaner than any traditional taxi company because most of the people who work through it do not work for it, thereby avoiding the kinds of onerous regulations that once posed a barrier to market entry. Countless companies seek to be the next “Uber for X,” breaking jobs apart into separate tasks and claiming that the automated matching component they provide is the source of the majority of the value. These two-sided market platforms know that maximizing the satisfaction of the buyer side, such as Uber riders, allows them to reduce the bargaining power of the supplier side, such as Uber drivers, who depend on the platform to make a living. Even those acting as suppliers of massive data collections become irrelevant in the long run according to this philosophy, since they are ultimately just powering the algorithm that will eventually replace them.

The Risk Of Excessive Automation

This preference for the mythical powers of capital over labor creates an incentive for managers to reframe existing jobs in ways that increase their automation potential. While new purpose-driven organizational theories are arising that attempt to leverage the whole person rather than particular skills, it is hard to imagine these becoming mainstream. It is more likely that management will make tasks more explicit, separate workloads more neatly, and develop more precise metrics in order to enable automation. Even if the technology to deliver on this never arrives, the armies of low-paid data laborers accessible today will let managers lower their overall investments in labor, creating the appearance of a leaner enterprise and temporarily inflating profits and share prices.

Clear downsides arise from devaluing the experience of workers and attempting to make their labor digestible for machines, as Tesla and Boeing have already learned the hard way. There is value in the tacit knowledge of human workers, whose understanding of their particular roles goes beyond pure metrics and is hard to quantify. If workplace efficiency were derived entirely from explicit metrics, “work-to-rule” would be a much less effective form of industrial action than it is. Companies that do not grow in a digital world by evolving symbiotically with investments in data analytics and artificial intelligence technologies that extend their efficacy will find themselves with operational nightmares.

Some of these problems might be easy to spot, such as reduced production efficiency, or PR scandals caused by problematic output slipping past human oversight. Others are more subtle. Deferring to algorithms in particular niches of an organization may prove highly efficient, encouraging managers to adopt automated tools aggressively on more and more projects and to give them more control over the organization. Given the weakness of existing AIs at establishing causality, the correlations they exploit may hold well at one scale but break down completely at larger ones. Avoiding this requires domain expertise, which in turn requires exactly the kind of investment in labor that people fret is not being made.
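
One standard toy illustration of correlations failing across scales is Simpson’s paradox, sketched below with invented numbers: a relationship that holds within every subgroup reverses once the subgroups are pooled, and nothing in the data itself tells an algorithm which level is the right one to act on. That judgment takes domain knowledge.

```python
# Toy illustration of Simpson's paradox; all numbers are invented.
# Within each branch, the "treated" option looks better...
groups = {
    "branch_a": {"treated": (8, 10), "control": (28, 40)},  # (successes, trials)
    "branch_b": {"treated": (12, 40), "control": (2, 10)},
}

for name, g in groups.items():
    t = g["treated"][0] / g["treated"][1]
    c = g["control"][0] / g["control"][1]
    print(f"{name}: treated {t:.0%} vs control {c:.0%}")  # treated wins in both

# ...but pooled across branches, the comparison flips.
t_succ = sum(g["treated"][0] for g in groups.values())  # 20 of 50 -> 40%
t_tot = sum(g["treated"][1] for g in groups.values())
c_succ = sum(g["control"][0] for g in groups.values())  # 30 of 50 -> 60%
c_tot = sum(g["control"][1] for g in groups.values())
print(f"pooled: treated {t_succ / t_tot:.0%} vs control {c_succ / c_tot:.0%}")
```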

This complete deference is encouraged by the growing field of prescriptive analytics, which uses advances in AI to extend traditional business analytics. Prescriptive analytics promises to go beyond prediction and create a truly end-to-end system for decision-making. Removing human input as much as possible is seen as making corporate activity more data-driven and scientific, when in reality it offloads authority to algorithm designers. The new approaches to analytics that have arisen with the recent explosion in data have nonetheless had serious methodological issues. Continued growth in the analytics industry, if it capitalizes on management naiveté, encourages poor decisions.

These risks are sustained by a narrative that undervalues the role of human labor and takes automation as a given. They are also harder to address through conventional liberal policy mechanisms, which are not geared towards correcting the voluntary decisions of private sector actors.

Alternative Visions For Automation

The real threat of automation stems from excessive faith in emerging technologies. The power of these technologies and the inevitability of their impacts need to be challenged for a more honest discussion to arise. Even if AI does not enable the automation of whole sectors of the economy, ambitious action is needed to account for the changing nature of work, to protect against the erosion of labor rights, to protect competition, and to prevent a few companies from gaining outsized control over AI and thereby exacerbating inequality. Taking the labor-saving power of artificial intelligence as a given distracts from these conversations.

Breaking away from this narrative is not about coloring it with hope. The calls for Fully Automated Luxury Communism take an even more naive approach to new technologies, one that discounts the true value of human labor to an even greater degree. Discussions that paint the labor-saving power of automation as something to be celebrated do nothing to change the incentives that are leading to poor investment decisions and bad policy proposals.

In Germany, unions have taken up the mantle of creating a new narrative around automation, one that respects the skills and rights of workers. Having labor involved in the automation process is not about worker control, but about allowing technologies to be implemented in ways that complement human skills, augmenting work and leading to genuine productivity gains. Improving the conversation around automation starts with improving attitudes towards labor.

Those who actually do the work know the limitations of metrics and understand that fuzzy logic is sometimes necessary for good decisions. The reality is that we often know more than we can explain and render explicit. Automation is a threat only because we believe it to be a threat, but it would stop being one if we acknowledged just how underrated humans are.

Ryan Khurana is an associate editor at Palladium Magazine. He tweets at @RyanKhurana.