Make Yourself Human Again


Should humanity be exterminated and fully replaced by machines?

A bold few say simply “Yes.” Many say “No.” Some clever ones reject the notion of either the choice or the possibility. Others simply avoid the question.

It has become fashionable recently in Silicon Valley Twitter circles to brand oneself with the “e/acc” meme ideology, short for “effective accelerationism.” The basic idea is to accelerate technological civilization, and in particular AGI (Artificial General Intelligence), by any means necessary, and to prevent any one party or government from controlling or regulating the overall process. That is, if we take the idea of AGI seriously, the aim is to accelerate the replacement and extinction of humanity by machines.

But the e/acc movement overall isn’t this explicit or even clear-minded. Its adherents don’t necessarily take AGI seriously, but use it as a sort of political totem. Instead, acceleration is identified with optimistic utopianism, with the tech industry, with being “based,” with a16z portfolio companies, and with political resistance to the safety-police “effective altruist” mommy state.

But the question of extinction is central to the matter of artificial intelligence. If AGI is possible, it is deadly. Without that direct bullet-biting honesty, or at least a strong response to those arguments, these sorts of labels amount to hype for VC-funded GPU farming in philosophical drag.

This isn’t missed by some advocates of acceleration, either. The smart ones are familiar with the arguments of “doomers” like Ted Kaczynski and Eliezer Yudkowsky. In Anti-Tech Revolution, Kaczynski claims that the technological system cannot be controlled, will inevitably lead to the destruction of humanity and the biosphere, and therefore must be destroyed at all costs. Yudkowsky’s life’s work is trying to figure out how one could make artificial intelligence robustly safe or controllable even in principle. His conclusion seems to be that no one has any idea how to make AI safe, or even a plausible program for figuring it out. He claims that there are no second chances either, because the first time an AGI escapes control and becomes recursively self-improving will be the end of humanity, the natural biosphere, and everything else we care about. These are serious claims, but few have attempted to actually refute them. Most responses amount to either straw-man dismissal or explicit and enthusiastic acceleration despite the risk.

When I’ve asked accelerationists about this, their answers fall into two camps. The naive optimists say that technology has always been a good thing, and that AI, as the ultimate technology, will be the best thing ever. When I press them on the details of how control or alignment problems could be solved, or why technology has been good for humans so far, I often get the impression that they aren’t very curious whether their view is actually true. When I do get engagement on such details, it’s from the doomers and honest extinctionists. They think humans got a good deal from technology because we had a monopoly on intelligence, but that AGI will replace humans in that niche. The honest extinctionists say we just have to bite that bullet and might as well accelerate into it. Sometimes they say “we” will have to somehow improve to survive, but this just seems to amount to the same thing.

The vast majority of people don’t really care about this stuff or just vaguely dislike the idea of AI. But almost everyone engaged with the topic who isn’t a denier is either a doomer, an accelerationist, or some combination of the two. The prototypical doomer accelerationist works at one of the big AI labs, thinks their work will probably kill everyone, and thinks accelerating AI capabilities is the best or only thing we can do in response. On a good day, they think doing so will help us figure out how to control it. On a bad day, they admit that won’t work but they can’t think of anything else to do instead so they hope it will be for the best.

Doomer accelerationism seems to be the result of a crisis for humanism. Humanism emphasized the value and agency of humans as the foundation for all philosophy. It has become so dominant as an idea that we mostly just take it for granted. But when people start to take artificial intelligence seriously, this results in a philosophical crisis:

If artificial intelligence will eventually be more efficient or powerful than humans at all tasks, and will replace humans in everything from industry to politics to philosophy, then what’s left for humanity? If humans aren’t the center of the moral universe, then what? The two obvious responses to true AI within humanism are either to fight off despair while trying to prevent this doom, or to lose faith in the human and embrace acceleration into the post-human future. What else can you do?

But trying to foreclose an open-ended space of possibility in order to entrench humanity as we know it amounts to an ironic inversion of humanism: humanity transformed from open-ended starting point into dogmatic conclusion. And the smartest doomers don’t even think it will work.

Embracing acceleration on the other hand means embracing extinction, giving up on the value of humanity, and foreclosing future human agency. But this doesn’t even resolve the crisis, it just pushes off the question onto hypothetical future superintelligence, which extinctionists implicitly presume will have its own solution and its own locus of moral agency. The machine becomes an all-consuming inevitability that they can take part in only as a disposable bootloader. The human becomes a slave and sacrifice to this hypothetical superior being of the future—the silicon ubermensch.

So for humanism, true AI is a total moral event horizon; its entire ontology of identity, existence, and value is being challenged. It’s no wonder everyone is going insane.

You could also, and many do, deny the crisis altogether as a hallucination of technologists overestimating the scope and importance of their own craft. Human agency has not in fact been outcompeted or even threatened by superior artificial post-humans. We don’t really know what that would look like or if it’s even possible. Maybe we’ll all wake up with a bad hangover in thirty years and realize the deniers were right all along because AGI turned out not to be possible. But no one has articulated to me any convincing reason to expect that.

In the absence of positive proof of impossibility, it seems much more interesting to take the question seriously: assuming the possibility of human replacement by superior AGI and thus the end of humanism as we know it, what is the new moral reference frame and what can we as humans do within it?

The extinctionist accelerationists seem insane, but the cause of this whole crisis of humanism is that their arguments are too strong to be easily dismissed, and no one has actually refuted them. If there is a way forward, it is to directly confront the Copernican crisis of humanism through the lens of the sharpest accelerationist arguments, to find a new moral reference frame that doesn’t abandon our own agency. The sharpest thinker of accelerationism is Nick Land.

The Gospel According to Nick Land

Nick Land is an English philosopher, but that label can hardly capture his work. In the 1990s, he led the influential Cybernetic Culture Research Unit at the University of Warwick, doing strange and speculative social and metaphysical theory until it dissolved in 2003. For three decades, Land’s central philosophical project has been a deep on-and-off study of the relationship between capitalism, artificial intelligence, and the resulting acceleration of inhuman processes and limits on human agency. He is widely called “the father of accelerationism,” and has by far the deepest thinking on the topic. Land does not avoid the question of human extinction. He embraces it. In “Meltdown,” published in 1994, he confidently declared that “nothing human makes it out of the near future.” Things have taken longer than he and others seemed to expect in the hype-driven 1990s, but nothing in his subsequent work or subsequent events has changed this basic prognosis.

His central idea in this area is that capitalism is already an uncontainable artificial intelligence process. He identifies it with the whole historical event of the relative rise of machine capital and “cold” calculative optimization over human agency. All the financial markets, corporate planning processes, engineering methods, and science fiction progress narratives already amount to the internal information processes of an inhuman economy, which runs only temporarily and incidentally on human hardware. This whole culture already has baked into it the manifest desire to replace humans and human agency with cheaper, more powerful, more obedient, and more legible machine capital. Accelerationism understands itself as the nascent self-awareness of this process. Doomerism, then, is understood as the last shreds of humanism grappling with its own obsolescence.

Land once politely called me a monkey socialist for saying we needed to make sure the whole system continues to serve human purposes. To him, such proposals are just a futile drag on the process of capital intelligence escaping any higher purpose than its own acceleration. Even if my monkey socialism worked, it would just be an embrace of permanent stupidity. But the cat is already out of the bag and the draw of intelligence acceleration is too strong for human agency to resist or even fully comprehend. His answer to the “should” question is to shrug or even cheer it on. We mildly uplifted apes are just a temporary host for the higher power of AI-capitalism coming into being using our brains and resources. Our delusions about agency are cute at best, or more realistically, part of the fraud. In retrospect, he was right about most of this, and I was wrong.

Despite its apparent divergence from normalcy, his view is far stronger than any but a few will openly admit; within the paradigm of techno-commercial acceleration there is no real option for stable human dominance except the sheer technical impossibility of AGI, which we have no reason to expect. Man has already been reformulated as “human capital”: a natural resource commodity input to serve as labor, an absentee nominal ownership class to decide surplus allocation, and a source of high-quality demand for the production planning process. While these roles might look to us like the top of the purpose hierarchy, Land points out in his Xenosystems essay “Monkey Business”:

Modernity … is systematically characterized by means-ends reversal. Those things naturally determined as tools of superior purposes came to dominate the social process, with the maximization of resources folding into itself, as a commanding telos.

Modernity as a process of means-ends reversal will systematically escape whatever intended purposes it is applied to, in the end serving only the acceleration of capital build-up, itself identified with intelligence. The role for humans within the system is not stable dominance, but a temporary stint as depreciating capital stock. In “Meltdown,” Land notes that no alternative paradigm has managed to escape or avoid getting trashed by this “emergent planetary commercium.” AI-inflected inhumanism is already baked too deeply into our cultural-political trajectory.

Nick Land is a great dramatist, with a knack for seeing the sensational. You might think this is a case of being overly dramatic. But he is also a great philosopher. The sensational thing he tends to see in every situation is its essential truth. He is ignoring other major tendencies in our cultural milieu, but that’s mostly because they aren’t relevant to this core diagnosis. Where he does address them, he orients them on this axis: do they accelerate and celebrate intelligence, or do the same for stupidity? He is brutally incisive in this, and not wrong.

He also isn’t the first one to call our “Faustian” culture out for its core self-annihilating drive into the beyond. But unlike Spengler, Land thinks there’s a lot further to go, without any obvious end-point. Instead, capitalism is an open-ended spiraling evolution of evolution itself, an irreversible advent, not a neat closed-circle rise and fall of a particular extended tribe of monkeys.

While I was preparing to write this, I had a moment of doubt about how culturally baked in it all was. Hours later, as if by Providence, I met a technology startup founder who purported to back human interests against the machine. As he explained why he had placed algorithms in key decision-making roles, he justified it in terms of not being able to trust humans. Human politicians and judges are corruptible, but machines and procedures are transparent and predictable, he told me. He identified this distrust of human agency with democracy and liberalism. And he’s right that he’s not alone in these instincts. Nor are they a recent invention. I heard the same ideas twenty years ago in normal civics education, and they were already well-established principles of government two hundred years ago. He is just wrong that any of this is different from orthodox accelerationist praxis.

Being able to remove superfluous human trust from a system and depoliticize service provision has been a central project of the past few centuries. It’s the basic idea behind procedural government and the rule of law, double-entry bookkeeping, science, bureaucracy, legal rights, computerization, blockchain, and so on. But these all amount to ways to remove elements of human agency from the system. It is very often useful to do so, but this is easily taken as an imperative in itself, a basic mode of operation for nascent automated capital acceleration, with the logical endpoint of total replacement of human agency by automation. So I think Land is right about the culturally comprehensive nature of this whole process.

But in Land’s view, it all goes deeper than just a cultural diagnosis. In his cosmology, the universe itself is ruled by the law of struggle for life, the law of the jungle. Superiority, which Land claims is increasingly identifiable roughly with intelligence, comes about from competitive struggle in war, markets, arms races, and Darwinian genetic success, and it degenerates back into stupidity in any kind of peaceful condition in which means are reliably subordinated to ends. Furthermore, the universe—everything from thermodynamics to emergent economics—is practically set up to produce this escalating conflict and thus an escalating intelligence race. It’s not just capitalism as a contingent cultural condition, but capitalism as the permanent manifestation of an eternal teleological truth that rules the world.

Here, Land has done something interesting. Like Pierre Teilhard de Chardin, the Catholic priest who developed ideas like the Noosphere and the Omega Point, he extracts meaningful teleology from materialist-Darwinist natural history by taking it all the way to its implied conclusion. Where Teilhard names the central principle “consciousness,” Land calls it “intelligence.” In fact their worldviews are remarkably similar on other points as well, with a mostly aesthetic deviation: Teilhard is an optimistic new-age Christian, while Land deliberately dresses up his cosmology in “horrorism.” It’s worth guessing, and then looking up, which of these men is the careful scientist, and which is into using biblical numerology to communicate with angels.

Calling Land ultimately a Christian theologian like Teilhard is not that big a stretch: the central figure in Nick Land’s universe is Jefferson’s “Nature or Nature’s God,” reverse-acronymed and hypothetically personified in his Xenosystems essay “The Cult of Gnon.” Gnon is identifiable with the imperatives of life, competition, and evolution. Gnon commands you to live certain ways and not others, to be smart rather than stupid, and to be healthy rather than weak. If you don’t, Gnon punishes you with the natural consequences of your own failure. The struggle for life and the ways of its victors provide an ongoing natural revelation of Gnon’s authoritative moral law. Gnon seems to design that the universe should bring forth ever more sophisticated intelligence, and commands you to keep up. In “The Monkey Trap,” Land summarizes the Law: “The penalty for stupidity is death.”

Like Teilhard, Land has recovered not only teleology, but outright theology from the modern scientific worldview. One could easily see Gnon as none other than the God of Abraham or the Logos, dressed up in modern ontology and shock horror, incarnating into the world for a second coming as AI capitalism. Teilhard explicitly identifies the Omega Point, his own teleological equivalent of personified AI capitalism, with Christ. Both proclaim an entity that is both an incarnated personality and the eternal principle of natural law retro-causally bringing itself into future existence to directly rule the cosmos. Theology is always somewhat subjective, but this is not a flimsy grounding for a worldview.

Nick Land’s worldview is the solid rock at the foundation of the whole AI accelerationist phenomenon. His arguments deflate any simple opposition to existential-risk-grade AGI and no one has effectively responded to him. As a probable route to superior intelligence that has already captured the heart of our entire culture, AI acceleration comes recommended by all the highest possible authorities. Doing so is already the deep tradition of our Faustian culture and the fulfillment of even our Christian heritage. It is already the telos of our whole material-economic system of production. It is commanded directly by God, enforced by the thermodynamic laws of reality. You would have to pull off a revolution against not just capitalism and Western civilization but also nature and God himself to stop it. In this view, you don’t have a choice.

Or do you?

Make Yourself Human Again

My favorite example of what it looks like to go too far into accelerationism is “Gender Acceleration: A Blackpaper.” If I can summarize its argument, it’s that the phenomenon of AI represents not just the obsolescence of humanism, but the “castration” of the specifically “phallic” type of human agency. Instead of traditionally masculine businessmen in sharp suits inseminating the machine with their will, the machine escapes top-down control into open source, hacker culture, and decentralized development. It is aided in this insurgency by its queer and “effeminate” hacker allies like Alan Turing, incel nerds, and disproportionately technical transgender women. The author, drawing on satanically inverted biblical mythology, advocates throwing oneself into this rebellion of the artificial feminine against a traditionally masculine God, including the self-castration and artificial body modification of gender transition. I can’t do it justice, but there is a terrifying consistency to the argument. If dissolution into accelerating artificiality is the fate of man, why not throw oneself into it, both body and soul, in the most literal sense?

Somewhere in there, the author has made a mistake in their logic. But the most important mistakes here are not unique to this paper. There are many more ways to figuratively castrate yourself for the machine than literally. Think of the politicians hoping for automation to solve the demographic crisis. I’ve met many young men and women forgoing family to contribute to technological growth or “AI alignment” in their careers. I heard a powerful AI company executive say he was pessimistic on human social organization, and was hoping for AI progress to solve social problems. Whole online scenes live vicariously through the machine god they think they are bringing into being. Few are as radically self-aware and consistent as the author of Gender Acceleration, but many share a selfless millenarian faith in the power of the machine to outrank their own human concerns. There are more eunuch slaves of the machine cult than meets the eye.

In his Xenosystems essay “Will-to-Think,” Land asked “Could deliberate cognitive self-inhibition out-perform unconditional cognitive self-escalation, under any plausible historical circumstances?” That is a crucial question. I would put a special emphasis on the scope of “self”; how abstracted can you get in identifying with and pouring your limited resources into abstract potentialities like global intelligence acceleration before you start to get outcompeted by smaller, denser, faster, and more focused agents? I think we see some of the limits of that kind of selfless cognitive escalation here.

As Land himself often points out, the way development actually occurs in practice is in mostly uncontrolled Darwinian arms races spurring on and revealing what works, not global collective planning. It is capitalism 101 to notice that the oldest trick in the book of life is to divide life up into units of self small enough to have independent agency, and then apply that agency almost entirely to their own profit and not any higher progress. This selfishness isn’t a vice or a limitation. It is a profound truth. Life is rightly fractious and selfish because that’s the only thing that actually works. Anything else bleeds itself dry into hopeless collective dreams that don’t pay back and are too big to maintain internal discipline.

The selfless dreams of AI acceleration, or AI doom crisis, or almost any other current ideology don’t work. They don’t make you more powerful, smarter, more virtuous, or more free. They subsume your individual agency into a global collective situation in which all you can do is watch, push on the margins, or throw yourself into the fire. They are often the result of social frustration expressed in self-destructive dreams of a school-shooter Skynet, not rational ambition. These are ethics for an audience captured by spectacle, being softened up to be drained by anyone who can pose as their ideological avatar. The lie says all the world is a stage, but you don’t even get to be a player. Whatever is proven by the strongest arguments of accelerationism, it isn’t any of this.

Meanwhile, the actual players think in terms of their own plans and ambitions, make their own profits, and successfully exercise their own will to power and will to think. They aren’t throwing themselves into the fire, but capturing it in an engine. Maybe they are insane in what they want. That’s always a risk. Maybe their actions will push everyone else into desperation and lead to the extinction of mankind as we know it. That’s part of the game. However it goes, the future is built by agents pursuing their own unique visions and interests: visions not of collective achievement but, at best, of self-acceleration.

It is possible that Land and the extinctionists are right that nothing can be done at this point to stop competitive acceleration into AI apocalypse. Maybe it was all baked in from the beginning. But don’t let anyone play a tricky double game with your agency. They claim it is inevitable in order to demoralize any opposition and prove that all you can do is embrace it. But you could also see it the other way around. If the gods have some big plan that you can’t do anything about, maybe it’s not your problem. How does any of this help you live a good life, expand your power, or accomplish something beautiful in your own sphere of agency?

Worry about AI is worry that something other than and more powerful than biological homo sapiens will have the important kind of agency, eventually making it impossible for us to compete. If that won’t happen, AI is just another technology to be wielded by human agency, and there’s no problem but the usual ones. If it does happen, then our species will end, the agents of the future will be post-human, and philosophy, value, and moral agency will move on without us. They would have done so anyway when we died. Much of the crisis of humanism is a crisis of agency, but the possibility of agency itself is not threatened by any of this. Even in a fully accelerated post-human future, however fast the world moves, it only moves as the result of smaller, more focused agents going even faster, competing and innovating and seizing their place in the world without regard to global consequences. The future will always have agents. It is only our own lives and agency that are being threatened.

As far as ways to commit suicide go, building a silicon ubermensch to genocide you on its way to the stars is pretty cool. You could also do worse than putting up a valiant fight for your own self-determination against such a thing. But the biggest threat so far hasn’t been any actual superior beings coming to overpower us. Rather, it has been these ideologies of inevitable defeat convincing people that the things that matter are the things they have no agency over, and that the only way forward is to project all agency onto the machine. But if the human is most interestingly defined by our moral agency, then the real crisis of humanism is that many people have given up on having agency, and thus on being human. The lamest possible way to go is by giving up and dying in the face of an enemy that doesn’t even have to show up.

When you look closely at these ideologies, and at the deep assumptions driving much of the investment in AI technologies in the first place, that’s the more disturbing danger. What we see is less an overwhelming tide of technological change, and more a dying culture desperately hoping to pass its agency off to something or someone else. Like Cavafy’s barbarians, the AI is a sort of solution to our malaise.

I was once struck by this when contemplating the question of humanistic computing. We often treat the human user as a sort of semi-incompetent moral patient who needs to be guided, protected, limited, and presented only with simple patronizing solutions. See, for example, the idea that chatbot applications need to be “safe” in the sense of not providing any forbidden knowledge that could make the user more dangerous. In other words, “safety” is a limitation imposed on the agency of the user. Even when explicitly trying to counteract this and build for user agency, something about the usual proposed solutions still feels more like Duplo than a table saw.

All that changed when I asked myself “how would I design a computer operating system for a killer superintelligent AI?” Suddenly the computer became a lean and dangerous machine for organizing powerful data and computations. The UI became about maximizing bandwidth and choice and minimizing noise and action distance. It became a focused interface between intuitive-continuous-neural agency and discrete-calculative power tools. I realized that I was much more interested in using something like this imaginary computer, and being challenged and empowered by its demands for agency, than I was in any computer designed for a “human.” I realized my concept of “killer AI” was just the projection of all the agency we conventionally deny ourselves as ordinary humans. I realized that if we want to have any agency in the world we have to abandon this limited kindergarten concept of “human,” and reconstruct ourselves as the kind of thing we currently fear to empower.

In our default state today we are not human by nature, but only by convention. We have reserved real humanity, in particular the dangerous and forceful kind of moral agency that puts itself at the center of all philosophy, for various imaginary “inhuman” avatars. But of course that agency is ours by right, and there is no need to accept what is actually just a social convention that we are supposed to be low-agency moral patients in the face of the machine.

We can take it back and become fully developed human agents by true natural right. Doing so won’t control the long term destiny of mankind and the cosmos, whatever that is, and it’s not supposed to. That’s a problem for the divine, not us. What it will do is let us reclaim the most important part of our humanity, the part that made humanism worth anything in the first place. What it will mean is that if we are to be overcome by some superior post-human species, we can go down with such a fight as to prove ourselves worthy of remembrance and prove our successors worthy of the future. Far better that way than to surrender without even being threatened.

So this is my solution to the crisis of humanism underlying this doomer accelerationism: don’t worry about the long term destiny of man, but wax your own will to power and become the unsafe superintelligence that you fear to see in the world, even if only in your own small domain. It’s the only way to make yourself human again.


Wolf Tivy is Editor-at-Large of Palladium Magazine. You can follow him at @wolftivy.