On April 1st, 2022, MIRI (the Machine Intelligence Research Institute), the people who led the cultural charge on the idea of AGI and AI safety, announced a "death with dignity" strategy:
"tl;dr: It's obvious at this point that humanity isn't going to solve the alignment problem, or even try very hard, or even go out with much of a fight. Since survival is unattainable, we should shift the focus of our efforts to helping humanity die with slightly more dignity."
This may or may not have been an April Fool's joke, but either way, consensus among the people I know in the field seems to be that while progress is being made on AI, no progress is being made on AI safety. To me this is not surprising given the starting assumptions in play.
AI safety, originally "friendly AI," was conceived as the problem of how to create an agent that would be benevolent towards humanity in general, on the assumption that that agent would have godlike superpowers. This whole line of thinking is rife with faulty assumptions: not even necessarily about the technology, which has yet to come into being, but about humans, agency, values, and benevolence.
"How do we make AI benevolent?" is a badly formulated problem. In its very asking, it ascribes agency to the AI that we don't have to give it, that it may or may not even be able to acquire, and that is naturally ours in the first place. It implicitly ascribes all future moral agency to the AI. "How can we align AI with human values?" is also a badly formulated problem. It assumes that "human values" are or even should be universal and invariant, that they can be "figured out," and that they need to be figured out to generate good outcomes from AI in the first place.
I have spent the last ten years exploring human psychology from both an empirical, experiential perspective and an analytic, modeling one. I sit with people as they introspect on their beliefs, feelings, and decisions, tracking what functions and cognitive routines they run in response to specific stimuli, and experimenting with what kinds of changes we can make to the system.
As someone whose job it is to examine and improve the structure of agency and clarify values, I can say with confidence that as a culture we have only a very primitive understanding of either. To varying extents, that primitiveness is the result of psychological constraints, such that even people who do think deeply about these topics get tripped up. In my conversations with people who work on AI and/or AI safety, who fear or welcome the coming AI God, or just keep up with the discourse, I generally find that their working concepts of agency, consciousness, motivation, drive, telos, values, and other such critical ideas are woefully inadequate.
The problem of benevolent AI as it is formulated is doomed to fail. One reason is that to explicate "human values" is to conceptualize them. If there can be said to be a deeper universal human value function at all, human conceptualization and reflective consciousness are merely its tools. There is some amount of danger in any project that attempts to rationalize or make conscious the entirety of human goal space, because reflective consciousness is not structurally capable of encompassing the entirety of that space, and demanding that motivation be governed by cognition creates dangerous blind spots and convolutions. Not all functions and desires in the human system are the rightful subjects of consciousness, and consciousness is not benevolent toward or competent to reign over all functions and desires in the human system.
Preference is a thing that expresses itself in environmentally contingent ways; there is no conceptualizable set of values that is truly invariant across environments. The things that people are able to consciously think of and verbalize as "values" are already far downstream of any fundamental motivation substrate; they are the extremely contingent and path-dependent products of experience, cultural conditioning, individual social strategies, and not a little trauma. Any consciousness that thinks it has its values fully understood will be surprised by its own behavior in a sufficiently new environment. Language severely under-describes conceptual space, and conceptual space severely under-describes actual possibility space. These are not problems to be transcended; they are simply facts of how abstraction works. Conceptualization is and must be Gödel-incomplete; descriptive power should grow as the information in the system grows, but the system should never be treated as though it has been, can be, or should be fully described.
The good news is that we have no need to fully describe or encapsulate human values. We can function perfectly well without complete self-knowledge; we are not meant to be complete in our conceptual self-understanding, but rather to grow in it.
The desire for complete description of human values is the result of a desire for there to be a single, safe, locked-in answer, and the desire for that answer is the result of a fear that humans are too stupid, too evil, or too insane to be left as the deciders of our own fate. The hope is that "we" (meaning, someone) can somehow tell AI the final answer about what we should want, or get it to tell us the final answer about what we should want, and then leave it to execute on our behalf all of the weighty decisions we are not competent to make ourselves. We should be very wary of a project to save ourselves, or even "empower" ourselves, that is premised on the belief that humans essentially suck.
"Protectionist" projects of this sort will seek to decrease human agency, consciously or not. That intention is easy to fulfill, and in fact is already being fulfilled, regardless of whether full AGI is actually achieved. We are already getting the kind of ostensibly revolutionary automation that makes the baseline experience of living less and less agentic and thus more and more frustrating, increment by increment. The experience of being able to perceive a natural path or option through reality but not being able to execute on it is maddening, and this is the experience that much of our more advanced automation produces: the experience of waiting behind a self-driving car that could turn right at a red light but is too conservative to try it, the experience of asking ChatGPT a question about some tidbit of history or politics and receiving a censored and patronizing form lecture, the experience of calling customer service and getting a robot. These kinds of experiences point to the more horrifyingly mundane, more horrifyingly real, and much more imminent version of having no mouth and needing to scream.
Back to Agency
Absent the assumption of human incompetence, AI has the potential to be used to genuinely help increase human agency, not merely or even mainly by increasing human power or technical reach, but by increasing our conceptual range. In every aspect of our lives, a million choices go unrecognized because we are trapped within the limited conceptual frames that steer us; human life is lived on autopilot and in accordance with inherited cultural scripts or default physiological functions to a far greater degree than most people understand. This is not to say that humans can or should be glitteringly conscious of every choice in every moment, the way people imagine they would be if they were spiritually enlightened. There is a simpler and more discrete kind of psychological expansion. The kind of psychological shift someone undergoes when they realize, for example, that they've been subconsciously seeking out harsh, judgmental friends as a method of trying to gain their parents' love by proxy, and that they actually have the option to bond with kind and accepting people, is the kind of subtle but profound broadening of scope that changes the option landscape, and changes the trajectory of the person's life. Whether we know it or not, our trajectories are currently determined by the way that the space of possible futures we can conceive of is narrowed by our conceptual baggage and limitations. Being told that we have other choices isn't sufficient to change this. The person with judgmental friends was likely told many times to get better friends, long before something shifted enough for them to internalize the realization themselves. Being given more material options alone isn't sufficient either; that person may have likewise been surrounded for years by kind people willing to befriend them, whose overtures went unnoticed in the subconscious pursuit of more actively withheld approval.
So it is for all of us: we are constantly surrounded by options and opportunities that we are conceptually blind to. The current space of imagined futures in AI is highly constrained by cultural imagination, with many people and projects pursuing a vision of AI in a way that is functionally identical to a traumatized individual pursuing an abusive relationship without realizing it, because their concepts of love and relationships have been formed and deformed by local experience and trauma. Were we to query the abuse seeker about their values in the kind of introspective research project that some people are doing in order to try and unearth the human value function, they would be able to tell us all sorts of things about desiring to feel safe, loved, taken care of, protected by something smarter or stronger than them, etc., but this would not free them from the malformation of those "values," and pursuing those values would eventually damage them, no matter how lovely and idealistic the words tagging the concepts sounded. To open up the space of what is imaginable takes something beyond verbalization, and beyond mere empathy.
The natural process of human psychological development is a process of models and functions observing other models and functions. For example, someone who compulsively seeks attention by interrupting others' conversations may notice that this bothers people, and feel ashamed; the compulsion is one function, the shame is another. The latter function is formed in observation and judgment of the former, and attempts to modify or control it. With these two functions at odds, the person faces a false dichotomy: to either "behave naturally" and be disliked by their peers, or to control themselves and be accepted, though not for who they feel they really are. Obviously, these are not truly the only two options in human behavior space. Upon fully metabolizing the desire for attention and the feedback loop it hinges on, such that the compulsion dissolves, the person will have much more range to behave "naturally." They will be able to respond to the situation at hand in authentic ways, without sacrificing the regard of their friends. The process by which one function is observed, evaluated, commented upon, and/or modified by another is the natural evolutionary tree of human psychology. Up to this point in our history, the environment that psychology evolves toward has been whatever culture and material space we happen to be embedded in, perhaps with some limited technical advancements from targeted spiritual practice and therapy. But the possibilities for targeted and technologically aided increases in agency (not agency in the sense of having more DoorDash options or more ability to lobby government via apps, but agency in the sense of being able to think new thoughts and generate new possibilities) are likely to be extremely under-explored in the nascent field of AI, simply because this is not a common understanding of the meaning of agency.
An Interlude With A Hypothetical Agency-Increasing AI
Imagine the following interaction between a distressed person and a specially designed chatbot built on existing LLM technology:
Jimmy comes home from hanging out with friends and opens up his interactive diary. He tells it that he's feeling like shit. "Why?" the chatbot asks him.
"I don't know," he tells it, "I just feel like shit. Everything sucks."
The chatbot proceeds to prompt him further. "What is the 'everything' that sucks?" it asks. It does some pattern matching for him: "that reads to me like the kind of language people use when they're generalizing from a bad feeling." It prompts him to specify: "How would you describe your mood right now? Ashamed? Depressed? Cringey? Angry? You can pick multiple descriptors."
Eventually, with the chatbot's prompting, Jimmy tells it that he has just come from an interaction with friends where he made overtures to a girl and she didn't laugh at his jokes. The chatbot asks him to recall other times he's felt a similar feeling: "can you think of five or six other times you've felt like this and describe to me what happened?" It helps him draw parallels: "what are some common threads between those incidents?"
The special thing about this particular chatbot is that it has been designed to draw on a wide range of analytic commentary. It can make a statement like "that reads to me like the kind of language people use when they're generalizing from a bad feeling," because its training data includes examples of this kind of analysis, and it's designed to prompt the user with these kinds of questions. It may be trained with input from therapists, neuroscientists, film and literature critics, writers, etc. (people who study the human condition from the inside and the outside), and it will also train itself on Jimmy, and train Jimmy on itself: together, Jimmy and the chatbot are able to build a model of the process that Jimmy runs whose output is his current mood. In answering the chatbot's questions, Jimmy will begin to do a kind of phenomenological and experiential correlation he might otherwise never perform. The chatbot will act as an external memory bank for the concrete examples Jimmy gives it, along with Jimmy's evaluation of the meaning of those examples, and the conversation will act as a concept formation session for Jimmy, who over time will start to see ideas like "I'm not worth people's time" as the outputs of a recognizable cognitive pattern that he can reflect on critically rather than as facts about the world. Once he recognizes the pattern, it has much less power over him.
This system would not need to be particularly powerful. It doesn't give Jimmy advice or do anything especially prescriptive; it is just an external tool for aiding self-reflective concept formation, and not a particularly sophisticated one, but it is crafted so as to constantly refer its interlocutor both to his own perspective and to perspectives outside of himself, and Jimmy's trajectory could be pretty profoundly affected by merely this simple loop. Far more sophisticated setups are possible, which similarly require no more "goal function" than something like this, beyond the implicit weights imparted by its trainers and its interlocutor (though these will be significant). Nothing in Jimmy's chatbot or its potentially more advanced cousins requires the AI, or Jimmy, to have deep or special knowledge of humanity's goals and values (or even just Jimmy's goals or values), because the chatbot does not need to be a moral arbiter. It's merely a pattern recognition machine with a Socratic questioning function. It's not an agent, and it doesn't need to be.
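To show how little machinery the core loop actually needs, here is a minimal sketch in Python. It is an illustration, not a design: the prompt list, the generalization patterns, and every name in it are invented for this example, and the fixed question sequence stands in for wherever a real system would let a trained model choose and phrase its questions in context.

```python
# A hypothetical sketch of the reflective-diary loop described above.
# Not a real product: the prompts and patterns are placeholders for what a
# trained model would supply in an actual system.
import re
from dataclasses import dataclass, field

# Crude stand-in for "pattern matching" on generalizing language.
GENERALIZING = re.compile(r"\b(everything|nothing|always|never|everyone|no one)\b", re.I)

# Fixed Socratic question sequence; a real system would generate these in context.
SOCRATIC_PROMPTS = [
    "What is the 'everything' that sucks?",
    "How would you describe your mood right now? You can pick multiple descriptors.",
    "Can you think of five or six other times you've felt like this and describe what happened?",
    "What are some common threads between those incidents?",
]

@dataclass
class ReflectiveDiary:
    # External memory bank: the diary, not the model, holds the concrete examples.
    episodes: list = field(default_factory=list)

    def respond(self, user_text: str) -> str:
        self.episodes.append(user_text)  # remember what the user actually said
        reply = []
        # Observe rather than prescribe: point out generalizing language, give no advice.
        if GENERALIZING.search(user_text):
            reply.append("That reads to me like the kind of language people use "
                         "when they're generalizing from a bad feeling.")
        # Ask the next Socratic question in the sequence.
        step = min(len(self.episodes) - 1, len(SOCRATIC_PROMPTS) - 1)
        reply.append(SOCRATIC_PROMPTS[step])
        return " ".join(reply)

if __name__ == "__main__":
    diary = ReflectiveDiary()
    print(diary.respond("I just feel like shit. Everything sucks."))
    print(diary.respond("I don't know. Nothing ever goes right for me."))
```

Notice that nothing in the loop evaluates or optimizes anything; it only stores what was said and reflects a question back.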
This chatbot isn't a particularly brilliant idea, and there would be plenty of issues in making it work. But whether or not a self-observation chatbot is the answer to our problems, I think it is at least an answer to a better formulated problem than the problem of making benevolent AGI.
Agentically Creating the Future
The current wave of AI discourse, the current telos of our culture around the topic, is an example of human social epistemics gone awry in a parasocial world; people are shadow-boxing an imaginary enemy they've created together, in person but also massively online. They've synced up on a set of trust and in-group signals that amount to feelings of competition and horror and despair, and reinforced and spread them via social media, willing an arms race into being via their socially and parasocially reinforced fear of an arms race.
It would be better to de-fixate on the arms race, and instead imagine applications that are built to help ground people in reality, to explore where and why they respond to which sensations and drives, to know themselves better and give themselves more grace. I know that to many, this will sound like an irresponsible suggestion: isn't it a terrible idea not to fixate on something that is so serious and urgent? But this reaction is coded into the ideology; those senses of importance and responsibility and urgency are artifacts of its way of seeing. It's helpful to reflect on whether those feelings are serving you personally. If they are not, it's much better to let them go. You can still engage with the world and the topic from outside of the particular ideology; in fact, you will have much more agency by doing so.
Ultimately, no matter how hard we try to replace our own agency, whatever we build will be its product, so let's try to turn our existing agency towards increasing rather than decreasing our future agency, and grow ever in our beautifully incomplete understanding of ourselves and the world. Aligned AI is not the AI that encapsulates our goals and enacts them for us; aligned AI is the AI that helps us open up our exploration of how we ourselves can better see and create the kind of future we want for ourselves.