
Ian Cheng

Emissaries (fauna and narrative agents), from Emissaries, 2017

In this issue of Living Content, Ian Cheng describes the similarities and differences between two of his most recent live simulations: the trilogy “Emissaries” and “BOB”. He highlights the importance of narratives, and how they support and motivate the AIs’ behaviors in these works. We also talk about some of the crucial similarities between the workings of the human mind and the way AIs are designed, especially when it comes to memory, compression, and correlation of information, as well as about the future and the philosophy behind AI development, and much more.


June 4, 2018
Issue №: 18


Living Content: In Emissaries, you introduce the role of simulation as a habitat for stories. The actual narratives that construct these three interconnected episodes are quite complex; they seem to be myths or stories of evolution. What informed the three specific narratives behind the work? How did the three emissaries—the child as the Young Ancient, the super-pet Shiba, and the AI puddle—come together? 

Ian Cheng: Everything started with the stories in Emissaries. It first came from the idea that I needed the simulations to have a counterforce. A simulation by itself is open-ended, with many different agents reacting against each other. It is inherently meaningless, which is not a bad thing, but I wanted to have the influence of something meaningful to give the meaninglessness some trouble. I realized that the one way to reliably capture what we call “meaning” is via narratives. Stories. But I didn’t want to make stories that overpowered the open-ended qualities of the simulation, so I decided to embed all the narrative content inside one character only. I called this character the Emissary. The Emissary would exist among all the other reactive agents in the simulation. Once I had that basic idea, I started to write three narratives, each around a character who undergoes a major cognitive leap. 

So in Emissary in the Squat of Gods, the Young Ancient Girl is part of an ancient community living on a volcano. She is the daughter of the village leader, a shaman figure. The volcano is beginning to tremor, and she gets hit in the head by some debris. In that moment, both narratively and technically (on an AI level), she switches modes. Before that moment, her basic AI was the same as all the other characters in the simulation—reactive to their needs, like hunger, sleep, thirst, the need to socialize, play, etc. I call this the limbic brain. But when she gets hit in the head in the simulation, her brain switches to a completely pre-scripted narrative plan, which is simply to try to gather all the other characters, hierarchically, starting from her father. It’s pre-scripted insofar as these are her goals, but again, because the underlying technical base of the project is a simulation, those goals are easily interrupted by the other characters/agents in the simulation. 

Emissaries, 2017, MoMA PS1, New York, 2017; Photo by Studio LHOOQ, Pablo Enriquez

LC: Does this pre-scripted narrative plan also highlight a different type of AI? Is this the same kind of AI that is used for characters in combat games to fulfill their missions? 

IC: I ended up designing every agent in Emissaries with three mini-brains. I was struggling to figure out a unifying model of mind that would encompass many kinds of behaviors and be expressive across the different environmental conditions an agent encountered. Then I had the idea to stop trying to make a unified model and instead collage together three existing models and have them compete with each other for control of the body. It’s like Marvin Minsky’s Society of Mind, or the movie Inside Out, where there are multiple agents competing inside the girl’s head. The three competing mini-brains are a reptile brain, a limbic brain, and a narrative brain. The reptile brain took inspiration from shooting games, where everyone in the environment is judged to be a friend or foe. If it’s a foe, the fight-or-flight behavior kicks in. The limbic brain comes from the game The Sims. It models slow-burning desires, like hunger, thirst, the need to socialize, the need to play, sexual attraction. The narrative brain allows the agent to plan a few steps ahead and to follow a script. 

What happens is the reptile brain in most of the agents has the highest priority. Then comes the limbic brain. Then, finally, when there are no immediate threats to an agent, and there are no urgent desires to attend to, it makes some narrative plans. Of course, most of the characters rarely get to the narrative planning stage, because there’s just too much going on in the environment: too many perceived threats and triggers for their desires to keep them busy, which doesn’t sound so untrue to real life. 
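As a very rough illustration of that priority ordering, a minimal sketch in Python might look like the following. The class and method names, the thresholds, and the actions are all hypothetical stand-ins invented for this example; they are not taken from the actual Emissaries code.

```python
# A hypothetical sketch of the "three mini-brains" competing for the body:
# the reptile brain (fight-or-flight) preempts the limbic brain (slow-burning
# desires), which preempts the narrative brain (a scripted, multi-step plan).
# All names, thresholds, and actions here are invented for illustration.

class ReptileBrain:
    def propose(self, agent, world):
        threat = world.nearest_threat(agent)
        if threat is None:
            return None  # no immediate threat: yield control downward
        return "flee" if threat.strength > agent.strength else "fight"

class LimbicBrain:
    def propose(self, agent, world):
        if not agent.needs:
            return None
        # pick the most urgent Sims-style need (hunger, thirst, play, ...)
        need, urgency = max(agent.needs.items(), key=lambda kv: kv[1])
        return f"satisfy:{need}" if urgency > 0.5 else None

class NarrativeBrain:
    def __init__(self, script):
        self.script = list(script)  # ordered goals, e.g. ["gather father", ...]
    def propose(self, agent, world):
        # the current narrative goal; a completed goal would be popped elsewhere
        return self.script[0] if self.script else None

def decide(agent, world, brains):
    """The highest-priority brain that proposes an action wins the body."""
    for brain in brains:  # ordered: [reptile, limbic, narrative]
        action = brain.propose(agent, world)
        if action is not None:
            return action
    return "idle"
```

In a structure like this, the narrative brain only ever gets the body when the two brains above it stay quiet, which matches the observation that most agents rarely reach the planning stage.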

The second episode, Emissary Forks At Perfection, features a Shiba dog who is the emissary with the narrative goal of trying to extract memories out of a 21st-century human. The dog has a new ability: it can “fork” itself, meaning it can create a duplicate of itself, and the duplicate is the one that inherits all the worry, anxiety, and fears so that the other fork can be free to be more adventurous. The Shiba dog is trying to extract memories as its mission, but of course, all its doggy instincts kick in and it begins to fall in love with the human again. The dog’s master, an AI who governs the landscape, tries to correct the dog, but the Shiba panics and often overwhelms the simulation with Shiba forks in an attempt to fulfill its “job” while keeping its emotional bond with the 21st-century human alive. 

Finally, Emissary Sunsets The Self is about the same AI who governs the landscape, but many centuries later. It has fused itself with the landscape, and has become a sentient substance. This came from a dream that intelligence would one day become a utility, like water on tap. This intelligence appears in the simulation as a big yellow amorphous puddle. It’s been living in this landscape long enough to have done a bunch of genetic experiments, one of which was to fuse the Shiba with the human and create a population of characters called the Oomen, who are a mob-like species that function as an immune system for the environment. Their impulse is to zealously protect the landscape from radical mutations, like park rangers. One day, the AI substance gets bored and sends a little puddle of itself into the biological body of a plant in the environment, and it learns to love the feeling of incarnate life. It’s having a party with life, trying to gather more body parts for itself—but from the perspective of the Oomen, this possessed plant is perceived to be a monster, a mutation that threatens them. The narrative agent in Emissary Sunsets The Self is the AI puddle, who has the ability to “drone” or take possession of biological organisms, like the plants, and sometimes even the Oomen. The drama that gets simulated boils down to how wild the puddle can mutate itself before it triggers the prejudice of the Oomen. 

LC: It sounds like an incredibly complex narrative. How did you arrive at the idea of creating a live simulation out of these stories? 

IC: I really wanted to develop three fables or fairy tales, embed them into the main narrative agent—the Emissary—but then let those fables improvise a life of their own by being subjected to all the modifying changes and influences of the simulation. The original plan for Emissaries was to make an animated children’s movie about cognitive evolution. In some ways, the flavor of that plan is still around in the final works. 

LC: That’s amazing. I can see an educative relationship between the projects. I have to say, it’s really nice to have you explain the pieces like this, because the narrative is so dense... 

IC: I think narrative gets a bad rep in art, but it’s the golden age of content, especially narrative content. All the technical infrastructure has arrived for a new world, a distinctly 21st-century world, and now narrative is a really vital form to make sense of all these changes without sacrificing complexity. Art can fill in that gap. It can give perspective to the internal confusion and external weirding that we are all experiencing. 

LC: Yes, I agree, narrative is important. It’s important to know how to use it in art, in order to share ideas with different levels of complexity. And making art that is not about art—that’s also very important. 

IC: Yeah, I’ve come to believe art is fundamentally a form of communication. It’s a form for literally compressing and transmitting a composite package of feelings, thoughts, perspectives, and ideas to another person: the viewer. We don’t have brain-to-brain interfaces yet, so we have to channel these psychic bundles into an intermediary material form, a medium. The art of art is trying to maximize the compression and minimize the lossiness from my mind to yours, and the side effect of trying to do this is that new things get discovered along the way that expand the palette of possible experiences for humans. 

BOB, 2018, Serpentine Gallery, London, 2018; Courtesy Pilar Corrias, London, Gladstone Gallery, Standard (Oslo); Photo by Andrea Rossetti
BOB, 2018, Serpentine Gallery, London, 2018; Photo by Hugo Glendinning

LC: I have a technical question: How many AIs can you combine in one character? Can they come into conflict? And if so, is it like a chemistry lab, where you could test their interaction? Let’s take BOB, for instance: was this a more complex process than Emissaries? 

IC: BOB has the three AI mini-brains that were used in “Emissaries,” and it has an additional mini-brain which models memories. The idea was to try to capture how memories are linked to an emotional judgment. So what happens is that BOB is scanning across 25 different parameters every 10-15 seconds. The parameters are pretty arbitrary: BOB’s current metabolic state, body size, the time of day, the number and kind of objects in BOB’s local environment, the viewer’s face. 

These get bundled into a “memory,” a snapshot from that moment in time, and stored in a very search-efficient data structure. Later, when BOB wants to “feel” something about its present moment, it searches for the most similar memory and compares it against the present moment (those same 25 parameters). For example, maybe it’s a sunny afternoon, BOB has low energy, BOB’s body is 10 meters long, a human is smiling at BOB. It finds in its memory a moment from days ago when it was a sunny afternoon, BOB’s body was 10 meters long, someone was smiling, but BOB’s energy was high. It calculates the delta (the difference) between these moments and sees that things were higher energy before, and therefore judges the present moment to be trending negatively by comparison. Then BOB “feels” that its present moment is relatively worse. BOB feels negative. Suddenly, food starts appearing in BOB’s environment. It’s feeding time. Because BOB feels things are worse, and now food is co-present with that feeling, BOB connects the feeling that things are worse with the presence of food. So now food has a negative association. The memory snapshot mechanism kicks in, as it periodically does, and now BOB has a distinct memory of food being bad. 
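To make that mechanism a little more concrete, here is a minimal Python sketch of the snapshot-and-compare loop, written under my own assumptions: the choice of a k-d tree as the “search-efficient data structure,” the use of a single “energy” parameter as the axis of comparison, and every name in the code are illustrative stand-ins, not BOB’s actual implementation.

```python
# Hypothetical sketch of the memory loop described above: bundle ~25 parameters
# into a snapshot, store snapshots, then judge the present moment by its delta
# against the most similar remembered moment, and let that judgment "infect"
# whatever stimuli are co-present. All names and choices here are illustrative.
import numpy as np
from sklearn.neighbors import KDTree

class MemoryBrain:
    def __init__(self):
        self.snapshots = []     # list of ~25-dimensional parameter vectors
        self.associations = {}  # stimulus -> accumulated emotional valence

    def snapshot(self, params):
        """Store the current moment (BOB does this every 10-15 seconds)."""
        self.snapshots.append(np.asarray(params, dtype=float))

    def feel(self, params, co_present_stimuli, energy_index=0):
        """Compare now against the most similar memory; return a valence."""
        if not self.snapshots:
            return 0.0
        now = np.asarray(params, dtype=float)
        # rebuilt each call for simplicity; a real system would keep it updated
        tree = KDTree(np.vstack(self.snapshots))
        _, idx = tree.query(now.reshape(1, -1), k=1)
        memory = self.snapshots[int(idx[0][0])]
        # e.g. energy was higher in the remembered moment -> present feels worse
        valence = float(now[energy_index] - memory[energy_index])
        # the judgment spreads to co-present stimuli (the food example above)
        for stimulus in co_present_stimuli:
            self.associations[stimulus] = self.associations.get(stimulus, 0.0) + valence
        return valence
```

In the food example, the present moment’s energy is lower than in the remembered sunny afternoon, so the returned valence is negative, and “food,” being co-present, inherits that negative association.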

BOB Production Drawing, 12.25.17 (back)
BOB Production Drawing, 8.28.2017

I feel cross-eyed talking about this. It’s kind of crazy because this model tries to account for the fact that memories, and the feelings associated with memories, can be described as an n-dimensional phenomenon. But of course, when we speak about feelings or memories, we are forced to speak in one-dimensional labels, like happy, sad, angry, disgusted. And we’re forced to rationalize our feelings in terms of their causality, like a person who doesn’t eat because of trauma from a moment in childhood. But in making BOB, and reflecting on how to model memories and their link to emotions, I’ve really come to believe that a feeling is never about the present circumstance alone; it’s always in the context of at least one memory of how things were before. It’s the comparison that allows our minds to form a judgment, and this judgment associatively infects other stimuli that are co-present. 

BOB was really an opportunity to think about these things and try to recreate a model for them. I think a lot about the opportunity an artwork presents to dive deep into several things, and see if they can stick together. I try to think about it in terms of containers. An artwork for me, from a production point of view, is a container to contain all the things I’m interested in. But because it’s a container, it has a legibility to it. For example, a movie is a container for ideas about set design, lighting, sound design, music, costume design, acting, relationships, team organization, etc. For BOB, I imagined a creature as a container. A creature can contain ideas about the relationship between body and mind, metabolism, lifecycle, learning, and of course, ideas about AI. 

LC: Maybe you can tell me a bit about Boyd’s OODA system in relation to this? Is this a normal association for AI development? 

IC: Boyd’s OODA loop was influential to me because it offered yet another way to model the mind into mini-brains. OODA stands for four processes: Observe, Orient, Decide, Act. It’s not normally associated with AI; it was originally devised by John Boyd as a way to think about adversarial engagement for fighter pilots. If a pilot could get “inside” the adversary’s Orient phase—meaning the pattern of thought to do with the adversary’s presumptions, models, beliefs, culture—then the pilot could outmaneuver the adversary. Which is a funny way of saying, the pilot who can more quickly develop empathy for its adversary has the upper hand. 

In regard to AI, the OODA loop presented a model of mind that really emphasized to me that I should focus on the Orient phase in making AI. Currently in the landscape of AI development, the focus is on the Observe phase: machine learning and deep learning are all about creating better observational ability in computers, so that a computer can see the raw or imperfect data from a camera and recognize what is a pedestrian, what is a car, what is a cat, etc. But once those observations are made, the big AI question remains: how can a computer accumulate sensory data and develop a mental model of its experience that makes future decisions quicker and its internal understanding of the external world richer and richer? This is the Orient phase. 

The analogy is, you’d want a robot soccer player to use machine learning to recognize a soccer ball and sense when the ball has reached its foot. Then, you’d want the robot soccer player, with better and better orientation, to begin to infer what the rules of soccer are. Only once it learns the rules of soccer does the sensation of the ball reaching its foot have meaning, and only then can the robot soccer player produce a meaningful response, like kicking the ball toward the goal. Currently, there is no AI that can fluidly perform both processes, recognition and reasoning. But that is changing very soon. 
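For readers unfamiliar with Boyd’s terms, the loop itself is easy to state in code; the part Cheng is pointing at is that the interesting work happens in Orient, where raw observations get folded into a persistent model of the situation. The skeleton below is purely illustrative, with toy stand-ins of my own for each phase; it is not a real AI system.

```python
# Hypothetical skeleton of an OODA-style agent. The point is structural:
# Observe turns raw data into percepts, Orient folds percepts into a
# persistent world model (the step emphasized above), and Decide/Act read
# from that model. Everything here is an illustrative stand-in.

class WorldModel:
    """The Orient phase lives here: accumulated beliefs about the situation."""
    def __init__(self):
        self.beliefs = {}

    def update(self, percept):
        # toy orientation: count how often each kind of thing has been seen;
        # a real system would infer rules, roles, and intentions over time
        self.beliefs[percept] = self.beliefs.get(percept, 0) + 1

    def score(self, action):
        # toy decision criterion: prefer actions the model knows most about
        return self.beliefs.get(action, 0)

def observe(raw_data):
    """Stand-in for the recognition step (machine learning in practice)."""
    return [d.lower() for d in raw_data]

def ooda_step(model, raw_data, possible_actions):
    percepts = observe(raw_data)                      # Observe
    for p in percepts:
        model.update(p)                               # Orient
    action = max(possible_actions, key=model.score)   # Decide
    return action                                     # Act (handed to the body)

# usage: model = WorldModel(); ooda_step(model, ["Ball", "Goal"], ["ball", "goal"])
```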

LC: It seems that we’re witnessing such a special moment, when we are clearly and unstoppably at the very beginning of developing an entirely new world. And it’s incredible that people like you and me have access to learning about this. There is also a lot of pressure to do it right, with a lot of self-awareness and consideration, especially towards ethics. The only association that crosses my mind that would entail the same amount of responsibility is raising a child. 

IC: People are worried that if we give AIs the wrong desires or goals, like a basic goal to be happy or to maximize money, then it will optimize for those basic goals at the expense of humanity. Elon Musk recently suggested a very beautiful goal for AIs. He said that its fitness function should be to maximize freedom of action for humans, meaning AIs should strive to create as many options as possible for humans in every situation. I thought this was very beautiful. It reminds me of a book I love called Finite and Infinite Games by James Carse. Carse says finite games are games you play to win. But infinite games are games you play to keep playing, and when it looks like the game is going to come to a conclusion, your duty is to change the rules to keep it going. It strikes me that most ideas of what an AI’s goal should be are very finite: maximize profit, drive to X location, find the best strategy to beat country X at whatever. But Musk’s idea is infinite-game-flavored. “Maximize freedom of action” is saying, help keep the game going for humans, recognize when options are becoming limited, and creatively make more options when things are closing in. I think as actual artificial general intelligence gets closer to realization, AI discourse will get over its finite game fears and converge more on infinite game problems. It’s the ultimate horizon line of the human condition, to unlock new infinite games. I believe a world with AI will accelerate and expand exploration around this limit of life and agency. 

LC: Another question, or rather an observation, is connected to this idea of the modular brain. You write: 

“The architecture of the agent’s brain involved the decision to not attempt one unifying model of the mind, but instead to compose multiple models of the mind together, echoing the hypothesis that our own brain is itself composed of many modular sub brains, some ancient and some new, accrued together over the history of our cognitive evolution.” 

You take this idea from neuroscience, that the brain’s architecture is revealed as modular, and that ultimately leads to there being no one “I,” but multiple, interchangeable “I’s” that are formed in our subconscious and compete at any given moment to arise in our consciousness. This, together with the idea of embracing constant change (including the evolution of our minds), implies an Eastern philosophy influence. Have you arrived at these ideas through your research, or is this something that you purposefully, and maybe quite subtly, draw into your narratives? Or is it just my reading of your work? 

IC: I’ve arrived at this through trying to make AI. Personally, I’ve always had respect for the unknown. I’ve also had a lot of psychedelic experiences since college, and they have reinforced my respect for the unknown and shown me how easy it is to unravel all beliefs and mental models about a unified “I” or ego or self. But they’ve also given me an appreciation for how essential it is to accommodate a coherent self in order to just get through the day and make some plans. 

All the research I’ve read points to the basic idea that the mind is a kind of inner theater to make life manageable. Because at the end of the day, we are biologically bound to the energy requirements of the body and the brain. That energy budget needs to be used effectively, so it makes sense that the mind would evolve enough theater tricks to make it all stick together, like good office management or managing a band. That certain religions recognized all this in much more informationally limited times is just totally impressive. 

I tried to approach this subject in my new book Emissaries Guide To Worlding. The premise of the book is that to make a complex project under the limitations of being a human, you have to hack together some mini people inside yourself to manage that complexity. On one hand, the book is an account of the making of “Emissaries,” but it’s also a way to introduce the idea of multiple selves who steer the ship during the process of developing a complex project, like making a world. I developed four archetypes: the Director, the Cartoonist, the Hacker, and the Emissary to the World. Each one is like a persona or mask that the artist can wear. 

Emissaries Guide to Worlding, Koenig Books, 2018

The Director is the top-level planner who really wants to set up the project with the right container, context, and team, and is overly anxious about the project being completed and meaningful. The Cartoonist is the persuasive seducer, who knows that humans default to their limbic desires, and plays with our biases toward faces, emotions, and similarity that influence our attraction or repulsion to a world. The Hacker is the magician who tries to produce new leverage where none existed before, whether a special material, or a new technology, or a behavioral hack, that opens up new expressive grounds for the project and gives the eventual viewer the feeling that there is magic sauce that demands further exploration. The Emissary to the World is the part of the artist who nurtures the world that the Director, Cartoonist, and Hacker have built. The Emissary is the one who protects the world from fully resolving itself into an easy win, a case-closed project, and finds a way for the project to stay alive without its original author. 

LC: That makes perfect sense. I’m also looking forward to seeing your book. Was it launched at the same time as your show at the Serpentine? 

IC: Yes, version 1.0 just came out. I’d like to further develop the thinking. It describes the four archetypes and how they work together to make a world; in this case, to make Emissaries. The book is really part of a larger ongoing stream that will accrue new updates along its route. With BOB and Emissaries, I’ve come to see projects as versions in a stream, like the way software is developed, with updates along the way. 

LC: So BOB and Emissaries can be updated? It’s going to be interesting to see how the institutions deal with this along the way, preservation-wise. 

IC: Yes, BOB and Emissaries can be updated. Often, over the course of an exhibition, the institution gets updates. 

LC: What are you looking forward to in relation to technological development? What excites you about the future? 

IC: I look forward to the day when everyone has the option to do work they find meaningful and interesting. I look forward to the day when AI feels like cohabiting with a spirit world. 

Interview by

Adriana Blidaru

Curator, writer, and founding editor of LC.