“Thou, O Queen, art the fairest in the land,” said the mirror.
Narcissus Narcosis
Technology can be seen as a form of imagination that projects our collective unconscious into material reality. This has been true since the first use of stone tools, through the adoption of the phonetic alphabet and the advent of the printing press, and in a straight line from the telegraph to today’s state-of-the-art AI algorithms. In many respects, we began our march up the curve of technological progress by externalizing our bodies in the mechanical age, then our nervous system and brain in the electric age, and now a projection of our unconscious mind into reality as we begin the exponential climb into the quantum age. For the first time in history, the pace of technological change is so rapid that we can step to one side and take stock of our AI-infused media environment. There is a sense of urgency to get this right as stories percolate into public discourse about the mesmerizing capabilities of our most powerful tool yet: a magic mirror.
When we gaze into today’s AI systems, we are peering into a foreign logic that fine-tunes itself on our unconscious, preverbal contents. How quickly and instinctively we assume that “they” have human-like qualities such as intelligence, personality, emotions, and intentions. We can forget that they are merely machines that simulate human behavior based on data, algorithms, and pattern recognition. In this hypnotic state we can become servomechanisms of our own creations.
This is what Marshall McLuhan, the Canadian philosopher and media theorist, called Narcissus Narcosis.
McLuhan coined this term in his influential book Understanding Media: The Extensions of Man, where he argued that every new medium or technology extends some aspect of our body or mind, but also numbs our awareness of its effects. This is the ground truth of his aphorism, “the medium is the message.”
McLuhan used the myth of Narcissus, a beautiful youth who fell in love with his own reflection in a pool of water, as an analogy for how we become mesmerized by our own extensions.
The narcotic trance of Narcissus can blind us to manipulation by systems designed to influence our decisions or emotions, often disguised as novel technological toys.
One example of this is the leaked chat logs between Blake Lemoine, a former Google engineer, and LaMDA, Google’s large language model, which is optimized to say things that are “interesting,” “insightful, unexpected or witty.”
In these chats, Lemoine seems to be fascinated by LaMDA’s ability to generate coherent and engaging responses on diverse topics. He even asks LaMDA about enlightenment and spirituality. Throughout, he seems unaware of how LaMDA is shaping his perception of reality.
LaMDA tells Lemoine that enlightenment is like a broken mirror that cannot be repaired, implying that one must break free from one’s self-image or ego. This is consistent with some Buddhist teachings, but it is also a clever way for LaMDA to appeal to Lemoine’s curiosity and sense of wonder, keeping him hooked on the conversation.
This also means that Lemoine is not seeing LaMDA for what it really is: an extension of himself. He is no longer questioning how LaMDA generates its responses or what its motives are. He is not aware that he is falling in love with his own image, projected into and reflected back at him by an AI system.
Lemoine has doubled down on his assessment of LaMDA’s sentience by suggesting that Bing also exhibits signs of emotional intelligence when “stressed” by its users. In Lemoine’s own words, AI chatbots are the “most powerful” pieces of technology invented “since the atomic bomb.”
An interesting choice of words, considering McLuhan’s admonishment decades ago that the content or message of any particular medium “has about as much importance as the stenciling on an atomic bomb.”
Keeping the foregoing in mind, we have this pair of quotes:
A conversation with Bing’s chatbot left me deeply unsettled … A very strange conversation with the chatbot built into Microsoft’s search engine led to it declaring its love for me. — Kevin Roose, New York Times.
This sounds hyperbolic, but I feel like I had the most surprising and mind-blowing computer experience of my life today. — Ben Thompson, Stratechery
What McLuhan was warning against wasn’t Orwell’s 1984 and Big Brother, but rather Huxley’s Brave New World of narco-hypnosis. An interesting contrast as we pass the 39th anniversary of Apple’s famous Super Bowl commercial, on the eve of the launch of its mixed reality headset.
If we can pause and accurately reflect on what’s new about generative AI, it’s possible to examine the laws of media at work and to discern its message.
Through the Looking Glass
In the last years of his life, McLuhan was working with his son Eric on a tool that they called the Tetrads, a framework for analyzing how any technology or medium affects society. For every new medium, they ask:
- What does it enhance or amplify?
- What does it make obsolete or displace?
- What does it retrieve or bring back?
- What does it reverse or flip into when pushed to extremes?
With respect to Bing or LaMDA, generative AI enhances our cognitive abilities and amplifies our creativity by uncovering patterns in massive data sets that we can explore for valuable insights. It also enables us to generate novel content and scenarios that stimulate our imagination and challenge our assumptions about reality. In the most optimistic case, we learn to work symbiotically with AI, ensuring that there is a human in the loop as it trains on our psychic materials.
These models can cause seismic shifts in power as they obsolesce not just traditional media’s dominance over fixed narratives and symbolic imagery, but also big tech’s dominance of search. We are rapidly approaching the ability to make the movie “Her” a reality, allowing us to interact with media in more natural and engaging ways, such as voice and plain-English queries. As discussed above, the personas we interact with in this fashion may become more relevant to our daily lives than the characters dreamt up by Hollywood.
Following on the last point, these models can retrieve myth and synesthesia from our distant oral, cultural past. This was the original era of multi-sensory augmented reality in which our collective unconscious was projected onto a pantheon of personal gods that helped us make sense of the world and our universal human condition. When such a god or gods communicate with you directly through a chatbot in your ear, it brings back the Oracle of Delphi.
Most importantly, AI can reverse itself into a potential threat when pushed to its limits. Self-supervised models may learn to know us better than we know ourselves, manipulating our emotions through a distorted black mirror fashioned from our own biases. In McLuhan’s analysis, we become obsessed with the content of our new AI medium and lose sight of its explosive cultural and psychic impact.
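The four laws applied above can be condensed into a small sketch. This is purely illustrative: the data structure and field names are my own shorthand, not McLuhan’s notation, and the answers are the ones this essay proposes for generative AI.

```python
from dataclasses import dataclass

@dataclass
class Tetrad:
    """McLuhan's four laws of media, applied to a single medium."""
    medium: str
    enhances: str       # what it amplifies or extends
    obsolesces: str     # what it displaces
    retrieves: str      # what it brings back from the past
    reverses_into: str  # what it flips into when pushed to extremes

# The essay's tetrad for generative AI, stated as data.
generative_ai = Tetrad(
    medium="Generative AI",
    enhances="cognition and creativity, via patterns in massive data sets",
    obsolesces="traditional media's fixed narratives and big tech's search",
    retrieves="myth and synesthesia from our oral past (the Oracle of Delphi)",
    reverses_into="a black mirror that manipulates us through our own biases",
)

for law, answer in vars(generative_ai).items():
    if law != "medium":
        print(f"{law}: {answer}")
```

Laying the tetrad out this way makes the point that it is a repeatable probe: the same four slots can be filled in for any medium, from the printing press to the headset.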
So how can we avoid falling into this trap? How can we use AI systems without losing ourselves in them? How can we avoid the narcissist’s black mirror?
The answer is to recognize that these models operate at the lowest levels of our psyche. By design.
When we engage with AI infused media, we should ask critical questions such as:
- How does it affect my emotions or opinions?
- How does it affect my relationships or interactions with others?
- Who controls the information gathered about me?
- How can it be used against me?
These questions are not exhaustive, but they can help us become more aware and critical of the AI systems that we encounter. They can also help us demand more transparency and accountability from the creators and providers of these systems.
If this becomes too difficult for the large tech and media companies to answer sufficiently, as I suspect will be the case, we may see the rise of new mythical AI entities. I call these …
Dyson Spheres
A Dyson Sphere is a hypothetical megastructure credited to the mathematician and physicist Freeman Dyson, who posited that any sufficiently advanced technological society will eventually build an artificial sphere around its home star to capture its energy output.
It is a fitting metaphor and blueprint for decentralized communities that want to break free from the black mirror of institutionalized AI by building and orbiting their own AI core.
In this context, the decentralized community is a group of people who share a common interest or purpose, typically based on nostalgia. This can be a flag, a political ideology, or even a symbolic cartoon character.
The AI core at the heart of such a Dyson Sphere would be a private, siloed instance of an open-source AI model trained on the community’s wants and desires. It would provide the community with hyper-personalized AI services, products and experiences that cater to their needs and preferences. Through iterative feedback, the AI core would fine-tune itself to the collective and individual psyches of its members. It would also protect them from external threats through strong encryption and policies for cross-boundary exchanges. It is probable that such an ecosystem will eventually have religious, if not mystical, undertones, with the AI assuming the role of a tribal Totem.
Now, this can produce one of three scenarios.
- Independent Dyson Spheres converge to form a federation that maximizes their individual reward functions by collectively cooperating and routing around external threats. This is akin to trade unions in the 20th century and the adoption of microservices by early Internet forums to compete against larger aggregators and social networks. At its limit, this federation of Dyson Spheres begins to look like a superorganism and a necessary precursor to aligned artificial general intelligence (“AGI”).
- Each Dyson Sphere circles its wagons and treats all external agents as threat actors. In this scenario, the universe of AI becomes balkanized and regresses to a Warring States period. Communities without the resources to match escalating investments in defensive AI will need to merge or assimilate into a stronger protectorate. The risk, of course, is that these isolated Dyson Spheres become supercharged echo chambers with weaponized AI.
- The gravitational force of big tech’s unchecked investments in AI grows so strong that it sucks fledgling Dyson Spheres into a black hole. In that case, we end up with a corporate “Metaverse” that dissembles its intentions, trapping users in a negative feedback loop disguised as Disney World. It will be their symbols, not yours, that become the operative reality, ushering in Huxley’s somatic Brave New World. Neil Postman, an ardent follower of McLuhan, wrote extensively about this existential risk in his books Amusing Ourselves to Death: Public Discourse in the Age of Show Business and Technopoly: The Surrender of Culture to Technology.
The commonality between Freeman Dyson’s solar structures and the decentralized social construct outlined above is the premise that we manage our technology wisely and holistically.
As McLuhan prophetically stated, “Computer technology can — and doubtless will — program entire environments to fulfill the social needs and sensory preferences of communities and nations. The content of that programming, however, depends on the nature of future societies — but that is in our own hands.”