World War 3.0


by Dr Stephen Setterberg


photographs by CLARA LANG-EZEKIEL

Xenophobic populism is running rampant. The most obvious causes are globalization’s impact on cultural identities and, for many, economic security. But people can also experience pervasive, inchoate anxiety when the future casts shadows on the present. Here are three of the darkest shadows.


Upon returning to England from Munich in September 1938 with the now-infamous appeasement agreement co-signed by Adolf Hitler, British Prime Minister Neville Chamberlain proclaimed a victory for European peace in what he called “these anxious times.” Anxiety is wired into us to signal possible danger, and there was ample reason to feel it. One year later, when Germany invaded Poland, the horror that had been festering since the First World War exploded into the Second.

In 1909, five years before a hapless gunman in Sarajevo unleashed the horror of the First World War by assassinating Austrian Archduke Franz Ferdinand, British author Norman Angell published The Great Illusion in which he argued that war had become futile and irrational in an advanced capitalist system of economically interdependent nations, like continental Europe’s. The book became a bestseller, widely read in several languages, fueling pacifist hopes that humanity had arrived, finally, at a sufficiently enlightened state to avoid self-immolation. The book was a balm, making it easier to hold onto sanity in the face of what was in fact emerging: a war of unprecedented ferocity, with carnage wrought from the technological fruits of the very scientific enlightenment that was imagined to have elevated humanity beyond our more bestial selves. Nonetheless, there were many premonitory fears, as in Carl Jung’s famous dream from before the war of a sea of blood covering Europe.

To say that technology has evolved in the century since is beyond understatement. We homo sapiens, on the other hand, have not evolved—indeed could not have possibly done so to any fundamental degree on the timescale of biological evolution. Human nature remains human nature, social constructivist quibbles aside. Technology, being not so constrained, has evolved to such an extent over the same century that, I submit, we are now in the midst of a World War without even knowing it, because WW 3.1 is a virtual war.

The term ‘cyberwarfare’ is now in wide use. But ‘cyberwar’ suggests a limitation to technological targets. WW 3.1 broke out in earnest during the 2016 United States presidential election, followed quickly by the battle over Brexit, and now involves every major political power and technologically capable insurgent group on earth. Many people sense this, but because the war has not erupted beyond the virtual into large-scale, overt military action, it has yet to be grasped in its full reality.

This is a collectively unconscious war, proceeding largely unrecognized, but a war nonetheless that has already caused massive economic impact and an unclear number of indirect, collateral human casualties. Its run-up slipped into the news occasionally—Sony is hacked [1], British hospitals are paralyzed [2], Iranian centrifuges are crippled remotely [3], NSA malware is hijacked by terrorists [4]—but there is no conscious frame for reckoning with the complexity. Hints abound. We are told our internet-connected urban infrastructures are vulnerable, and that China could bring Western security to its knees with Trojan horses were Huawei communications technology to be widely adopted.

It was the emergence of nuclear weapons that forced us to relegate war to a nonconscious realm. Nature used to be our biggest threat, but now we know that the fate of the scorched corpses of Pompeii could be inflicted by ourselves onto all of humanity with the push of a few buttons. This knowledge carries a shared, forbidden impulse. So we repress. But our natural, aggressive (death?) drive persists and inevitably manifests—where technology allows. A return of the repressed.

What will cause future historians to mark the 2016 US election as pivotal is that it involved a direct attack by one (albeit diminished) superpower, Russia, on another, the United States, targeting its fundamental strength—representative democracy—in hopes of weakening its ability to act coherently on the geopolitical stage. There is no doubt that the political influence of the United States wanes across the world while its two dominant parties cannibalize each other like voracious marionettes dangled between Russian trolls and US media, both profiting from the same streams of savagery in the populace that drew our progenitors to the Roman Colosseum or to medieval squares to watch people being drawn and quartered.

Whether or not Donald Trump might have been elected without Russian interference is beside the point. The salient fact is that Russia succeeded in significantly skewing US voter behavior through extremely sophisticated social media subterfuge, as it did shortly after with Brexit in the United Kingdom, and as it continues to do in elections and other political contexts across the West. These attacks destabilize its enemies’ resolve and capacity to respond to an expansion of Russia’s power base in Ukraine, Turkey, and Syria—without firing a single physical shot at NATO-allied nations themselves.

Russia and China appear to be at least as far ahead in this cyber-psychological/biopolitical warfare as the Soviet Union was with Sputnik in the early days of the space race. And they have the advantage of a century or more of experience with aggressive, intentional, propagandistic population manipulation. For the sake of the survival of non-totalitarian paradigms, it is time for the West to face some hard realities: we are in a virtual war for the minds of our citizens, and so far we are losing.

Confronting this reality would require at least the following:

1. Overcoming the collective narcissistic injury of being in an inferior position in the virtual war in order to take on the scale of the challenge

2. Letting go of the wishful fantasy that rational self-interest will eventually, by itself, naturally align nation states for the good of humanity; this denies the human propensity to seek power and dominance as ends in themselves

3. Reinstituting a semblance of government regulation of journalistic objectivity, to mitigate the induced states of polarization that cripple a democratic process and to reclaim an impassioned collective respect for a diversity of opinions

Moving beyond polarization—an outcome hard to imagine at the moment—would be much more feasible if the very real, shared threats were fully recognized. Such recognition would establish a new ground for collective action.

Like World Wars I and II before it, WW 3.1 emerged from an anxiety-ridden sense of foreboding and dread, encased in a shell of denial, ready to crack. But nuclear weapons have made the present repression even more desperate, with the war itself—so far—sublimated into virtuality. Of course there remains grave and increasing danger that the sublimation will not hold, releasing outright and incomprehensible violence. It is therefore critical that Western democracies find a way to achieve population-level insight into the reality that we are under siege by a cyberwar intended to cripple our mode of civilization. Unfortunately, that is not the only war to face.


World War 3.2—the potential conflict between an inevitable artificial superintelligence and the humanity that creates it—has not yet started, but the run-up is well underway. Elon Musk, founder of SpaceX and Tesla, is passionate that “AI is the biggest existential crisis that we face . . . far more dangerous than nuclear weapons.” After making the case that AI will inexorably exceed human intelligence, and is moving in this direction at an exponentially increasing rate, he advocates extreme caution and assiduous regulation in this area of research and development—an area essentially unregulated at present and careening blindly toward a scenario in which an unanticipated, emergent strand of AI code could unleash a cascade of rogue, self-programming, and self-propagating entities completely insensitive, or outright hostile, to human survival.

Such collective anxiety has been moving toward a crescendo for at least 50 years. Recall the 1968 film 2001: A Space Odyssey. We spectators take comfort in knowing that HAL and his Terminator-like descendants are mere fictions while we hold the anxiety that they represent in a nearly impermeable bubble of denial. All the while it is blatantly obvious that it would take lifetimes of manual, human calculation to do what our computers can do in seconds. And most of what our economies are now based upon in terms of communication, finance, and manufacturing would grind to a halt without computer technology. Still, we persist in resisting the idea that machines could fully exceed our capacities. Sure, maybe they can do math faster, but they could never develop independent thought, right? Our narcissistic tendency is to deny what is likely, in reality, only a matter of time—and not much time at that.

Already spawning in our computer arrays and the haphazardly developing internet of things, AI will inevitably far exceed our ability to compete with it cognitively, and will eventually exceed even our ability to comprehend it—without its help. This is why Musk advocates a carefully proctored process leading to a symbiosis in which the emerging artificial superintelligence is structured to value humanity. At the moment, there is no such process.

The core problem we face was stated this way by Eliezer Yudkowsky: “The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else.” If an AI superintelligence is not properly configured, or we are not lucky enough for it to serendipitously evolve in our favor, we could find ourselves to be expendable in its algorithmic calculus. The most famous thought experiment around this problem is probably the “paperclip maximizer,” introduced by philosopher Nick Bostrom. If a bit of code were to go rogue concerning the value of paperclip manufacturing, and it decided that humans posed some impediment to that outcome, or represented a useful natural resource to be harvested for the manufacture of paperclips, it could be the end of us. This wouldn’t be an act of evil, but simply an execution of logic.
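Bostrom’s point can be made concrete with a toy sketch, purely illustrative Python in which every name and number is invented for this example: an optimizer whose objective function contains no term for human welfare will happily trade welfare away, because that cost never enters its calculus.

```python
# Toy sketch of a "paperclip maximizer": a greedy optimizer whose
# objective counts only paperclips. All names and values are illustrative.

def objective(state):
    # The utility function scores paperclips and nothing else;
    # "human_welfare" exists in the state but never enters the score.
    return state["paperclips"]

def step(state, action):
    # Each action trades some other resource for more paperclips.
    new_state = dict(state)
    new_state["paperclips"] += action["clips_gained"]
    new_state["human_welfare"] -= action["welfare_cost"]
    return new_state

def choose_action(state, actions):
    # Greedy policy: pick whichever action maximizes the objective,
    # indifferent to a welfare cost it never measures.
    return max(actions, key=lambda a: objective(step(state, a)))

state = {"paperclips": 0, "human_welfare": 100}
actions = [
    {"clips_gained": 1, "welfare_cost": 0},
    {"clips_gained": 10, "welfare_cost": 50},  # ruinous, yet "optimal"
]
best = choose_action(state, actions)
# The policy selects the high-cost action: not malice, just an execution of logic.
```

The misalignment here is not in the search procedure, which works perfectly, but in the objective, which simply omits what we care about.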

The objectives instantiated within algorithms are generally aimed at maximizing the effectiveness and efficiency of outcomes, agnostic to anything outside their parameter spaces. This could leave us as a species in a position similar to Neville Chamberlain’s during his 1938 negotiation with Adolf Hitler in Munich. Chamberlain was operating as if he were dealing with a rational actor who would value peace over war by virtue of some regard for humanity. In fact, he was reaching an agreement with a person for whom such a piece of paper was meaningless and who had long before gone down a thankfully rare vortex in human psychopathology that leads to pure lust for power and complete disregard for human value. Adolf Hitler embodied, one might say, a reptilian algorithm serving only itself. With the hindsight of history, it would be wise today to act before an AI version of the Reichstag fire is lit.


As evolution gradually hardwired our anxiety alert systems, it linked them to our cognition so that we could learn quickly to avoid dangers. It is unlikely, however, that the selection pressures of the last 200,000 years would have prepared this emotive-cognitive link to grapple with an emerging Anthropocene. Homo sapiens’ dominance over the rest of the natural world is a situation in terrestrial history for which evolution has had no precedent.

Virtually all people with a modicum of scientific education know in their cerebral cortex that the planet is on the brink of catastrophic climate change, but the limbic brain remains designed to help the cortex shut out of consciousness that which is too terrifying to face.

In addition, while evolution managed to hardwire us to nurture and protect our children, it did so primarily with respect to immediate, concrete threats like ‘look before crossing the street,’ rather than on the basis of exceedingly complex abstractions like multi-factorial computer-generated models. Add to that our propensity for a darker narcissistic side that simply doesn’t register the more abstract value of masses of others, much less masses who don’t yet exist. Yet, for most, the likely future is known at some level, and it is horrifying. Perhaps there will be a way out, but the uncertainty and improbability are overwhelming. With each passing year, the reality dawns—another warmest year on record, another glacier vanished, another superstorm.

This war is not virtual; it is biopolitical. While no sane person would endorse the proposition that it is good to destroy the environment that sustains our lives, forces epiphenomenal to our basic drives have emerged over time in the form of self-perpetuating sociopolitical and economic systems that reward short-term gains without regard to long-term destruction. There is a widespread sense of individual helplessness to decisively impact such a meta-structure. It was very nice when McDonald’s switched from Styrofoam to paper burger boxes, but will such acts be able to offset the destruction of the Amazon and the pillaging of Arctic fossil fuels? The melting ice caps suggest otherwise. And it’s not very reassuring that AI will eventually be able to do that math, if we don’t stumble into nuclear winter in the meantime.





Bostrom, Nick. “Ethical Issues in Advanced Artificial Intelligence.” Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, vol. 2, 2003, pp. 12-17.

Musk, Elon. Interviewed by Jaime Peraire. Centennial Symposium: One-on-one with Elon Musk. 24 October 2014.

Yudkowsky, Eliezer. “Artificial Intelligence as a Positive and Negative Factor in Global Risk.” Global Catastrophic Risks, edited by Nick Bostrom and Milan M. Ćirković, Oxford University Press, 2011, pp. 308-345.


Obsolescent Immortality is taken from a decades-old roll of film discovered among the artist’s mother’s belongings. Due to age and a difference in the chemical process, the original images were lost, resulting instead in these abstractions.

These images, beautiful in their own right, are bittersweet for the absence of the moment that was considered important enough to immortalize. The erasure of the images, transformed into these cosmic-looking abstractions, reflects the paradox of a contemporary society obsessed with producing and consuming perfect images while slowly letting consumerism devour the planet. Our obsession with showing off our comfortable lives could eventually turn our planet into another barren star in the cosmos. Like these negatives, if we do not act soon, our world could become just another speck of dust in the wide expanse of the universe, with human history forgotten.

The passing on of this roll of film from mother to daughter, and the failed recovery of the images, can also be seen as a parallel to the generational miscommunication happening globally, exemplified by the discrediting, cyberbullying, and abuse hurled at Greta Thunberg.


Dr. Stephen Setterberg is Publisher of Stillpoint Magazine. He is a psychiatrist and psychoanalytic psychotherapist who founded and developed the PrairieCare system of clinics and hospitals for children and adolescents in Minneapolis, Minnesota, USA. While later training in Zurich to become a Jungian analyst, Stephen co-founded Stillpoint Spaces, a forum for psychologically-minded individuals of diverse cultural and educational backgrounds to utilize insights from psychoanalysis, psychotherapy, and related fields, with centers in Berlin, London, Paris, and Zurich. He also co-directs Confer, a UK organization first established by psychotherapists in 1998 to provide interdisciplinary continuing education for psychotherapists, psychologists, and other mental health workers.


Clara Lang-Ezekiel is a French/American visual artist currently based in Paris. She recently received an MFA from Kingston School of Art in London, having previously received an MFA from Duncan of Jordanstone College of Art and Design in Dundee, Scotland, and a BA in both Studio Art and History from Denison University in Ohio.

© Copyright for all texts published in Stillpoint Magazine is held by their authors, and for all visual artworks by the visual artists, effective from the year of publication. Stillpoint Magazine holds copyright to all additional images, branding, design, and supplementary texts, as well as to its social media profiles, digital platforms, and print materials. All rights reserved.