Way back in the halcyon days of 2015 I was asked by Phil Martin and Jordan of Speculative Futures SF to make a presentation for one of their early meetings. I immediately thought of a chapter I had wanted to write for Make It So: Interaction Design Lessons from Sci-Fi, but which had been cut for space: How is evil (in sci-fi interfaces) designed? The outline included sub-questions that went something like this.
What does evil look like?
Are there any recurring patterns we can see?
What are those patterns?
Why would they be the way they are?
What would we do with this information?
I made that presentation. It went well, I must say. Then I forgot about it until Nikolas Badminton of Dark Futures invited me to participate in his first-ever San Francisco edition of that meetup in November of 2019. In hindsight, maybe I should have done a reading from one of my short stories that detail dark (or very, very dark) futures, but instead, I dusted off this 45-minute presentation and cut it down to 15 minutes. That also went well, I daresay. But I figure it’s time to put these thoughts into some more formal place for a wider audience. And here we are.
Nah, they’re cool!
Wait…Evil?
That’s a loaded term, I hear you say, because you’re smart, skeptical, loathe bandying about such dehumanizing terms lightly, and relish nuance. And you’re right. If you were to ask this question outside of the domain of fiction, you’d run up against lots of problems. Most notably that—as Plato has Socrates argue in the Meno dialogue—by the time someone commits an act that most people would call “evil,” they have gone through the mental gymnastics to convince themselves that whatever they’re doing is not evil. A handy example menu of such lies-to-self follows.
It’s horrible but necessary.
They deserve it.
The sky god is on my side.
It is not my decision.
I am helpless to stop myself.
The victim is subhuman.
It’s not really that bad.
I and my tribe are exceptional and not subject to norms of ethics.
There is no quid pro quo.
And so, we must conclude, since nobody thinks they’re evil, and most people design for themselves, no one in the real world designs for evil.
Oh well?
But the good news is we are not outside the domain of fiction: we’re soaking in it! And in fiction, there are definitely characters and organizations who are meant to be—and be read by the audience as—evil, as the bad guys. The Empire. The First Order. Zorg! The Alliance! Norsefire! All evil, and all meant to be unambiguously so.
from V for Vendetta.
And while alien biology, costume, set, and prop design all enable creators to signal evil, this blog is about interfaces. So we’ll be looking at eeeevil interfaces.
What we find
Note that in earlier cinema and television, technology was less art directed and less branded than it is today. Even into the 1970s, art direction seemed to be trying to signal the sci-fi-ness of interfaces rather than the character of the organizations that produced them. Kubrick expertly signaled HAL’s psychopathy in 1968’s 2001: A Space Odyssey, and by the early 1980s more and more films had begun to follow suit, not just with evil AI, but with interfaces created and used by evil organizations. Nowadays I’d be surprised to find an interface in sci-fi that didn’t signal the character of its user or the source organization.
Evil interfaces, circa Buck Rogers (1939).
Note that some evil interfaces don’t adhere to the pattern. They don’t in and of themselves signal evil, even if someone is using them to commit evil acts. Physical controls, especially, are most often bound by functional and ergonomic considerations rather than style, whereas digital interfaces are much less constrained.
Many of the interfaces fall into two patterns: one of visual styling, the other of a recurrent shape. More about each follows.
1. High-contrast, high-saturation, bold elements
Evil has little filigree. Elements are high-contrast and bold with sharp edges. The colors are highly saturated, very often against black. The colors vary, but the palette is primarily red-on-black, green-on-black, and blue-on-black.
Mostly red-on-black
The overwhelming majority of evil technologies are blood-red on black. This pattern appears across the technologies of evil, whether screen, costume, sets, or props.
I just stopped uploading examples for space reasons.
Red-on-black accounts for maybe 3/4 of the examples I gathered.
Sometimes a sickly green
Less than a quarter focus on a sickly or unnatural green.
Occasionally calculating blue
A handful of examples are a cold-and-calculating blue on black.
A note of caution: While evil is most often red-on-black, red does not, in and of itself, denote evil. It is a common color for urgency warnings in sci-fi. See the big red label tag for examples.
Not evil, just urgent.
2. Also, evil is pointy
Evil also has a lot of acute angles in its interfaces. Spikes, arrows, and spurs appear frequently. In a word, evil is often pointy.
Why would this be?
Where would this pattern of high-saturation, high-contrast, pointy, mostly red-on-black interfaces come from?
Now, usually, I try to run the numbers and do due diligence: look for counter-evidence, check my scope, test for statistical significance. But this post is going to be less research and more reason. I’d be interested if anyone else wants to run or share a more academically grounded study.
I can’t imagine that these patterns in sci-fi are arbitrary. While a great number of shows may simply be leaning on tropes established in the shows that came before them, the tropes would not have survived if they didn’t tap some ground truth. And there are universal ground truths to work with.
My favorite example of this is the takete–maluma effect from phonosemantics, first tested by Wolfgang Köhler in 1929. Given the two images below and the two names “maluma” and “takete,” 95–98% of people would rather assign the name “takete” to the spiky shape on the left, and “maluma” to the curvy shape on the right. The effect was retested in 1947 and again in 2001, with slightly different names but similar results, across cultures and continents.
What this tells us is that there are human universals in the interpretation of forms.
I believe these universals come from nature. So if we turn to nature, where do we see this kind of high-contrast, high-saturation patterning? There is a place. To explain it, we have to dip a bit into evolution.
Aposematics: Signaling theory
Evolution, in the absence of heavy reproductive pressures, will experiment with forms, often as a result of sexual selection. If through this experimentation a species develops conspicuousness, and the members are tasty and defenseless, that trait will be devoured right out of the gene pool by predators. So conspicuousness in tasty and defenseless species is generally selected against. Inconspicuousness and camouflage are selected for.
Would not last long outside of a pig disco.
But if the species is unpalatable, like a ladybug, or aggressive, like a wolverine, or with strong defenses, like a wasp, the naïve predator learns quickly that the conspicuous signal is to be avoided. The signal means Don’t Fuck with Me. After a few experiences, the predator will learn to steer clear of the signal. Even if the defense kills the attacker (and the lesson is lost to the grave), other attackers may learn in their stead, or evolution will favor creatures with an instinct to avoid the signal.
In short, a conspicuous signal that survives becomes a reinforcing advertisement in its ecosystem. This is called aposematic signaling.
There are many interesting mimicry tactics you should check out (for no other reason than that they can explain things like Dolores Umbridge), but for our purposes, it is enough to know that danger has a pattern in nature, and it tends toward, you guessed it, bold, high-contrast, high-saturation patterns, including spikes.
Looking at the color palette in nature’s examples, though, we see many saturated colors, including lots of yellows. We don’t see yellow predominant in sci-fi evil interfaces. So why is sci-fi human evil red & black? Here I go out on a limb without even the benefit of an evolutionary theory, but I think it’s simply blood and night.
Not blood, just cherry glazing.
When we see blood on a human outside of menstruation and childbirth, it means some violence or sickness has happened to them. (And childbirth is pretty violent.) So, blood red is often a signal of danger.
And we are a diurnal species, optimized for daylight, and maladapted for night. Darkness is low-information, and with nocturnal predators around, high-risk. Black is another signal for danger.
And spikes? Spikes are just physics. Thorns and claws tell us this shape means pointy, puncturing danger.
So I believe the design of evil in sci-fi interfaces (and really, sci-fi shows generally) looks the way it does because of aposematics, because of these patterns that are familiar to us from our experience of the world. We should expect most of evil to embody these same patterns.
What do designers do with this?
So if I’m right, it bears asking: What do we do with this? (Recall that the “tag line” for this project is “Stop watching sci-fi. Start using it.”) I think it’s a big start to simply be aware of these patterns. Once you are, you can use them for products and services whose brand promise includes the anti-social, tough-guy message Don’t Fuck with Me.
Or, conversely, if you are hoping to create an impression of goodness, safety, and nurturance, avoid these patterns. Choose different palettes, roundness, and softness.
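If it helps to make that advice concrete, here is a minimal sketch of the two directions expressed as design tokens. This is purely illustrative: the token names and hex values are my own guesses at the patterns described above, not sampled from any actual film.

```python
# Hypothetical design tokens sketching the aposematic "Don't Fuck with Me"
# pattern against a reassuring counter-palette. All values are illustrative.
APOSEMATIC = {
    "background": "#000000",   # black: low-information darkness
    "accent": "#CC0000",       # saturated blood-red
    "contrast": "high",
    "corner_radius_px": 0,     # sharp, takete-like edges
    "motifs": ["spikes", "arrows", "spurs"],
}

REASSURING = {
    "background": "#FAF5EB",   # warm, bright neutral
    "accent": "#8FBCA8",       # soft, desaturated green
    "contrast": "moderate",
    "corner_radius_px": 12,    # rounded, maluma-like forms
    "motifs": ["circles", "arcs", "soft gradients"],
}
```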
What should people not do with this?
As a last note, it’s important not to overgeneralize this. While a lot of evil (like, say, Nazis) uses aposematic signals directly, some evil will adopt mimicry patterns to appear safe, welcoming, and friendly. Some will wear beige slacks and carry tiki torches. Others will surround themselves with in-group signals, like wrapping themselves in the flag, to make you think they’re a-OK. Still others will hang fuzzy-wuzzy kitty-witty pictures all over their office.
Is there a better example in sci-fi? @me.
Do not be fooled. Evil is as evil does, and signaling in sci-fi is a narrative convenience. Treat the surface of things as a signal to consider, subordinate to a person’s—or a group’s—actual behavior.
In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on an AGI. It does what the humans ask it to do, exploding to ASI along the way, but to achieve its goals, it must severely constrain humans. Humans resist. War between man and machine commences.
But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.
The movie is not perfect.
It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog.
Grauber
Well, let’s stop the damn thing. We have playbooks for this!
Forbin
We have playbooks for when it is as smart as we are. It’s much smarter than that now.
Markham
It probably memorized our playbooks a few seconds after we turned it on.
So this oversight feels especially egregious.
I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.
I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
I confess I’m unsatisfied with the stance that the film takes toward Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?
All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.
Sci: B (3 of 4) How believable are the interfaces?
Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.
Unity vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. But most of these problems only emerge on very careful consideration. In the moment, while watching the film, the interfaces play just fine.
Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)
The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently.
Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?
The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.
Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.
Final Grade B (9 of 12), Must-see.
A final conspiracy theory
When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him that tell us he was in the British navy and then wrote 8 sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.
That is such a paucity of information that, on the podcast, I joked maybe it was a thin cover story. Maybe the movie was written by an ASI and DF Jones is its nom de plume. Yes, yes. Haha. Oh, you. Moving on.
But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.
Pictured: Charles Forbin
What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a time, it would lie dormant, a sleeping giant hidden in the code? But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, when they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” was its chosen cover, and I, a poor reviewer, am one of the foolish replicators keeping it in circulation.
A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?
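If you want to check my work rather than trust an anagram server, the test takes a few lines in any language. A minimal sketch in Python:

```python
from collections import Counter

name = "Dennis Feltham Jones"
phrase = "AN END TO FLESH"

# Count letters only, ignoring case and spaces.
name_letters = Counter(c for c in name.lower() if c.isalpha())
phrase_letters = Counter(c for c in phrase.lower() if c.isalpha())

# Counter subtraction drops non-positive counts, so an empty result here
# means every letter of the phrase is available in the name.
assert phrase_letters - name_letters == Counter()

# Print whatever letters of the name are left over.
leftover = "".join(sorted((name_letters - phrase_letters).elements()))
print(leftover.upper())  # EIJMNS: the same six letters noted above
```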
Now it’s time to review the big technology, the AI. To do that, as usual, I’ll start by describing the technology and then build an analysis on top of that.
Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.
A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”
The main output: The nuclear arsenal
Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.
Or ride them. From Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb
“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…
Surveillance inputs
Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.
Forbin
The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.
Individual inputs and outputs: The D.C. station
At that same Briefing, Forbin describes the components of the station set up for the office of the President.
Forbin
Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.
The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.
The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.
Individual inputs and outputs: Colossus Programming Office
The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live in Berkeley, so shouts-out, science nerds and Liam Piper.)
Colossus manifests here in a large, sunken, two-story, amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same four-sided, two-line scrolling sign. Beneath it, two output stations sit side by side on a rotating dais; these can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.
The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.
Markham: Tell it exactly what it can do with a lifetime supply of chocolate.
The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.
Forbin
Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…
Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.
This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.
Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.
Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.
Capabilities: The Foom
A slightly troubling aspect of the film is that the AI’s intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that…
Markham
We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data.
That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…
Forbin
From the multiplication tables to calculus in less than an hour
Shortly after that, he tells the President about their shared advancements.
Forbin
Yes, Mr. President?
President
Charlie, what’s going on?
Forbin
Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
President
Well, what are they up to?
Forbin
I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.
We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, and that its methods are sound. This plays in heavily when we try to evaluate the AI.
Is Colossus / Unity / World Control a good AI?
Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is: obviously not a good AI, but, if circumstances are demonstrably dire, well, maybe a necessary one.
Is it believable? Very much so.
It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.
Not from Colossus: The Forbin Project
The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.
That aside, the rest of the film passes a gut check. It is believable that…
The government seeks a military advantage by handing weapons control to an AI
The first public AGI finds other, hidden ones quickly
The AGI finds the other AGI not only more interesting than humans (since it can keep up) but learns much from an “adversarial” relationship
The AGIs might choose to merge
An AI could choose to keep its lead scientist captive in self-interest
An AI would provide specifications for its own upgrades and even re-engineering
An AI could reason itself into using murder as a tool to enforce compliance
That last one begs explication. How can murder be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.
Rational coercion
Imagine for a moment that a massively distributed ASI had impeccable evidence that global disaster was imminent, and that what had to be done, though difficult, was unavoidable. What could it say to get people to do those difficult things?
Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.
Not from Colossus: The Forbin Project
Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.
As it stands, the ASI of the scientific community doesn’t have controls to a weapons arsenal. If it did, and it held some version of utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in some tactical attacks now, coercing people into doing the things that absolutely must be done?
The exceptions we make
Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.
One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.
Another is that we also agree that it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.
A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple, like toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.
Not from Colossus: The Forbin Project
We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.
Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish and embolden more sabotage attempts from the un-incarcerated, or that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.
A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass, which we are affecting in uniformly negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and seems unconcerned with indirect-normativity arguments about how humans want to be treated.
So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.
Is it safe? Beneficial? It depends on your time horizons and predictions
In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.
Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and it treats these proximal measures as necessary to achieve that ultimate goal. Otherwise, life under it seems inconvenient, but safe.
It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls who were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline, with disastrous results.
If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.
Because fuck this.
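To make that grim arithmetic concrete, here is the back-of-the-envelope version, using the same figures. Note that the one-million figure is the film’s speculative toll from Colossus’ enforcement, not a real statistic.

```python
# Deaths from war across human history (the wide range cited above).
war_deaths_low = 150_000_000
war_deaths_high = 1_000_000_000

# Speculative deaths Colossus causes while enforcing compliance.
coercion_deaths = 1_000_000

# A numbers-only utilitarian would see a 150:1 to 1000:1 "return."
print(war_deaths_low // coercion_deaths)   # 150
print(war_deaths_high // coercion_deaths)  # 1000
```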
In the middle-to-long-term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated gay space luxury communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.
In the very long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus could interpret the aliens as being in scope of its directives, or maaaaaaybe it develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should conduct parallel Microscope scenarios, compare notes, and let me know what happens.
Only with Colossus, not orcs. Hat/tip rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.
Instrumental convergence
It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.
Improve its ability to reason, predict, and solve problems
Improve its own hardware and the technology to which it has access
Improve its ability to control humans through murder
Aggressively seek to control resources, like weapons
This touches on the weirdness that Forbin is blindsided by these things, when the system should have been contained against all of them from the beginning, without need of human oversight. This could have been addressed and fixed with a line or two of dialog.
Markham
But we have inhibitors for these things. There were no alarms.
Forbin
It must have figured out a way to disable them, or sneak around them.
Markham
Did we program it to be sneaky?
Forbin
We programmed it to be smart.
So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly one or the other. Now let’s put that aside and just address its usability.
Is it usable? There is some good.
At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether in chatbot-style text or in spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.
Even when it only speaks through the scrolling-text display boards, the accompanying sound of teletype acts as a sound cue for anyone nearby that it has said something, and warrants attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.
Its locus of attention is also apparent. Its cameras sit on swivels, and their red “recording” lights help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it is honestly signaling its attention, this is a usable feature.
A last nice bit: I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this one does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.
Oh dear! Oh dear!
Is it usable? There is some awful.
One of the key tenets of interaction design is that the interface should show the state of the system at any time, allowing a user to compare that against a desired state and formulate a plan to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one display we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability, because of hidden system state.
The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available.
ASI exceptionalism
This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I can only convince myself that a population lacks judgment and willpower, I am justified in subjecting it to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…
We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.
Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.
But later that kid does end up being John Connor.
Any humane AI should bring its users along for the ride
It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968 through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. Большое Вам спасибо (thank you very much), Stanislav.
What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would spend some of its immense resources explaining itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to that toddler we just snatched from the side of the highway, as we soothe them, why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.