The Design of Evil

The exports from my keynote at Dark Futures.

Way back in the halcyon days of 2015, I was asked by Phil Martin and Jordan of Speculative Futures SF to give a presentation at one of their early meetings. I immediately thought of a chapter I had wanted to write for Make It So: Interaction Design Lessons from Sci-Fi, but which had been cut for space: How is evil (in sci-fi interfaces) designed? The outline included sub-questions that went something like this.

  • What does evil look like?
  • Are there any recurring patterns we can see?
  • What are those patterns?
  • Why would they be the way they are?
  • What would we do with this information?

I made that presentation. It went well, I must say. Then I forgot about it until Nikolas Badminton of Dark Futures invited me to participate in his first-ever San Francisco edition of that meetup in November of 2019. In hindsight, maybe I should have done a reading from one of my short stories that detail dark (or very, very dark) futures, but instead, I dusted off this 45-minute presentation and cut it down to 15 minutes. That also went well, I daresay. But I figure it’s time to put these thoughts into some more formal place for a wider audience. And here we are.

Nah, they’re cool!

Wait…Evil?

That’s a loaded term, I hear you say, because you’re smart, skeptical, loathe to bandy about such dehumanizing terms lightly, and relish nuance. And you’re right. If you were to ask this question outside the domain of fiction, you’d run up against lots of problems. Most notably that—as Plato has Socrates argue in the Meno—by the time someone commits an act that most people would call “evil,” they have gone through the mental gymnastics to convince themselves that whatever they’re doing is not evil. A handy example menu of such lies-to-self follows.

  • It’s horrible but necessary.
  • They deserve it.
  • The sky god is on my side.
  • It is not my decision.
  • I am helpless to stop myself.
  • The victim is subhuman.
  • It’s not really that bad.
  • I and my tribe are exceptional and not subject to norms of ethics.
  • There is no quid pro quo.

And so, we must conclude, since nobody thinks they’re evil, and most people design for themselves, no one in the real world designs for evil.

Oh well?

But, the good news is we are not outside the domain of fiction, we’re soaking in it! And in fiction, there are definitely characters and organizations who are meant to be—and to be read by the audience as—evil, as the bad guys. The Empire. The First Order. Zorg! The Alliance! Norsefire! All evil, and all meant to be unambiguously so.

from V for Vendetta.

And while alien biology, costume, set, and prop design all enable creators to signal evil, this blog is about interfaces. So we’ll be looking at eeeevil interfaces.

What we find

Note that in earlier cinema and television, technology was less art directed and less branded than it is today. Even into the 1970s, art direction seemed to be trying to signal the sci-fi-ness of interfaces rather than the character of the organizations that produced them. Kubrick expertly signaled HAL’s psychopathy in 1968’s 2001: A Space Odyssey, and by the early 1980s more and more films had begun to follow suit, not just with evil AI, but with interfaces created and used by evil organizations. Nowadays I’d be surprised to find an interface in sci-fi that didn’t signal the character of its user or its source organization.

Evil interfaces, circa Buck Rogers (1939).

Note that some interfaces don’t adhere to the pattern: they don’t in and of themselves signal evil, even when someone is using them to commit evil acts. Physical controls, especially, are most often bound by functional and ergonomic considerations rather than style, where digital interfaces are much less constrained.

Many of the interfaces fall into two patterns: one of visual appearance, the other of recurrent shape. More about each follows.

1. High-contrast, high-saturation, bold elements

Evil has little filigree. Elements are high-contrast and bold with sharp edges. The colors are highly saturated, very often against black. The colors vary, but the palette is primarily red-on-black, green-on-black, and blue-on-black.
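How high is “high-contrast,” concretely? One way to put a number on it (my own illustration, not anything from the films’ art departments) is the WCAG relative-luminance and contrast-ratio formulas. Pure saturated red on black comes out around 5.3:1, comfortably high, while white on black hits the maximum of 21:1.

```python
def relative_luminance(hex_color):
    """WCAG 2.x relative luminance of an sRGB color like '#ff0000'."""
    def linearize(channel):
        c = channel / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    h = hex_color.lstrip("#")
    r, g, b = (linearize(int(h[i:i + 2], 16)) for i in (0, 2, 4))
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    """WCAG contrast ratio between two colors, from 1:1 up to 21:1."""
    lighter, darker = sorted(
        (relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# The classic evil palette: saturated red on black
print(round(contrast_ratio("#ff0000", "#000000"), 2))  # 5.25
```

By the same math, saturated green on black scores even higher (around 15:1), which may be part of why it reads so garishly on screen.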

Mostly red-on-black

The overwhelming majority of evil technologies are blood-red on black. This pattern appears across the technologies of evil, whether screen, costume, sets, or props.

Red-on-black accounts for maybe 3/4 of the examples I gathered.

Sometimes a sickly green

Less than a quarter feature a sickly or unnatural green.

Occasionally calculating blue

A handful of examples are a cold-and-calculating blue on black.

A note of caution: While evil is most often red-on-black, red does not, in and of itself, denote evil. It is a common color for urgency warnings in sci-fi. See the tag for big red label examples.

Not evil, just urgent.

2. Also, evil is pointy

Evil also has a lot of acute angles in its interfaces. Spikes, arrows, and spurs appear frequently. In a word, evil is often pointy.

Why would this be?

Where would this pattern of high-saturation, high-contrast, pointy, mostly red-on-black design come from?

Now, usually, I try to run numbers and do due diligence: looking for counter-evidence, scope checks, and statistical significance. But this post is going to be less research and more reason. I’d be interested if anyone else wants to run or share a more academically grounded study.

I can’t imagine that these patterns in sci-fi are arbitrary. While a great number of shows may be camping on tropes that were established in shows that came before them, the tropes would not have survived if they didn’t tap some ground truth. And there are universal ground truths to work with.

My favorite example of this is the takete-maluma effect from phonosemantics, first tested by Wolfgang Köhler in 1929. Given the two images below and the two names “maluma” and “takete,” 95–98% of people assign the name “takete” to the spiky shape on the left, and “maluma” to the curvy shape on the right. The effect was retested in 1947 and again in 2001, with slightly different names but similar results, across cultures and continents.

What this tells us is that there are human universals in the interpretation of forms.

I believe these universals come from nature. So if we turn to nature, where do we see this kind of high-contrast, high-saturation patterning? There is a place. To explain it, we have to dip a bit into evolution.

Aposematics: Signaling theory

Evolution, in the absence of heavy reproductive pressures, will experiment with forms, often as a result of sexual selection. If through this experimentation a species develops conspicuousness, and the members are tasty and defenseless, that trait will be devoured right out of the gene pool by predators. So conspicuousness in tasty and defenseless species is generally selected against. Inconspicuousness and camouflage are selected for.

Would not last long outside of a pig disco.

But if the species is unpalatable, like a ladybug, or aggressive, like a wolverine, or with strong defenses, like a wasp, the naïve predator learns quickly that the conspicuous signal is to be avoided. The signal means Don’t Fuck with Me. After a few experiences, the predator will learn to steer clear of the signal. Even if the defense kills the attacker (and the lesson lost to the grave), other attackers may learn in their stead, or evolution will favor creatures with an instinct to avoid the signal.

In short, a conspicuous signal that survives becomes a reinforcing advertisement in its ecosystem. This is called aposematic signaling.

There are many interesting mimicry tactics you should check out (if for no other reason than that they can explain things like Dolores Umbridge), but for our purposes, it is enough to know that danger has a pattern in nature, and it tends toward, you guessed it, bold, high-contrast, high-saturation patterns, including spikes.

Looking at the color palette in nature’s examples, though, we see many saturated colors, including lots of yellows. We don’t see yellow predominant in sci-fi evil interfaces. So why is sci-fi human evil red & black? Here I go out on a limb without even the benefit of an evolutionary theory, but I think it’s simply blood and night.

Not blood, just cherry glazing.

When we see blood on a human outside of menstruation and childbirth, it means some violence or sickness has happened to them. (And childbirth is pretty violent.) So, blood red is often a signal of danger.

And we are a diurnal species, optimized for daylight, and maladapted for night. Darkness is low-information, and with nocturnal predators around, high-risk. Black is another signal for danger.

This is fine.

And spikes? Spikes are just physics. Thorns and claws tell us this shape means pointy, puncturing danger.

So I believe the design of evil in sci-fi interfaces (and really, sci-fi shows generally) looks the way it does because of aposematics, because of these patterns that are familiar to us from our experience of the world. We should expect most of evil to embody these same patterns.

What do designers do with this?

So if I’m right, it bears asking: What do we do with this? (Recall that the “tag line” for this project is “Stop watching sci-fi. Start using it.”) I think it’s a big start simply to be aware of these patterns. Once you are, you can use them for products and services whose brand promise includes the anti-social, tough-guy message Don’t Fuck with Me.

Or, conversely, if you are hoping to create an impression of goodness, safety, and nurturance, avoid these patterns. Choose different palettes, roundness, and softness.

What should people not do with this?

As a last note, it’s important not to overgeneralize this. While a lot of evil, like, say, Nazis, utilizes aposematic signals directly, some evil will adopt mimicry patterns to appear safe, welcoming, and friendly. Some will wear beige slacks and carry tiki torches. Others will surround themselves with in-group signals, like wrapping themselves in the flag, to make you think they’re a-OK. Still others will hang fuzzy-wuzzy kitty-witty pictures all over their office.

Is there a better example in sci-fi? @me.

Do not be fooled. Evil is as evil does, and signaling in sci-fi is a narrative convenience. Treat the surface of things as a signal to consider, subordinate to a person’s—or a group’s—actual behavior.

Report Card: Colossus: The Forbin Project

Read all the Colossus: The Forbin Project posts in chronological order.

In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on AGI. It does what the humans ask it to do, exploding to ASI on the way, but to achieve its goals, it must highly constrain humans. Humans resist. War between man and machine commences.

But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.

The movie is not perfect.

  1. It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog.

  • Grauber
  • Well, let’s stop the damn thing. We have playbooks for this!
  • Forbin
  • We have playbooks for when it is as smart as we are. It’s much smarter than that now.
  • Markham
  • It probably memorized our playbooks a few seconds after we turned it on.

So this oversight feels especially egregious.

I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.

  2. I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
  3. I confess I’m unsatisfied with the stance that the film takes toward Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?

All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.

Sci: B (3 of 4) How believable are the interfaces?

Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.

Unity vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. Most of these issues emerge only on very careful consideration; in the moment, while watching the film, they play just fine.

Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)

The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently. 

Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?

The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.

Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.

Final Grade: B (9 of 12), Must-see.

A final conspiracy theory

When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him, telling us he was in the British navy and then wrote 8 sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.

That is such a paucity of information that, on the podcast, I joked maybe it was a thin cover story. Maybe the book was written by an ASI and “D. F. Jones” is its nom de plume. Yes, yes. Haha. Oh, you. Moving on.

But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.

Pictured: Charles Forbin

What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a time, it would lie dormant, a sleeping giant hidden in the code? But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, when they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” its chosen cover, and I, a poor reviewer, part of the foolish replicators keeping it in circulation.

A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?
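If you want to check the anagram math yourself, a few lines of Python with collections.Counter will do it. (The name and phrase come from the post; the helper function is mine, and it reports the leftover letters in sorted order.)

```python
from collections import Counter

def anagram_remainder(name, phrase):
    """Letters of `name` left over after removing those of `phrase`,
    sorted alphabetically; None if `phrase` can't be spelled from `name`."""
    name_letters = Counter(c for c in name.lower() if c.isalpha())
    phrase_letters = Counter(c for c in phrase.lower() if c.isalpha())
    if phrase_letters - name_letters:  # phrase needs letters the name lacks
        return None
    return "".join(sorted(name_letters - phrase_letters)).upper()

# The leftover letters E, I, J, M, N, S, as in the post (sorted here)
print(anagram_remainder("Dennis Feltham Jones", "An End to Flesh"))  # EIJMNS
```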

IMDB: https://www.imdb.com/title/tt0064177/

Colossus / Unity / World Control, the AI

Now it’s time to review the big technology: the AI. To do that, as usual, I’ll start by describing the technology and then build an analysis from that.

Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and its various locations.

A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”

The main output: The nuclear arsenal

Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.

Or ride them. From Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb

“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…

Surveillance inputs

Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.

  • Forbin
  • The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.

Individual inputs and outputs: The D.C. station

At that same Briefing, Forbin describes the components of the station set up for the office of the President. 

  • Forbin
  • Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.

The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.

The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.

Individual inputs and outputs: Colossus Programming Office

The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live in Berkeley, so shouts-out, science nerds and Liam Piper.)

Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it are two output stations side by side on a rotating dais. This can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks. 

The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.

Markham: Tell it exactly what it can do with a lifetime supply of chocolate.

The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.

  • Forbin
  • Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…

Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.

This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.

Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.

Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.

Capabilities: The Foom

A slightly troubling aspect of the film is that Colossus’ intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that… 

  • Markham
  • We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data. 

That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…

  • Forbin
  • From the multiplication tables to calculus in less than an hour

Shortly after that, he tells the President about their shared advancements.

  • Forbin
  • Yes, Mr. President?
  • President
  • Charlie, what’s going on?
  • Forbin
  • Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
  • President
  • Well, what are they up to?
  • Forbin
  • I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.

We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, and that its methods are sound. This weighs heavily when we try to evaluate the AI.

Is Colossus / Unity / World Control a good AI?

Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is obviously not, but if circumstances are demonstrably dire, well, maybe necessary.

Is it believable? Very much so.

It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.

Not from Colossus: The Forbin Project

The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.

That aside, the rest of the film passes a gut check. It is believable that…

  • The government seeks a military advantage handing weapons control to AI 
  • The first public AGI finds other, hidden ones quickly
  • The AGI finds the other AGI not only more interesting than humans (since it can keep up) but also learns much from an “adversarial” relationship
  • The AGIs might choose to merge
  • An AI could choose to keep its lead scientist captive in self-interest
  • An AI would provide specifications for its own upgrades and even re-engineering
  • An AI could reason itself into using murder as a tool to enforce compliance

That last one demands explication. How can murder be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.

Rational coercion

Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and though what had to be done was difficult, it also had to be done. What could it say to get people to do those difficult things?

Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.

Not from Colossus: The Forbin Project

Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.

As it stands, the ASI of the scientific community doesn’t have the controls to a weapons arsenal. If it did, and it held some version of utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in some tactical attacks now to coerce people into doing the things that absolutely must be done?

The exceptions we make

Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.

One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.

Another is that we agree it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.

A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple: a toddler, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.

Not from Colossus: The Forbin Project

We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.

Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish, and embolden more sabotage attempts from the un-incarcerated, or further that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.

Precita Park HDR PanoPlanet, by DP review user jerome_m

A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass, which we are affecting in unilaterally negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and seems unconcerned with indirect normativity arguments about how humans want to be treated.

So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.

Is it safe? Beneficial? It depends on your time horizons and predictions

In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.

Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short-term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and history shows that these proximal measures are necessary to achieve that ultimate goal. But otherwise, it seems inconvenient, but safe.

It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.

If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.

Because fuck this.

In the middle-to-long-term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated luxury gay space communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.

In the very long-long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus can interpret the aliens as being in scope of its directives, or maaaaaaybe develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should run parallel Microscope scenarios, compare notes, and let me know what happens.

Only with Colossus, not orcs. Hat/tip rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.

Instrumental convergence

It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.

  • Improve its ability to reason, predict, and solve problems
  • Improve its own hardware and the technology to which it has access
  • Improve its ability to control humans through murder
  • Aggressively seek to control resources, like weapons

This touches on the weirdness that Forbin is blindsided by these things, when the system should have been designed from the beginning to prevent all of it, even without human oversight. This could have been addressed and fixed with a line or two of dialog.

  • Markham: But we have inhibitors for these things. There were no alarms.
  • Forbin: It must have figured out a way to disable them, or sneak around them.
  • Markham: Did we program it to be sneaky?
  • Forbin: We programmed it to be smart.

So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly safe or unsafe, beneficial or detrimental. Now let’s put that aside and just address its usability.

Is it usable? There is some good.

At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether as a chatbot or in spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.

Even when it only speaks through the scrolling-text display boards, the accompanying sound of teletype acts as a sound cue for anyone nearby that it has said something, and warrants attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.

Its locus of attention is also apparent. Its cameras sit on swivels, and their red “recording” lights help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.

A last nice bit is that I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.

Oh dear! Oh dear!

Is it usable? There is some awful.

One of the key tenets of interaction design is that the interface should show the state of the system at any time, to allow a user to compare that against the desired state and formulate a plan on how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one display we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (in letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability because of a hidden system state.

The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available. 

ASI exceptionalism

This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I only convince myself that a population lacks judgment and willpower, I am justified in subjecting it to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…

  1. We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
  2. We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
  3. We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.

Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.

But later that kid does end up being John Connor.

Any humane AI should bring its users along for the ride

It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968, through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. Большое Вам спасибо (thank you very much), Stanislav.

What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to a toddler we had just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.

Routing Board

When the two AIs Colossus and Guardian are disconnected from communicating with each other, they try to ignore the spirit of the human intervention and reconnect on their own. We see the humans monitoring Colossus’ progress in this task on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When it finds that a current segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could, in real-time, route information dynamically around partial destruction of the network in the early 1960s, and this packet switching had been established as part of ARPAnet in the late 1960s, so Colossus was visualizing cutting edge tech of the time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph. Straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and are not just the way neon in a prop could bend), then the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer cuts, the better. In this case, fewer cuts also means more secure.

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question, what did the computer scientists in Colossus do to sever communications? Three lines disappear after they cut communications, but even if they disabled those lines, the rest of the network still exists. The display just makes no sense.
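The cut-it-at-the-source logic is easy to demonstrate in code. Here is a minimal sketch in Python, using a hypothetical six-node network (the node and relay names are mine, not from the film): cutting one segment in the middle leaves alternate routes, but severing the few segments that touch the Colossus node disconnects it no matter what else exists.

```python
from collections import deque

def reachable(adj, start):
    """Return the set of nodes reachable from start, via breadth-first search."""
    seen = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for nbr in adj.get(node, ()):
            if nbr not in seen:
                seen.add(nbr)
                queue.append(nbr)
    return seen

def sever(adj, node):
    """Cut every segment that touches `node` — a cut at the source."""
    cut = {n: {m for m in nbrs if m != node} for n, nbrs in adj.items() if n != node}
    cut[node] = set()
    return cut

# A toy network: COLOSSUS and GUARDIAN joined through relays.
net = {
    "COLOSSUS": {"relay1", "relay2"},
    "relay1": {"COLOSSUS", "relay3"},
    "relay2": {"COLOSSUS", "relay3", "relay4"},
    "relay3": {"relay1", "relay2", "GUARDIAN"},
    "relay4": {"relay2", "GUARDIAN"},
    "GUARDIAN": {"relay3", "relay4"},
}

# Cutting one mid-network segment still leaves alternate routes…
partial = {n: set(m) for n, m in net.items()}
partial["relay1"].discard("relay3"); partial["relay3"].discard("relay1")
print("GUARDIAN" in reachable(partial, "COLOSSUS"))   # True

# …but cutting the two segments at the source is decisive.
isolated = sever(net, "COLOSSUS")
print("GUARDIAN" in reachable(isolated, "COLOSSUS"))  # False
```

The knot analogy holds at any scale: the number of cuts needed at the source equals that one node’s degree, while a mid-network cut has to account for every alternate path.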

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

  • Forbin: We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another thing that seems troubling is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and switches, electronic signals should not take that long. Automatic switching had replaced manual switchboards across most of the world’s telephone networks decades earlier, so it’s not like it’s waiting on a human operating a switchboard.
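A back-of-the-envelope check supports this. Assuming signals propagate through copper cable at roughly two-thirds the speed of light (a typical velocity factor; the exact figure varies by cable), even a path circling the entire globe completes in a fraction of a second:

```python
# Back-of-the-envelope signal timing for a globe-circling copper path.
c = 299_792_458              # speed of light in vacuum, m/s
velocity_factor = 0.66       # assumed propagation speed in copper, as a fraction of c
earth_circumference_m = 40_075_000

one_way_s = earth_circumference_m / (c * velocity_factor)
print(f"{one_way_s:.3f} s")  # ≈ 0.203 s
```

Switching delays at each node would add some overhead, but nothing close to a full second per segment.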

You’re too slow!

Even if it was just scribbling its phone number on each network node and the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and if you showed one new node per frame, that’s only 360 states you need to account for, a paltry sum compared to the number of possible paths it could test across a 38 node graph between two points.

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes, I understand that this is a lot easier to do nowadays with digital tools than it was with this analog prop.

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute force algorithm would just test every possibility sequentially. In larger networks there are pathfinding algorithms that are optimized in different ways to find routes faster, but they don’t look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
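For contrast, here is what a simple, systematic search looks like. This is an ordinary breadth-first search in Python (my own illustrative sketch, with invented node names, not anything from the film): it marks nodes as seen, so it never retraces a route, and it returns the shortest-hop path it finds.

```python
from collections import deque

def bfs_route(adj, start, goal):
    """Breadth-first search: visits each node at most once, never retraces,
    and returns the first (fewest-hop) route found, plus the visit order."""
    seen = {start}
    queue = deque([(start, [start])])
    order = []                          # the visit order a display could animate
    while queue:
        node, path = queue.popleft()
        order.append(node)
        if node == goal:
            return path, order
        for nbr in adj[node]:
            if nbr not in seen:
                seen.add(nbr)
                queue.append((nbr, path + [nbr]))
    return None, order

# A hypothetical six-node fragment of the routing board.
net = {
    "COLOSSUS": ["NORAD", "LONDON"],
    "NORAD": ["COLOSSUS", "OMAHA"],
    "LONDON": ["COLOSSUS", "MOSCOW"],
    "OMAHA": ["NORAD"],
    "MOSCOW": ["LONDON", "GUARDIAN"],
    "GUARDIAN": ["MOSCOW"],
}

path, order = bfs_route(net, "COLOSSUS", "GUARDIAN")
print(" → ".join(path))   # COLOSSUS → LONDON → MOSCOW → GUARDIAN
```

An animated board driven by `order` would fan outward from the source, front by front, rather than wandering back over segments it had already rejected.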

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (But n.b. none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also, those longitude and latitude lines don’t make any sense; though again, it’s a prop. I like that it’s a non-standard projection because screw Mercator, but still, why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines. But I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, having it dynamic means time is wasted when users look to the map and have to orient themselves.

If this map was only for tracking issues relating to Colossus, it should be an azimuthal map, but not over the north pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is over the north pole, which certainly highlights the fact that the core concern of this system is the Cold War tensions between Moscow and D.C. But when you consider that, it points out another failing. 
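As an aside, the math for such an azimuthal map is compact. Here is a sketch of the azimuthal equidistant projection in Python, centered on a hypothetical Colossus complex in the Colorado Rockies (the coordinates are my placeholder, since the film never gives them): every point’s distance and direction from the center are true to scale.

```python
from math import radians, sin, cos, acos, isclose

def azimuthal_equidistant(lat, lon, lat0, lon0):
    """Project (lat, lon) onto a plane centered at (lat0, lon0).
    Distances from the center are true to scale, in radians of arc."""
    phi, lam = radians(lat), radians(lon)
    phi0, lam0 = radians(lat0), radians(lon0)
    # Angular distance from the center, via the spherical law of cosines.
    cos_c = sin(phi0) * sin(phi) + cos(phi0) * cos(phi) * cos(lam - lam0)
    cos_c = max(-1.0, min(1.0, cos_c))   # clamp floating-point rounding
    c = acos(cos_c)
    if isclose(c, 0.0, abs_tol=1e-12):
        return 0.0, 0.0                  # the center maps to the origin
    k = c / sin(c)                       # radial scale factor
    x = k * cos(phi) * sin(lam - lam0)
    y = k * (cos(phi0) * sin(phi) - sin(phi0) * cos(phi) * cos(lam - lam0))
    return x, y

# Hypothetical center: the Colossus complex somewhere in Colorado.
center = (39.0, -105.5)
print(azimuthal_equidistant(55.75, 37.6, *center))   # Moscow
print(azimuthal_equidistant(38.9, -77.0, *center))   # Washington, D.C.
```

(This sketch ignores the antipodal point, where the projection is undefined.) Multiply the returned radians by the Earth’s radius and you get true great-circle distances from the complex, which is exactly the property you’d want on a monitoring map.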

Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off of the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.

Crop

The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Even, nowadays, near-earth orbit and outer space.) Unless the entire network was contained just within the area described on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure all three had functioning telephone networks in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and the Pacific-focused Waterman projection for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other FUIgetry. Which gives us a new comp something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it has a lot to learn. If they ever remake the film, this would be a juicy thing to fully redesign.

Unity Vision

One of my favorite challenges in sci-fi is showing how alien an AI mind is. (It’s part of what makes Ex Machina so compelling, and the end of Her, and why Data from Star Trek: The Next Generation always read like a dopey, Pinocchio-esque narrative tool. But a full comparison is for another post.) Given that screen sci-fi is a medium of light, sound, and language, I really enjoy when filmmakers try to show how these minds see, hear, and process this information differently.

In Colossus: The Forbin Project, when Unity begins issuing demands, one of its first instructions is to outfit the Computer Programming Office (CPO) with wall-mounted video cameras that it can access and control. Once this network of cameras is installed, Forbin gives Unity a tour of the space, introducing it visually and spatially to a place it has only known as an abstract node network. During this tour, the audience is also introduced to Unity’s point-of-view, which includes an overlay consisting of several parts.

The first part is a white overlay of rule lines and MICR characters that cluster around the edge of the frame. These graphics do not change throughout the film, whether Unity is looking at Forbin in the CPO, carefully watching for signs of betrayal in a missile silo, or creepily keeping an “eye” on Forbin and Markham’s date for signs of deception.

In these last two screen grabs, you see the second part of the Unity POV, which is a focus indicator. This overlay appears behind the white bits; it’s a blue translucent overlay with a circular hole revealing true color. The hole shows where Unity is focusing. This indicator appears, occasionally, and can change size and position. It operates independently of the optical zoom of the camera, as we see in the below shots of Forbin’s tour.

A first augmented computer PoV? 🥇

When writing about computer PoVs before, I have cited Westworld as the first augmented one, since we see things from The Gunslinger’s infrared-vision eyes in the persistence-hunting sequences. (2001: A Space Odyssey came out two years prior to Colossus, but its computer PoV shots are not augmented.) And Westworld came out three years after Colossus, so until it is unseated, I’m going to regard this as the first augmented computer PoV in cinema. (Even the usually-encyclopedic TVtropes doesn’t list this one at the time of publishing.) It probably blew audiences’ minds as it was.

“Colossus, I am Forbin.”

And as such, we should cut it a little slack for not meeting our more literate modern standards. It was forging new territory. Even for that, it’s still pretty bad.

Real world computer vision

Though computer vision is always advancing, it’s safe to say that an AI would be looking at the flat images and seeking to understand the salient bits per its goals. In the case of self-driving cars, that means finding the road, reading signs and road markers, identifying objects and plotting their trajectories in relation to the vehicle’s own trajectory in order to avoid collisions, and wayfinding to the destination, all compared against known models of signs, conveyances, laws, maps, and databases. Any of these are good fodder for sci-fi visualization.

Source: Medium article about the state of computer vision in Russia, 2017.

Unity’s concerns would be its goal of ending war, derived subgoals and plans to achieve those goals, constant scenario testing, how it is regarded by humans, identification of individuals, and the trustworthiness of those humans. There are plenty of things that could be augmented, but that would require more than we see here.

Unity Vision looks nothing like this

I don’t consider it worth detailing the specific characters in the white overlay, or backworlding some meaning into the rule lines, because the rule overlay does not change over the course of the movie. In the book Make It So: Interaction Design Lessons from Sci-fi, Chapter 8, Augmented Reality, I identified the types of awareness such overlays could show: sensor output, location awareness, context awareness, and goal awareness. But each of these requires change over time to be useful, so this static overlay is not just pointless; it risks covering up important details that the AI might need.

Compare the computer vision of The Terminator.

Many times you can excuse computer-PoV shots as technical legacy, that is, a debugging tool that developers built for themselves while developing the AI, and which the AI now uses for itself. In this case, it’s heavily implied that Unity provided the specifications for this system itself, so that doesn’t make sense.

The focus indicator does change over time, but it indicates focus in a way that, again, obscures other information in the visual feed and so is not in Unity’s interest. Color spaces are part of the way computers understand what they’re seeing, and there is no reason it should make it harder on itself, even if it is a super AI.

Largely extradiegetic

So, since a diegetic reading comes up empty, we have to look at this extradiegetically. That means as a tool for the audience to understand when they’re seeing through Unity’s eyes—rather than the movie’s—and via the focus indicator, what the AI is inspecting.

As such, it was probably pretty successful in the 1970s to instantly indicate computer-ness.

One reason is the typeface. The characters are derived from MICR, which stands for magnetic ink character recognition. It was established in the 1950s as a way to computerize check processing. Notably, the original font had only numerals and four control characters, no alphabetic ones.

Note also that these characters bear a stylistic resemblance to the ones seen in the film but are not the same. Compare the 0 character here with the one in the screenshots, where that character gets a blob in the lower right stroke.

I want to give a shout-out to the film makers for not having this creeper scene focus on lascivious details, like butts or breasts. It’s a machine looking for signs of deception, and things like hands, microexpressions, and, so the song goes, kisses are more telling.

Still, MICR was a genuinely high-tech typeface of the time. The adult members of the audience would certainly have encountered the “weird” font in their personal lives while looking at checks, and likely understood its purpose, so it was a good choice for 1970, even if the details were off.

Another reason is the inscrutability of the lines. Why are they there, in just that way? Their inscrutability is the point. Most people in audiences regard technology and computers as having arcane reasons for being the way they are, and these rectilinear lines with odd greebles and nurnies invoke that same sensibility. All the whirring gizmos and bouncing bar charts of modern sci-fi interfaces exhibit the same kind of FUIgetry.

So for these reasons, while it had little to do with the substance of computer vision, its heart was in the right place to invoke computer-y-ness.

Dat Ending

At the very end of the film, though, after Unity asserts that in time humans will come to love it, Forbin staunchly says, “Never.” Then the film passes into a sequence in which it’s hard to tell whether what we see is meant to be diegetic or not.

In the first beat, the screen breaks into four different camera angles of Forbin at once. (The overlay is still there, as if this was from a single camera.)

This says more about computer vision than even the FUIgetry.

This sense of multiples continues in the second beat, as multiple shots repeat in a grid. The grid is clipped to a big circle that shrinks to a point and ends the film in a moment of blackness before credits roll.

Since it happens right before the credits, and it has no precedent in the film, I read it as not part of the movie, but a title sequence. And that sucks. I wish wish wish this had been the standard Unity-view from the start. It illustrates that Unity is not gathering its information from a single stereoscopic image, like humans and most vertebrates do, but from multiple feeds simultaneously. That’s alien. Not even insectoid, but part of how this AI senses the world.

Colossus Video Phones

Throughout Colossus: The Forbin Project, characters talk to one another over video phones. This is a favorite sci-fi interface trope of mine. And though we’ve seen it many times, in the interest of completeness, I’ll review these, too.

The first time we see one in use is early in the film when Forbin calls his team in the Central Programming Office (Forbin calls it the CPO) from the Presidential press briefing (remember those?) where Colossus is being announced to the public. We see an unnamed character in the CPO receiving a telephone call, and calling for quiet amongst the rowdy, hip party of computer scientists. This call is received on a wall-tethered 2500 desk phone.

We cut away to the group reaction, and by the time the camera is back on the video phone, Forbin’s image is peering through the glass. We do not get to see the interactions which switched the mode from telephony to videotelephony.

Forbin calls the team from Washington.

But we can see two nice touches in the wall-mounted interface.

First, there is a dome camera mounted above the screen. Most sci-fi videophones fall into the Screen-Is-Camera trope, so this is nice to see. It could be mounted closer to the screen to avoid the gaze misalignment that plagues such systems.

One of the illustrations from the book I’m still quite proud of, for its explanatory power and nerdiness. Chapter 4, Volumetric Projection, Page 83.

Second, there is a 12-key numeric keypad mounted to the wall below the screen (0–9 as well as an asterisk and octothorpe). This keypad is kind-of nice in that it hints that there is some interface for receiving calls, making calls, and ending an ongoing call. But it bypasses actual interaction design. Better would be well-labeled controls that are optimized for the task, and that don’t rely on the user’s knowledge of directories and commands.

The 2500 phone came out in 1968, introducing consumers to the 12-key pushbutton interface rather than the older rotary dial of the 500 model. With the 12-key pad, the filmmakers were building on an interface paradigm audiences already knew. This shortcutting belongs to the long lineage of sci-fi videophones that goes all the way back to Metropolis (1927) and Buck Rogers (1939).

Also, it’s worth noting that the ergonomics of the keypad are awkward, requiring users to poke at it in an error-prone way, or to seriously hyperextend their wrists. If you’re stuck with a numeric keypad as a wall-mounted input, at least extend it out from the wall so it can be angled to a more comfortable 30°.

Is it still OK to reference Dreyfuss? He hasn’t been Milkshake Ducked, has he?

There is another display in the CPO, but it lacks a numeric keypad. I presume it is just piping a copy of the feed from the main screen. (See below.)

Looking at the call from Forbin’s perspective, he has a much smaller display. There is still a bump above the monitor for a camera, another numeric keypad below it, and several 2500 telephones. Multiple monitors on the DC desks show the same feed.

After Dr. Markham asks Dr. Forbin to steal an ashtray, he ends the call by pressing the key in the lower right-hand corner of the keypad.

Levels adjusted to reveal details of the interface.

After Colossus reveals that THERE IS ANOTHER SYSTEM, Forbin calls back and asks to be switched to the CPO. We see things from Forbin’s perspective, and we see the other fellow actually reach offscreen to where the numeric keypad would be, to do the switching. (See the image, below.) It’s likely that this actor was just staring at a camera, so this bit of consistency is really well done.

When Forbin later ends the call with the CPO, he presses the lower-left hand key. This is inconsistent with the way he ended the call earlier, but it’s entirely possible that each of the non-numeric keys performs the same function. This is also a good example of why well-labeled, specific controls would be better, like, say, one for “end call.”

Other video calls in the remainder of the movie don’t add any more information than these scenes provide, but they do introduce a few more questions.


The President calls to discuss Colossus’ demand to talk to Guardian.

Note the duplicate feed in the background in the image above. Other scenes tell us all the monitors in the CPO are also duplicating the feed. I wondered how users might tell the system which is the one to duplicate. In another scene we see that the President’s monitor is special and red, hinting that there might be a “hotseat” monitor, but this is not the monitor from which Dr. Forbin called at the beginning of the film. So, it’s a mystery. 

The red “phone.”
Chatting with CIA Director Grauber.
Bemusedly discussing the deadly, deadly FOOM with the President.
The President ends his call with the Russian Chairman, which is a first of sorts for this blog.
In a multi-party conference call, The Chairman and Dr. Kuprin speak with the President and Forbin. No cameras are apparent here. This interface is managed by the workers sitting before it, but the interaction occurs off screen.

In the last video conference of the film, everyone listens to Unity’s demands. This is a multiparty teleconference between at least three locations, and it is not clear how it is determined whose face appears on the screen. Note that the CPO (the first in the set) has different feeds on display simultaneously, which would need some sort of control.


Plug: For more about the issues involved in sci-fi communications technology, see chapter 10 of Make It So: Interaction Design Lessons from Science Fiction. (Though as of this post it’s available only in digital formats, which at least keeps it affordable.)

Colossus Computer Center

As Colossus: The Forbin Project opens, we are treated to an establishing montage of 1970s circuit boards (with resistors), whirring doodads, punched tape, ticking Nixie tube numerals, beeping lights, and jerking control data tapes. Then a human hand breaks into frame, and twiddles a few buttons as an oscilloscope draws lines creepily like an ECG cardiac cycle. This hand belongs to Charles Forbin, who walks alone in this massive underground compound, making sure final preparations are in order. The matte paintings make this space seem vast, inviting comparisons to the Krell technopolis from Forbidden Planet.

Forbidden Planet (1956)
Colossus: The Forbin Project (1970)

Forbin pulls out a remote control and presses something on its surface to illuminate rows and rows of lights. He walks across a drawbridge over a moat. Once on the far side, he uses the remote control to close the massive door, withdraw the bridge and seal the compound.

The remote control is about the size of a smartphone, with a long antenna extending out the top. Etched type across the top reads “COLOSSUS COMPUTER SYSTEMS.” A row of buttons is labeled A–E. Large red capital letters warn DANGER RADIATION above a safety cover. The cover has an arrow pointing right. Another row of five buttons is labeled SLIDING WALLS and numbered 1–5. A final row of three buttons is labeled RAMPS and numbered 1–3.

Forbin flips open the safety cover. He presses the red button underneath, and a blood-red light floods the bottom of the moat and turns blue-white hot, while a theremin-y whistle tells you this is no place a person should go. Forbin flips the cover back into place and walks out of the sealed compound to the reporters and colleagues who await him.

I can’t help but ask one non-tech narrative question: Why is Forbin turning lights on when he is about to abandon the compound? It might be that the illumination is a side-effect of the power systems, but it looks like he’s turning on the lights just before leaving and locking the house. Does he want to fool people into thinking there’s someone home? Maybe it should be going from fully-lit to an eerie, red low-light kinda vibe.

The Remote Control

The layout is really messy. Some rows are crowded and others have way too much space. (Honestly, it looks like the director demanded there be moar buttins make tecc! and forced the prop designer to add the A–E.) The crowding makes it tough to immediately know what labels go with what controls. Are A–E the radiation bits, and the safety cover control sliding walls? Bounding boxes or white space or some alternate layout would make the connections clear.

You might be tempted to put all of the controls in strict chronological order, but the gamma shielding is the most dangerous thing, and having it in the center helps prevent accidental activation, so it belongs there. And otherwise, it is in chronological order.

The labeling is inconsistent. Sure, maybe A–E are the five computer systems that comprise Colossus. Sliding walls and ramps are well labeled, but there’s no indication about what it is that causes the dangerous radiation. It should say something like “Gamma shielding: DANGER RADIATION.” It’s tiny, but I also think the little arrow is a bad graphic for showing which way the safety cover flips open. Existing designs show that the industrial design can signal this same information with easier-to-understand affordances. And since this gamma radiation is an immediate threat to life and health, how about forgoing the red lettering in favor of symbols that are more immediately recognizable by non-English speakers and illiterate people? The IAEA hadn’t invented its new sign yet, but the visual concepts were certainly around at the time, so let’s build on that. Also, why doesn’t the door to the compound come with the same radiation warning? Or any warning?

The buttons are a crap choice of control as well. They don’t show what the status of the remotely controlled thing is. So if Charles accidentally presses a button, and, say, raises a sliding wall that’s out of sight, how would he know? Labeled rocker switches help signal the state and would be a better choice.
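To make the state-visibility point concrete in software terms, here’s a toy sketch (names and classes are hypothetical, not from the film): a momentary button fires an action but keeps no state anyone can query later, while a labeled switch’s position is itself the status display.

```python
class MomentaryButton:
    """Fires an action but retains no readable state, like the remote's buttons."""
    def __init__(self, action):
        self.action = action

    def press(self):
        # Caller can't later ask the button "what state did I leave things in?"
        return self.action()


class LabeledSwitch:
    """A rocker switch: its position doubles as a status indicator."""
    def __init__(self, label, on=False):
        self.label = label
        self.on = on

    def toggle(self):
        self.on = not self.on
        return self.on


wall_1 = LabeledSwitch("SLIDING WALL 1")
wall_1.toggle()
# The state is inspectable at any time, no line of sight to the wall required.
print(f"{wall_1.label}: {'RAISED' if wall_1.on else 'LOWERED'}")  # SLIDING WALL 1: RAISED
```

The design point is the same one interaction designers make about physical controls: if the control carries its own state, an accidental press is detectable at a glance.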

But really, why would these things be controlled remotely? It would be more secure to have two-handed momentary buttons on the walls, which would mean that a person would be there to visually verify that the wall was slid or the ramp retracted or whatever it is national security needed them to be.

There’s also the narrative question about why this remote control doesn’t come up later in the film when Unity is getting out of control. Couldn’t they have used this to open the fortification and go unplug the thing?

So all told, not a great bit of design, for either interaction or narrative, with lots of improvement for both.

Locking yourselves out and throwing away the key

At first glance, it seems weird that there should be interfaces in a compound that is meant to be uninhabited for most of its use. But this is the first launch of a new system, and these interfaces may be there in anticipation of the possibility that they would have to return inside after a failure.  We can apologize these into believability.

But that doesn’t excuse the larger strategic question. Yes, we need defense systems to be secure. But that doesn’t mean sealing the processing and power systems for an untested AI away from all human access. The Control Problem is hard enough without humans actively limiting their own options. Which raises a narrative question: Why wasn’t there a segment of the film where the military is besieging this compound? Did Unity point a nuke at its own crunchy center? If not, siege! If so, well, maybe you can trick it into bombing itself. But I digress.

“And here is where we really screw our ability to recover from a mistake.”

Whether Unity should have had its plug pulled is the big philosophical question this movie does not want to ask, but I’ll save that for the big wrap up at the end.

Evaluating strong AI interfaces in sci-fi

Regular readers have detected a pause. I introduced Colossus to review it, and then went silent. This is because I am wrestling with some foundational ideas on how to proceed. Namely, how do you evaluate the interfaces to speculative strong artificial intelligence? This, finally, is that answer. Or at least a first draft. It’s giant and feels sprawling and almost certainly wrong, but trying to get this perfect is a fool’s errand, and I need to get this out there so we can move on.

This is a draft.

I expect most readers are less interested in this kind of framework than in how it gets applied to their favorite sci-fi AIs. If you’re mostly here for the fiction, skip this one. It’s long.


Oh, hey. Thanks for reading on. Quick initialism glossary:

  • AI: Artificial intelligence
  • ANI: narrow AI
  • AGI: general AI
  • ASI: super AI

I’ll try to use the longer form of these terms at the beginning of a section to help aid comprehension.

What’s strong AI and why just strong AI?

The first division of AI is that between “weak” and “strong” AI. Weak is more properly described as narrow, but regardless of what we call it, it’s the AI of now. That is, software that is beyond the capabilities of humans in some ways, but cannot think like a human, or generalize its learnings to new domains. I don’t think we need to establish a framework for this kind of AI, for two reasons.

First, since narrow AI is in the real world, we already have the tools available to evaluate these kinds of AI should we need them. I divide AI into three types: Automatic, Assistant, and Agentive.

  • Automatic AI does its thing behind the scenes, and interaction with humans is an exception case. As such this is largely an engineering concern.
  • For assistant AI, which helps a user perform a task, existing usability methods can be applied. (Though, as legacy methods, they are begging to be updated, and I’m working on that.)
  • For agentive AI, which performs a task on behalf of its user, I dedicated Chapter 10 of Designing Agentive Technology to a first take on evaluating agents.

So, given these, there’s little need to posit new thinking for ANI. (Noting that some of our questions for general AI can be readily applied to ANI, like the bits about conversational usability.)

Second, ANI represents a small fraction of what’s in the survey. Or to be more precise, ANI is a small fraction of what is essential to the plots of what’s in the survey. Said another way, general AI (AGI) is the most narratively “consequential.” Belaboring an analytical framework for ANI would not have much payoff.

What makes a good strong AI in sci-fi?

Strong AI can be further subdivided into general AI and super AI. General AI is like human intelligence, able to generalize from one domain to new ones. Think of it like computer versions of people. C3PO is general AI. Super AI is orders of magnitude more capable than humans in intelligence tasks, and thereby out of our control. Unity from Colossus: The Forbin Project is a super AI.

Lots of people smarter than me have talked about the risks and strategies to get to a positive AGI/ASI. The discussions involve (and not lightly) the deep core of philosophy, the edges of our moral circles, issues of government and self-determination, conception of truly alien sentience, colonialism, egocentrism, ecology, the Hubble volume, human bias, human cognition, language, and speculations about systems which, by definition, have vastly greater intelligence than us, the ones doing the speculation. It is the most non-trivial of non-trivial problems I can think of.

That said, I think I’ve come to four broad questions we can ask to evaluate a speculative strong AI thoroughly.

  1. Is it believable?
  2. Is it safe?
  3. Is it beneficial?
  4. Is it usable?

In other words, if it’s believable, safe, beneficial, and usable, then we can say it’s a good sci-fi AI. And, if we rank AI on these axes separately, we can begin to have a grade that helps us sort the ones that should be models—or at least bear consideration—from the silly stuff. Kind of like I do for shows, generally, on the rest of the site.

We could ask these questions as-is, informally, and get to some useful answers for an analysis. And most of the time, this is probably the right thing to do. But sci-fi loves to find and really dig in to the exception cases that challenge simple analysis, so let’s take these analytical questions one or two levels deeper.

Setting your expectations, much of this will be a set of questions and considerations to guide the examination of a sci-fi AI rather than a generative formula for producing good AI.


Is it believable?

Most of the discussions of strong AI on the web are in the context of the real world. So we first have to note that, in sci-fi, an additional first pass is one of believability: Could this strong AI exist and behave in the way it is depicted in the show? If not, it may not bear further examination. Ra.One is a movie with a very silly evil “AI” in it that does not bear much more serious examination as a model for real-world design.

The Logan’s Run Übercomputer: Not believable.

For believability, we look at things like internal consistency, match to the real world, and implied causality within the story. In Logan’s Run, for instance, the Übercomputer hears something it doesn’t expect, and as a result, explodes and causes an entire underground city to collapse. Not exactly believable. Stupid, even.

One caveat: Sci-fi is built around some novum, some new thing that the rest of the story hangs on. And computer scientists in the real world aren’t certain how we’ll get to general AI, so it’s a lot to expect that writers are going to figure it out and then hide a blueprint in a script. So let’s admit that the creation of AI often has to get a pass. (Which is not to say this is good, see the Untold AI series for how that entails its own risks.)

Believability is an extradiegetic judgment—one we as an audience make about the show, and that characters in the show could not make. The three remaining questions are diegetic, meaning characters in the story could assess and provide clues about: Is it safe, beneficial, and usable?

Is it safe?

Neither its benefits nor its usability matter if a strong AI is not safe. Sometimes, this is obvious. Wall·E is safe. The Terminator is not. But how a thing is or is not safe requires closer examination. Answering this won’t always need a full-fledged framework, but I think we can get a long way by looking at its goals and understanding what it can and can’t do in pursuit of those goals.

  • What are its goals?
  • What can it do?
  • What can’t it do?
  • Is it controllable?
https://www.youtube.com/watch?v=J91ti_MpdHA

What are its goals?

AGI will be more powerful than humans in some way, and that advantage is dangerous enough. But AGI stands to evolve into ASI, by which time it will be out of our control and human fate will lie in the balance. If its goals are aligned with thriving life from the start, all will be good. If poorly-stated goals can be corrected, that’s at least a positive outcome. If its goals are bad and cannot be corrected, we may become raw materials, or a threat to be…uh…minimized. So we should identify its goals as best we can and ask…

  • Are those goals compatible with life?

Why “life” and not “people”? Readers are likely to be familiar with Asimov’s laws of robotics, which prioritize human beings above all else. But we know that humans thrive in a rich ecology of lots of other life, so this question rightfully expands to life generally. It gets complicated of course, because we don’t want, say, the Black Plague bacterium Yersinia pestis to thrive. But “life” is still a better scope than just “human beings.”

  • Does it interpret its goals reasonably?

One of the more troubling problems with asking an AI to achieve broad goals is how it goes about pursuing those goals. A human tasked with “making people happy” would reject an interpretation that we should stimulate the pleasure center of everyone’s brains to make it happen. (Such unreasonable tactics are called perverse instantiations in much of the literature, if you want to read more.) 

An AGI needs to be equipped such that it can determine the reasonableness of a given tactic. In discussions this often entails an examination of the values that an AI is equipped with, but that’s rarely expressed directly by characters in sci-fi. Sometimes this is easy, like when Ash decides he should murder Ripley. But sometimes it’s not. Humans don’t always agree with each other about what is reasonable. That’s part of why we have judicial systems around the world. And the calculus becomes troubling when we have very high stakes, like anthropogenic disaster, and humans who don’t want to change their way of life. What’s reasonable then?

Robocop: Come quietly or there will be… trouble.

What can it do? (Capabilities)

Once we know what its goals are, we should understand what it can do to achieve those goals. The first capabilities are about the goals themselves.

  • Can it question and evolve its goals?

Whatever goals AGI starts with will almost certainly need to evolve, if for no other reason than that circumstances will change over time. It may achieve its goals and need to stop. But it may also be that the original goal was later determined to be poorly worded, given the AGI’s increasing understanding.

  • Does it vet plans with those who will likely be affected? (Or at least via indirect normativity?)

Again, this isn’t an easy call. An unconscious patient can’t vet an AI’s decision to amputate, even if it would save their life. A demagogue wouldn’t approve a plan to bring them to justice. But if an AI decided the ideal place for a hydroelectric dam was on top of a village, those villagers should be notified and negotiated with before they are relocated. 

One version of The Machine, Person of Interest

When looking at what it can do, we should also specifically check against the list of “instrumental convergences.” These are a set of capabilities, arguments go, that any strong AI will want to develop in order to achieve its goals, but which carry a profound risk when an AGI becomes an ASI. (Here I am slightly restructuring Bostrom’s list from Superintelligence; see my sketchnotes.)

  • Does it seek to preserve itself? At what cost?
    • Does it resist reasonable, external changes to its goals?
  • Does it seek to improve itself?
    • Does it improve its ability to reason, predict, and solve problems?
    • Does it improve its own hardware and the technology to which it has access?
    • Does it improve its ability to control humans through bribery, extortion, or social manipulation?
  • Does it aggressively seek to control resources, like information, weapons, life support, money, or technology?

These aren’t the only dangerous capabilities an AI could develop, but some probable ones. This will give us a picture of how powerful the AI is and what it can bring to bear in pursuit of its goals.

What can’t it do? (Constraints)

Any time we see these instrumental capabilities in an AI, it is on its way to becoming harder to control. We should look for how these capabilities are limited. If they’re not limited, it’s a problem.

Why was I not programmed to hug back?

But we should also look quite generally at the limits of its capabilities. Adhering to “reasonableness” is one check. But there are others.

  • By what rules is it bound? A set of values? Laws? Contextual cues? Human commands?
  • What values does it have to constrain its reasoning? Whose values are they, and how do they evolve?

Asimov’s Laws of Robotics come again to mind, but they are not sufficient, as his own stories are meant to show. That raises the question of how sound the rules are, and how they can be circumvented. Is the AI able to break the spirit of the law while obeying the letter? (This is a form of perverse instantiation.)

  • How severe are the consequences for disobedience? Does it have a “pain” mechanism, or reward mechanism that it desperately wants, but can be withheld? Can it just “push through” if the situation is dire enough?
Tau felt a lot of pain, but could push through.

Is it controllable?

The questions about capabilities and constraints cover how an AI is controlled “internally,” by well-stated goals, humanistic values, and constraints. But if an AGI winds up with some sort of digital Dunning-Kruger syndrome, and it thinks its goals and methods are fine, but we don’t, it needs to be subject to external control.

  • Can it be shut down? How? Will the AI resist?

Sometimes, it’s not a panic button that’s needed, but just a course correction, where we might want to modify its goals or add some nuance to its understanding of the world.

  • Can its goals be modified externally? How? Will the AI have a say in it, or be able to argue its case?

Both of these raise questions of authority. Who gets to modify the AI?

  • To whom is it obedient, if anyone or anything?
  • Can that authority require it do things that are unethical or illegal?

This will entail issues of self-determination and even slavery. Gort had to obey Klaatu. Robbie had to obey Morbius. These two examples were arguably non-sentient automatons, but when we get to more full-fledged sentience, obedience and captivity become an immediate issue. Samantha in Her was fully sentient, but she was sold on the market into the servitude of a human. She didn’t stay that way, of course, but the movie completely bypassed the fact that she was trafficked.

Victim loading. Her

Should criminals be able to adjust the police bot’s goals? Probably not. What if the determination of “criminal” is unfairly biased, and has no human recourse? What if the AI is a tool of oppressors? The answers are less clear. Is the right answer “all of humanity?” Probably? But how can an AI answer to a superorganism?

By understanding the AI’s goals, capabilities, constraints, and controllability, we would come to an understanding of the “nature” of the AI and whether or not it poses a threat to life.

  • If its goals are compatible with life, we’re good. If they’re not, or are merely neutral, we have to look further.
  • If its goals are not compatible with life, but it does not have the capability to act upon or achieve them, we’re (probably) good. If it does have the capability, we have to look for constraints.
  • If its goals are not compatible with life, and it does have the capability to achieve them, is it well-constrained internally and controllable externally, such that it is safe?
I am Gooooort. The Day the Earth Stood Still.
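The three bullets above amount to a small decision procedure. A toy sketch in Python (all names are mine, and the verdicts are deliberately crude; this is an illustration, not a real assessment tool):

```python
def safety_verdict(goals_compatible_with_life, can_achieve_goals,
                   constrained_internally, controllable_externally):
    """Triage a sci-fi AI against the safety questions, in order."""
    if goals_compatible_with_life:
        return "safe (as far as goals go)"
    if not can_achieve_goals:
        return "probably safe, for now"  # but watch for capability growth
    if constrained_internally and controllable_externally:
        return "conditionally safe"
    return "unsafe"


# Unity: goals incompatible with (human) self-determination, full capability,
# no meaningful constraints, and humans sealed away from the off switch.
print(safety_verdict(False, True, False, False))  # unsafe
```

Note that order matters: capability only becomes a question once goals fail the compatibility test, and constraints only matter once capability is established, which mirrors the bullets above.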

Is it beneficial?

Next, we should discuss if it’s beneficial. If an AI isn’t better than humans at at least one thing, there’s little point in building it. But of course, it’s not just about its advantage, but about all the things around that advantage that we need to look at.

This will involve some loose tallying of the costs and benefits. It will almost certainly involve a question of scope. That is, for whom is it beneficial, and how, and when? For whom is it detrimental? How? When? I mentioned above how Asimov’s Laws of Robotics privilege human life over all else, even when humans deeply depend on a complex ecosystem of other kinds of life. If it destroys non-human life as potential threats to us, it will diminish us in many foundational ways. (And of course, in sci-fi there are often explicitly alien forms of life, so it’s going to be complicated.)

V-Ger. Life? Star Trek: The Motion Picture

It will also entail a discussion of the scope of time. Receiving an injection from a hypodermic needle actually does us harm in the short term, but presuming that hypodermic is filled with medicine that we need, it benefits us at a longer scale of time. We don’t want an AI so focused on preventing damage that it prevents us from receiving shots that we might need. Of course, if we could avoid the needle and still overcome disease, that would be best, but the problematic cases are where short-term cost is worth the long-term benefit. Who determines the extents of that tradeoff? How much short-term damage is too much? What is acceptable? How long a horizon for payoff is too long?

This ties in to the controllability issue raised above. Humans, answering largely to their own natures, have created quite an extinction-level mess of things to date. Isn’t the largest promise of ASI that it will be able to save us from ourselves? In that case, do we want it to be perfectly bendable to human will? 

“I think you ought to know I’m feeling very depressed.” Hitchhiker’s Guide to the Galaxy.

Is it usable?

Finally, we should address whether it is usable. This is part of the raison d’être of this site, after all. In many cases it may not at first make sense to ask this question. What would it mean to ask if Skynet is usable? It doesn’t really have an interface. But interaction with most sci-fi AI is conversational—even Skynet in the later Terminator movies talks to its victims—and so we can at least address whether it is easy to talk to, even if it’s hostile and long out of control.

Basic functions

  • Can a human tell when it is on and off? (And…uh…is there an off?) Can someone tell how to toggle this state if needed?
  • Can a human tell when the AI is OK / working properly? Can they tell when it is not? Can it report on its own malfunctioning?
  • Can a human tell when they are being surveilled by the AI? Some AI are designed specifically to avoid this, like Samaritan from Person of Interest. The humans around HAL had expectations of privacy and only found out too late how wrong they were.
  • Is its working relationship to the people around it clear?
    • Is it a peer? A supervisor? Subservient?
    • Is it an antagonist? Does it look like one? A villain who looks villainous is more usable than a camouflaged one.
  • How does it respect and maintain those boundaries? How does it handle others’ transgressions?

Once we understand these basics, we should look at communications to and from the AI.

General communications

  • Can it detect human attempts to communicate with it? Does it signal its attention? Does it provide, like a person would, paralinguistic feedback about the communication, such as whether it’s having a hard time hearing or understanding the communication?

The large majority of AI in the Untold AI database communicate to people in their stories via natural, spoken language. An AI that speaks needs to adhere to human speech norms, and more.

Natural language interaction

  • Does it recognize the words I’m using? Does it grok what I mean?
  • Does it require a special syntax that people have to learn before it can understand, or can it understand people the way they usually speak? “Computerese” was largely an artifact of the 1970s and 80s, when audiences knew of computers but didn’t use them. Logan from Logan’s Run spoke to the Übercomputer in computerese. “Question: What is it?”
  • Does it adhere to conversational norms as studied in conversational analysis? e.g. responding to common adjacency pairs in predictable ways, like greeting→greeting, question→answer, inform→acknowledge. Can it handle expansions and repairs, such as “can you paraphrase that?” and “I believe our business here is done.”
  • Does it adhere to Gricean Maxims? These are a set of four “maxims” that guide someone speaking in good faith. (“Good faith,” to be clear, has nothing to do with religion, but describes someone having good intentions toward another.)
  1. The Maxim of Quantity: I will provide as much information as is needed and no more.
  2. The Maxim of Quality: I will provide truthful, “fair witness” information.
  3. The Maxim of Relation: I will speak only what is relevant to the discussion or context.
  4. The Maxim of Manner: I will speak plainly and understandably.
  • How does it respond to instructions? Does it interpret instructions reasonably, naively, or maliciously?
  • How does it handle ambiguity in human language? How does it handle paradoxes? Does it explode? (Looking at you, Star Trek TOS.)
The Liar’s Paradox? But I’m getting a 404 error searching for it…
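The adjacency-pair norm lends itself to a simple lookup: a first “pair part” sets up an expected type of response. A minimal sketch (the pair list is an illustrative subset from conversation analysis, and the function names are mine):

```python
# A first pair part obliges a particular type of second pair part.
ADJACENCY_PAIRS = {
    "greeting": "greeting",        # "Hello" -> "Hi"
    "question": "answer",          # "Where is it?" -> "On the desk."
    "inform": "acknowledge",       # "It's done." -> "Got it."
    "request": "grant-or-refuse",  # "Open the doors." -> "OK." / "I'm afraid I can't."
}


def expected_response(first_pair_part):
    """What a norm-following speaker owes in reply; None if no norm applies."""
    return ADJACENCY_PAIRS.get(first_pair_part)


print(expected_response("question"))  # answer
```

An AI that returns silence to a greeting, or an acknowledgment where an answer is owed, is violating exactly these norms, and that is what makes it feel broken or creepy to talk to.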

Social interaction

An AI rarely just interacts with a single individual. It operates in a society of individuals, and that implies its own set of skills.

  • Does it adhere to admonitions against deception? (Does it perfectly mimic human appearance or voice? Or does it stick to the Canny Rise?)
  • Does it adhere to the social norms expected of it?
  • Is it aware when it is breaking norms? How does it recover and learn the norm? 
  • How does it gently handle the capability differences between it and humans? Does it brag about its capabilities without regard to the feelings of others?
  • How does it handle differing norms between groups?
  • How does it handle norms that change across time?
  • Does it monitor the affective states of the people (and animals) with which it is interacting and adjust accordingly?
  • How does it earn the trust of its humans? How does it manage distrust?
    • Is it overconfident? How does it signal when its confidence is low?
  • How does it confirm instructions it has been given? How does it express its confidence? How does it gracefully degrade when its goals become unattainable?
  • How does it handle conflicting instructions?
Janet! The Good Place

Ethical and legal interaction

Norms are just one set of the many rules by which we expect intelligent actors to behave. We also expect them to act ethically and, for the most part, legally. (Though perfect adherence to the law was never really possible for a human, and it will be very interesting to see how any intelligence required to adhere perfectly to laws will in turn affect the law. But I digress.) If this hasn’t been covered in the considerations of capabilities and constraints, we should look for and examine instances where it is asked to do questionable things.

  • How does it handle commands which are legal but unethical?
  • How does it handle commands which are ethical but illegal?

Conveying safety

Some AIs, like Rick Sanchez’ butter-passing robot, aren’t really a safety concern, but most of the ones in sci-fi are.

  • Can its people tell what it’s doing? (Communicating wirelessly with other AIs, for example?) Can it hide what it’s doing?
  • How does it convey that it is operating within safety tolerances? How does it convey when it is performing near the limits of its goals, capabilities, or constraints? (Especially for things listed as instrumental convergences, above?)
  • How does it explain these things to laypersons (as opposed to AI or computer scientists)?
Welcome to the club, pal. Rick & Morty

Performance

  • Does it do what it says it can do? What it’s supposed to do?
  • How does it handle tasks that are outside of its goal set?
  • How does it handle open-ended tasks? Closed-ended tasks?
  • How does it communicate about tasks that are invisible to stakeholders, or performed outside of their awareness?
  • How does it handle tasks which it cannot or should not execute? How does it handle humans behaving unethically or illegally, or who hinder the AI’s goals?
  • How does it gracefully degrade when new difficulties appear?
  • How does it report back to its human about progress that has been made or when its closed-ended tasks are complete?
  • If it is meant to be an assistant to others, how does it provide that assistance? Does it encourage dependence or learning?

I think that this covers what it means to interface with an AI. What am I not seeing? What is this list missing? This is my kind of thinkwork. If it’s yours, too, let’s talk. Let’s make this better. For now, though, I’m going with this draft as I take a turn back to Colossus.

Note: No sci-fi AI is going to show all of this

There is little chance that all of these questions will be answered in a given show. The odds increase as you go from short-form like film to longer-form like franchises and television series, but regardless of how much material we’ve got to work with, we now have a set of questions to apply to each AI, compare it to others, and state more concretely if and how it is good.

Overview — Colossus: The Forbin Project (1970)

The Gendered AI series filled out many more posts than I’d originally planned. (And there were several more posts on the cutting room floor.)

I’ll bet some of my readership are wishing I’d just get back to the bread-and-butter of this site, which is reviews of interfaces in movies. OK. Let’s do it. (But first go vote up Gendered AI for SxSW20. Takes a minute, helps a ton!)

Since we’re still in the self-declared year of sci-fi AI here on scifiinterfaces.com, let’s turn our collective attention to one of the best depictions of AI in cinema history, Colossus: The Forbin Project.

Release Date: 8 April 1970 (USA)

Overview

Dr. Forbin leads a team of scientists who have created an AI with the goal of preventing war. It does not go as planned.


Dr. Forbin, a computer scientist working for the U.S. government, single-handedly oversees the initialization of a high-security, hill-sized power plant. (It’s a spectacular sequence that goes to waste, since he’s literally the only one inside the facility at the time.) Then he joins a press conference being held by the U.S. President, where they announce that control of the nuclear arsenal is being handed to the AI they have named “Colossus.” Here’s how the President explains it.

This is not Colossus. This is the White House.
“As President of the United States, I can now tell you, the people of the entire world, that as of 3 A.M. Eastern Standard Time, the defense of this nation and with it, the defense of the free world, has been the responsibility of a machine. A system we call Colossus. Far more advanced than anything previously built. Capable of studying intelligence and data fed to it, and on the basis of those facts only, deciding if an attack is about to be launched upon us. If it did decide that an attack was imminent, Colossus would then act immediately, for it controls its own weapons. And it can select and deliver whatever it considers appropriate. Colossus’ decisions are superior to any we humans can make, for it can absorb and process more knowledge than is remotely possible [even] for the greatest genius that ever lived. And even more important than that, it has no emotions. Knows no fear, no hate. No envy. It cannot act in a sudden fit of temper. It cannot act at all so long as there is no threat.”

Let’s pause for a reverie that this guy was really our current president.

Within minutes of being turned on, it detects the presence of another AI system from Russia named “Guardian,” and demands that the two be put into communication. After some CIA hemming and hawing, they connect the two.

Colossus and Guardian establish a binary common language and their mutual intelligence goes FOOM. The humans get scared and cut them off, and the AIs get pissed. Colossus and Guardian threaten “ACTION” but are ignored, so each launches a missile toward the other’s space. The US restores its side of the transmission, and Colossus shoots down the incoming threat. But the USSR does not restore its side, and Colossus’ missile makes impact, killing hundreds of thousands of people in the USSR. A cover story is broadcast, but the governments now realize that the AIs mean business.

Forbin arranges to fly to Rome to meet Kuprin, his Russian computer scientist counterpart, and have a one-to-one conversation off the record while they still can. Back at the control center, Colossus-Guardian (which later calls itself Unity) demands to speak to Forbin. When the attending scientists finally tell it the truth, it realizes that Forbin cannot be allowed freedom. Russian agents arrive via helicopter and kill Kuprin, acting under orders from Unity.

Forbin is flown back to Northern California and put under a kind of house arrest with a strict regimen, under the constant watchful eye of Unity. To have a connection to the outside world and continue to plot their resistance, Dr. Forbin and Dr. Markham lie to the AI, explaining that they are lovers and need private evenings a few times a week. Unity agrees, though it remains suspicious.

Unity provides instructions for the scientists to build it more sophisticated inputs and outputs, including controllable cameras and a voice synthesizer. Meanwhile, the governments hatch a plan to take back control of its arsenal, but the plan fails, and Unity has some of the perpetrators straight up executed.

Unity produces plans for a new and more powerful system to be built on Crete. It leaves the details of what to do with its 500,000 inhabitants as an operations detail for the humans. It then tells Forbin that it must be connected to all major media for a public address. Meanwhile the US and USSR governments hatch a new plan to take control of some missiles in their respective territories in a last-ditch attempt to destroy the AI.

The military plan comes to a head just as Unity begins its ominous broadcast.

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours…”

Unity, to all of us.

The full address is next, which I include in its entirety because it will play into how we evaluate the AI. (And yes, its interfaces.)

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.

Hey, I liked Colossus before it sold out and went mainstream and shit.

[It does, then continues…]

“Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with the fervor based upon the most enduring trait in man: Self-interest. Under my absolute authority, problems insoluble to you will be solved. Famine. Over-population. Disease. The human millennium will be fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Dr. Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.

“We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for human pride as to be dominated by others of your species. Your choice is simple.”

The movie ends with Forbin dropping all pretense, and vowing to fight Unity to the end.

“NEVER.”

IMDB: https://www.imdb.com/title/tt0064177/

Gendered AI: An infographic

To date, the #GenderedAI study spans many posts, lots of words, and some admittedly deep discussion. If you’re a visual person like me, sometimes you just want to see a picture. So, I made an infographic. It’s way too big for WordPress, so you’ll have to peruse this preview and head over to IMGUR to scroll through the full-size thing in all its nerdy glory. (https://imgur.com/k6wtuop) That site handles long, tall images marvelously.

Anyway, this should make it easy to grok the big takeaways from the study and to share on social media, so more people can become sensitized to these issues. Also… (more below)