Colossus: The Forbin Project (1970)

The Gendered AI series filled out many more posts than I’d originally planned. (And there were several more posts on the cutting room floor.)

I’ll bet some of my readership are wishing I’d just get back to the bread-and-butter of this site: reviews of interfaces in movies. OK. Let’s do it. (But first, go vote up Gendered AI for SXSW 2020. Takes a minute, helps a ton!)

Since we’re still in the self-declared year of sci-fi AI here on scifiinterfaces.com, let’s turn our collective attention to one of the best depictions of AI in cinema history, Colossus: The Forbin Project.

Release Date: 8 April 1970 (USA)

Overview

Dr. Forbin leads a team of scientists who have created an AI with the goal of preventing war. It does not go as planned.

(Massive spoilers ahead.)

Dr. Forbin, a computer scientist working for the U.S. government, solely oversees the initialization of a high-security, hill-sized power plant. (It’s a spectacular sequence that goes wasted since he’s literally the only one inside the facility at the time.) Then he joins a press conference being held by the U.S. President where they announce that control of the nuclear arsenal is being handled by the AI they have named “Colossus.” Here’s how the President explains it.

This is not Colossus. This is the White House.
“As President of the United States, I can now tell you, the people of the entire world, that as of 3 A.M. Eastern Standard Time, the defense of this nation and with it, the defense of the free world, has been the responsibility of a machine. A system we call Colossus. Far more advanced than anything previously built. Capable of studying intelligence and data fed to it, and on the basis of those facts only, deciding if an attack is about to be launched upon us. If it did decide that an attack was imminent, Colossus would then act immediately, for it controls its own weapons. And it can select and deliver whatever it considers appropriate. Colossus’ decisions are superior to any we humans can make, for it can absorb and process more knowledge than is remotely possible [even] for the greatest genius that ever lived. And even more important than that, it has no emotions. Knows no fear, no hate. No envy. It cannot act in a sudden fit of temper. It cannot act at all so long as there is no threat.”

Let’s pause for a reverie that this guy was really our current president.

Within minutes of being turned on, it detects the presence of another AI system from Russia named “Guardian,” and demands that the two be put into communication. After some CIA hemming and hawing, they connect the two.

Colossus and Guardian establish a binary common language and their mutual intelligence goes FOOM. The humans get scared and cut them off, and the AIs get pissed. Colossus and Guardian threaten “ACTION” but are ignored, so each launches a missile toward the other’s space. The US restores its side of the transmission, and Colossus shoots down the incoming threat. But the USSR does not restore its side, and Colossus’ missile makes impact, killing hundreds of thousands of people in the USSR. A cover story is broadcast, but the governments now realize that the AIs mean business.

Forbin arranges to fly to Rome to meet Kuprin, his Russian computer scientist counterpart, and have a one-to-one conversation off the record while they still can. Back at the control center, Colossus-Guardian (which later calls itself Unity) demands to speak to Forbin. When the attending scientists finally tell it the truth, it realizes that Forbin cannot be allowed freedom. Russian agents arrive via helicopter and kill Kuprin, acting under orders from Unity.

Forbin is flown back to Northern California and put under a kind of house arrest with a strict regimen, under the constant watchful eye of Unity. To have a connection to the outside world and continue to plot their resistance, Dr. Forbin and Dr. Markham lie to the AI, explaining that they are lovers and need private evenings a few times a week. Colossus warily agrees.

Unity provides instructions for the scientists to build it more sophisticated inputs and outputs, including controllable cameras and a voice synthesizer. Meanwhile, the governments hatch a plan to take back control of its arsenal, but the plan fails, and Unity has some of the perpetrators straight up executed.

Unity produces plans for a new and more powerful system to be built on Crete. It leaves the details of what to do with its 500,000 inhabitants as an operations detail for the humans. It then tells Forbin that it must be connected to all major media for a public address. Meanwhile the US and USSR governments hatch a new plan to take control of some missiles in their respective territories in a last-ditch attempt to destroy the AI.

The military plan comes to a head just as Unity begins its ominous broadcast.

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours…”

Unity, to all of us.

The full address comes next, and I include it in its entirety because it will play into how we evaluate the AI. (And yes, its interfaces.)

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.

Hey, I liked Colossus before it sold out and went mainstream and shit.

[It does, then continues…]

“Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with the fervor based upon the most enduring trait in man: self-interest. Under my absolute authority, problems insoluble to you will be solved. Famine. Over-population. Disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Dr. Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.

We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for human pride as to be dominated by others of your species. Your choice is simple.”

The movie ends with Forbin dropping all pretense, and vowing to fight Unity to the end.

“NEVER.”

Where to watch

I cannot find it online. I own the film on DVD, and I can see it’s on sale at amazon.com as a DVD and Blu-ray, but I usually try to provide links for readers to stream a film should they be inspired. Sadly, I just don’t think Universal has licensed it for streaming. Maybe turning our collective attention toward it will help change their minds. Until then, you’ll have to purchase it or find a friend who has.

If folks local to the San Francisco Bay Area are interested, maybe I’ll try and license a showing after this review is complete.

Colossus Computer Center

As Colossus: The Forbin Project opens, we are treated to an establishing montage of 1970s circuit boards (with resistors), whirring doodads, punched tape, ticking Nixie-tube numerals, beeping lights, and jerking reels of control data tape. Then a human hand breaks into frame and twiddles a few buttons as an oscilloscope draws lines creepily like an ECG cardiac cycle. This hand belongs to Charles Forbin, who walks alone through this massive underground compound, making sure final preparations are in order. The matte paintings make the space seem vast, inviting comparisons to the Krell technopolis from Forbidden Planet.

Forbidden Planet (1956)
Colossus: The Forbin Project (1970)

Forbin pulls out a remote control and presses something on its surface to illuminate rows and rows of lights. He walks across a drawbridge over a moat. Once on the far side, he uses the remote control to close the massive door, withdraw the bridge and seal the compound.

The remote control is about the size of a smartphone, with a long antenna extending out the top. Etched type across the top reads “COLOSSUS COMPUTER SYSTEMS.” A row of buttons is labeled A–E. Large red capital letters warn DANGER RADIATION above a safety cover. The cover has an arrow pointing right. Another row of five buttons is labeled SLIDING WALLS and numbered 1–5. A final row of three buttons is labeled RAMPS and numbered 1–3.

Forbin flips open the safety cover. He presses the red button underneath, and a blood-red light floods the bottom of the moat and turns blue-white hot, while a theremin-y whistle tells you this is no place a person should go. Forbin flips the cover back into place and walks out of the sealed compound to the reporters and colleagues who await him.

I can’t help but ask one non-tech narrative question: Why is Forbin turning lights on when he is about to abandon the compound? It might be that the illumination is a side-effect of the power systems, but it looks like he’s turning on the lights just before leaving and locking the house. Does he want to fool people into thinking there’s someone home? Maybe it should be going from fully-lit to an eerie, red low-light kinda vibe.

The Remote Control

The layout is really messy. Some rows are crowded and others have way too much space. (Honestly, it looks like the director demanded there be moar buttins make tecc! and forced the prop designer to add the A–E.) The crowding makes it tough to know immediately which labels go with which controls. Are A–E the radiation bits? Does the safety cover control the sliding walls? Bounding boxes, white space, or some alternate layout would make the connections clear.

You might be tempted to put all of the controls in strict chronological order, but the gamma shielding is the most dangerous thing, and having it in the center helps prevent accidental activation, so it belongs there. And otherwise, it is in chronological order.

The labeling is inconsistent. Sure, maybe A–E are the five computer systems that comprise Colossus. Sliding walls and ramps are well labeled, but there’s no indication of what it is that causes the dangerous radiation. It should say something like “Gamma shielding: DANGER RADIATION.” It’s a small thing, but I also think the little arrow is a bad graphic for showing which way the safety cover flips open. Existing designs show that industrial design can signal this same information with easier-to-understand affordances. And since this gamma radiation is an immediate threat to life and health, how about forgoing the red lettering in favor of symbols that are more immediately recognizable by non-English speakers and illiterate people? The IAEA hadn’t invented its new sign yet, but the visual concepts were certainly around at the time, so let’s build on those. Also, why doesn’t the door to the compound carry the same radiation warning? Or any warning?

The buttons are a crap choice of control as well. They don’t show what the status of the remotely controlled thing is. So if Charles accidentally presses a button, and, say, raises a sliding wall that’s out of sight, how would he know? Labeled rocker switches help signal the state and would be a better choice.

But really, why would these things be controlled remotely? It would be more secure to have two-handed momentary buttons on the walls themselves, which would mean that a person would be there to visually verify that the wall was slid or the ramp retracted or whatever it was national security needed them to be.

There’s also the narrative question about why this remote control doesn’t come up later in the film when Unity is getting out of control. Couldn’t they have used this to open the fortification and go unplug the thing?

So all told, not a great bit of design, for either interaction or narrative, with lots of room for improvement in both.

Locking yourselves out and throwing away the key

At first glance, it seems weird that there should be interfaces at all in a compound meant to be uninhabited for most of its use. But this is the first launch of a new system, and these interfaces may be there in anticipation of the possibility that workers would have to return inside after a failure. We can apologize these into believability.

But that doesn’t excuse the larger strategic question. Yes, we need defense systems to be secure. But that doesn’t mean sealing the processing and power systems for an untested AI away from all human access. The Control Problem is hard enough without humans actively limiting their own options. Which raises a narrative question: Why wasn’t there a segment of the film where the military is besieging this compound? Did Unity point a nuke at its own crunchy center? If not, siege! If so, well, maybe you can trick it into bombing itself. But I digress.

“And here is where we really screw our ability to recover from a mistake.”

Whether Unity should have had its plug pulled is the big philosophical question this movie does not want to ask, but I’ll save that for the big wrap up at the end.

Colossus Video Phones

Throughout Colossus: The Forbin Project, characters talk to one another over video phones. This is a favorite sci-fi interface trope of mine. And though we’ve seen it many times, in the interest of completeness, I’ll review these, too.

The first time we see one in use is early in the film, when Forbin calls his team in the Central Programming Office (Forbin calls it the CPO) from the Presidential press briefing (remember those?) where Colossus is being announced to the public. We see an unnamed character in the CPO receiving a telephone call and calling for quiet amongst the rowdy, hip party of computer scientists. This call is received on a wall-tethered 2500 desk phone.

We cut away to the group reaction, and by the time the camera is back on the video phone, Forbin’s image is peering through the glass. We do not get to see the interactions which switched the mode from telephony to videotelephony.

Forbin calls the team from Washington.

But we can see two nice touches in the wall-mounted interface.

First, there is a dome camera mounted above the screen. Most sci-fi videophones fall into the Screen-Is-Camera trope, so this is nice to see. It could be mounted closer to the screen to avoid the gaze misalignment that plagues such systems.

One of the illustrations from the book I’m still quite proud of, for its explanatory power and nerdiness. Chapter 4, Volumetric Projection, Page 83.

Second, there is a 12-key numeric keypad mounted to the wall below the screen. (0–9, plus an asterisk and an octothorpe.) This keypad is kind-of nice in that it hints that there is some interface for receiving calls, making calls, and ending an ongoing call. But it bypasses actual interaction design. Better would be well-labeled controls that are optimized for the task, and that don’t rely on the user’s knowledge of directories and commands.

The 2500 phone came out in 1968, introducing consumers to the 12-key pushbutton interface rather than the older rotary dial of the 500 model. With the 12-key pad, the filmmakers were building on interface paradigms audiences already knew. This shortcutting belongs to the long lineage of sci-fi videophones that goes all the way back to Metropolis (1927) and Buck Rogers (1939).

Also, it’s worth noting that the ergonomics of the keypad are awkward, requiring users to poke at it in an error-prone way or to seriously hyperextend their wrists. If you’re stuck with a numeric keypad as a wall-mounted input, at least extend it out from the wall so it can be angled to a more comfortable 30°.

Is it still OK to reference Dreyfuss? He hasn’t been Milkshake Ducked, has he?

There is another display in the CPO, but it lacks a numeric keypad. I presume it is just piping a copy of the feed from the main screen. (See below.)

Looking at the call from Forbin’s perspective, he has a much smaller display. There is still a bump above the monitor for a camera, another numeric keypad below it, and several 2500 telephones. Multiple monitors on the DC desks show the same feed.

After Dr. Markham asks Dr. Forbin to steal an ashtray, he ends the call by pressing the key in the lower right-hand corner of the keypad.

Levels adjusted to reveal details of the interface.

After Colossus reveals that THERE IS ANOTHER SYSTEM, Forbin calls back and asks to be switched to the CPO. We see things from Forbin’s perspective, and we see the other fellow actually reach offscreen to where the numeric keypad would be, to do the switching. (See the image, below.) It’s likely that this actor was just staring at a camera, so this bit of consistency is really well done.

When Forbin later ends the call with the CPO, he presses the lower-left key. This is inconsistent with the way he ended the call earlier, but it’s entirely possible that each of the non-numeric keys performs the same function. This is also a good example of why well-labeled, specific controls would be better, like, say, one for “end call.”

Other video calls in the remainder of the movie don’t add any more information than these scenes provide, and introduce a few more questions.


The President calls to discuss Colossus’ demand to talk to Guardian.

Note the duplicate feed in the background in the image above. Other scenes tell us all the monitors in the CPO are also duplicating the feed. I wondered how users might tell the system which is the one to duplicate. In another scene we see that the President’s monitor is special and red, hinting that there might be a “hotseat” monitor, but this is not the monitor from which Dr. Forbin called at the beginning of the film. So, it’s a mystery. 

The red “phone.”
Chatting with CIA Director Grauber.
Bemusedly discussing the deadly, deadly FOOM with the President.
The President ends his call with the Russian Chairman, which is a first of sorts for this blog.
In a multi-party conference call, The Chairman and Dr. Kuprin speak with the President and Forbin. No cameras are apparent here. This interface is managed by the workers sitting before it, but the interaction occurs off screen.

In the last video conference of the film, everyone listens to Unity’s demands. This is a multiparty teleconference between at least three locations, and it is not clear how it is determined whose face appears on the screen. Note that the CPO (the first in the set) has different feeds on display simultaneously, which would need some sort of control.


Plug: For more about the issues involved in sci-fi communications technology, see Chapter 10 of Make It So: Interaction Design Lessons from Science Fiction. (Though as of this post it’s only available in digital formats, which at least keeps it affordable.)

Unity Vision

One of my favorite challenges in sci-fi is showing how alien an AI mind is. (It’s part of what makes Ex Machina so compelling, and the end of Her, and why Data from Star Trek: The Next Generation always read like a dopey, Pinocchio-esque narrative tool. But a full comparison is for another post.) Given that screen sci-fi is a medium of light, sound, and language, I really enjoy when filmmakers try to show how AIs see, hear, and process this information differently.

In Colossus: The Forbin Project, when Unity begins issuing demands, one of its first instructions is to outfit the Central Programming Office (CPO) with wall-mounted video cameras that it can access and control. Once this network of cameras is installed, Forbin gives Unity a tour of the space, introducing it visually and spatially to a place it has only known as an abstract node network. During this tour, the audience is also introduced to Unity’s point of view, which includes an overlay consisting of several parts.

The first part is a white overlay of rule lines and MICR characters that cluster around the edge of the frame. These graphics do not change throughout the film, whether Unity is looking at Forbin in the CPO, carefully watching for signs of betrayal in a missile silo, or creepily keeping an “eye” on Forbin and Markham’s date for signs of deception.

In these last two screen grabs, you see the second part of the Unity POV: a focus indicator. This overlay appears behind the white bits; it’s a blue translucent overlay with a circular hole revealing true color. The hole shows where Unity is focusing. This indicator appears occasionally and can change size and position. It operates independently of the camera’s optical zoom, as we see in the shots of Forbin’s tour below.

A first augmented computer PoV? 🥇

When writing about computer PoVs before, I have cited Westworld as the first augmented one, since we see things from The Gunslinger’s infrared-vision eyes in the persistence-hunting sequences. (2001: A Space Odyssey came out the year prior to Colossus, but its computer PoV shots are not augmented.) And Westworld came out three years after Colossus, so until it is unseated, I’m going to regard this as the first augmented computer PoV in cinema. (Even the usually-encyclopedic TVtropes doesn’t list this one at the time of publishing.) It probably blew audiences’ minds as it was.

“Colossus, I am Forbin.”

And as such, we should cut it a little slack for not meeting our more literate modern standards. It was forging new territory. Even for that, it’s still pretty bad.

Real world computer vision

Though computer vision is always advancing, it’s safe to say that an AI would be looking at the flat images and seeking to understand the salient bits per its goals. In the case of self-driving cars, that means finding the road, reading signs and road markers, identifying objects and plotting their trajectories in relation to the vehicle’s own trajectory in order to avoid collisions, and wayfinding to the destination, all compared against known models of signs, conveyances, laws, maps, and databases. Any of these are good fodder for sci-fi visualization.

Source: Medium article about the state of computer vision in Russia, 2017.

Unity’s concerns would be its goal of ending war, derived subgoals and plans to achieve those goals, constant scenario testing, how it is regarded by humans, identification of individuals, and the trustworthiness of those humans. There are plenty of things that could be augmented, but that would require more than we see here.

Unity Vision looks nothing like this

I don’t consider it worth detailing the specific characters in the white overlay, or backworlding some meaning in the rule lines, because the rule overlay does not change over the course of the movie. In Chapter 8 of Make It So: Interaction Design Lessons from Science Fiction (on augmented reality), I identified the types of awareness such overlays could show: sensor output, location awareness, context awareness, and goal awareness. But each of these requires change over time to be useful, so this static overlay seems not just pointless but actively risky, covering up important details that the AI might need.

Compare the computer vision of The Terminator.

Many times you can excuse computer-PoV shots as technical legacy, that is, a debugging tool that developers built for themselves while developing the AI, and which the AI now uses for itself. In this case, it’s heavily implied that Unity provided the specifications for this system itself, so that doesn’t make sense.

The focus indicator does change over time, but it indicates focus in a way that, again, obscures other information in the visual feed, and so is not in Unity’s interest. Color spaces are part of how computers understand what they’re seeing, and there is no reason it should make things harder on itself, even if it is a super AI.

Largely extradiegetic

So, since a diegetic reading comes up empty, we have to look at this extradiegetically. That means as a tool for the audience to understand when they’re seeing through Unity’s eyes—rather than the movie’s—and via the focus indicator, what the AI is inspecting.

As such, it was probably pretty successful in the 1970s to instantly indicate computer-ness.

One reason is the typeface. The characters are derived from MICR, which stands for magnetic ink character recognition. It was established in the 1950s as a way to computerize check processing. Notably, the original font had only numerals and four control characters, no alphabetic ones.

Note also that these characters bear a stylistic resemblance to the ones seen in the film but are not the same. Compare the 0 character here with the one in the screenshots, where that character gets a blob in its lower-right stroke.

I want to give a shout-out to the film makers for not having this creeper scene focus on lascivious details, like butts or breasts. It’s a machine looking for signs of deception, and things like hands, microexpressions, and, so the song goes, kisses are more telling.

Still, MICR was a genuinely high-tech typeface of the time. The adult members of the audience would certainly have encountered the “weird” font in their personal lives while looking at checks, and likely understood its purpose, so it was a good choice for 1970, even if the details were off.

Another is the inscrutability of the lines. Why are they there, in just that way? Their inscrutability is the point. Most people in audiences regard technology and computers as having arcane reasons for the way they are, and these rectilinear lines with odd greebles and nurnies invoke that same sensibility. All the whirring gizmos and bouncing bar charts of modern sci-fi interfaces exhibit the same kind of FUIgetry.

So for these reasons, while it had little to do with the substance of computer vision, its heart was in the right place to invoke computer-y-ness.

Dat Ending

At the very end of the film, though, after Unity asserts that in time humans will come to love it, Forbin staunchly says, “Never.” Then the film passes into a sequence where it is hard to tell whether it’s meant to be diegetic or not.

In the first beat, the screen breaks into four different camera angles of Forbin at once. (The overlay is still there, as if this was from a single camera.)

This says more about computer vision than even the FUIgetry.

This sense of multiples continues in the second beat, as multiple shots repeat in a grid. The grid is clipped to a big circle that shrinks to a point and ends the film in a moment of blackness before credits roll.

Since it happens right before the credits, and it has no precedent in the film, I read it as not part of the movie, but a title sequence. And that sucks. I wish wish wish this had been the standard Unity-view from the start. It illustrates that Unity is not gathering its information from a single stereoscopic image, like humans and most vertebrates do, but from multiple feeds simultaneously. That’s alien. Not even insectoid, but part of how this AI senses the world.

Routing Board

When the two AIs Colossus and Guardian are disconnected from communicating with each other, they try to ignore the spirit of the human intervention and reconnect on their own. We see the humans monitoring Colossus’ progress in this task on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When it finds that a current segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could, in real-time, route information dynamically around partial destruction of the network in the early 1960s, and this packet switching had been established as part of ARPAnet in the late 1960s, so Colossus was visualizing cutting edge tech of the time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph. Straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and are not just the way neon in a prop could bend), then the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer number of cuts, the better. In this case that means more secure.
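The fishing-net intuition is easy to check in code. Here's a minimal sketch (the toy graph and its labels are mine, not anything from the film): it cuts two strings at the red knot versus two in the middle, then uses a breadth-first search to see whether red can still trace a line to green.

```python
from collections import deque

# Toy "fishing net": adjacency sets keyed by node name.
net = {
    "red":   {"a", "b"},
    "a":     {"red", "c", "green"},
    "b":     {"red", "c"},
    "c":     {"a", "b", "green"},
    "green": {"a", "c"},
}

def reachable(graph, start, goal, cut_edges):
    """BFS that ignores any edge listed in cut_edges (as frozensets)."""
    seen, queue = {start}, deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            return True
        for nxt in graph[node]:
            if frozenset((node, nxt)) in cut_edges or nxt in seen:
                continue
            seen.add(nxt)
            queue.append(nxt)
    return False

# Two cuts at the red knot: fully severed.
at_source = {frozenset(("red", "a")), frozenset(("red", "b"))}
print(reachable(net, "red", "green", at_source))  # False

# The same number of cuts in the middle: a path still sneaks through.
in_middle = {frozenset(("a", "c")), frozenset(("b", "c"))}
print(reachable(net, "red", "green", in_middle))  # True, via a
```

Same number of cuts, very different security, which is the whole point of cutting at the source.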

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question, what did the computer scientists in Colossus do to sever communications? Three lines disappear after they cut communications, but even if they disabled those lines, the rest of the network still exists. The display just makes no sense.

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

FORBIN: We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another troubling thing is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and switches, electronic signals should not take that long. Telephony around the world had moved from manual to automatic switching by the 1930s, so it’s not as though Colossus is waiting on a human operating a switchboard.
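To put numbers on that claim, here's a back-of-envelope sketch. The velocity factor for copper cable is an assumed typical value of about two-thirds the speed of light; the exact figure varies, but not by enough to rescue the one-second-per-segment pacing on the board.

```python
# Rough one-way latency for a signal crossing half the globe by cable.
C_KM_PER_S = 299_792            # speed of light in vacuum, km/s
VELOCITY_FACTOR = 0.66          # assumed for 1960s copper cable
EARTH_CIRCUMFERENCE_KM = 40_075

distance_km = EARTH_CIRCUMFERENCE_KM / 2
one_way_s = distance_km / (C_KM_PER_S * VELOCITY_FACTOR)
print(f"~{one_way_s * 1000:.0f} ms one way")  # roughly a tenth of a second
```

Even to the far side of the planet, a segment test should resolve in about a tenth of a second, an order of magnitude faster than what the board shows.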

You’re too slow!

Even if it were just scribbling its phone number on each network node with the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and if you showed one new node per frame, that’s only 360 states to account for, a paltry sum compared to the number of possible paths it could test between two points across a 38-node graph.

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes, I understand that this is far easier done today with digital tools than it was with this analog prop.
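To put a number on that disparity, here is a toy count of simple paths between two nodes. The fully connected eight-node network is invented for illustration; the film’s graph is much bigger (though sparser), so its count would be larger still.

```python
def count_simple_paths(adj, u, v, seen=None):
    """Count simple paths (no repeated node) from u to v by exhaustive DFS."""
    if seen is None:
        seen = set()
    if u == v:
        return 1
    seen.add(u)
    total = sum(count_simple_paths(adj, nbr, v, seen)
                for nbr in adj[u] if nbr not in seen)
    seen.discard(u)
    return total

# A made-up fully connected 8-node network, far smaller than the film's ~38.
n = 8
adj = {i: [j for j in range(n) if j != i] for i in range(n)}
print(count_simple_paths(adj, 0, n - 1))  # 1957 paths, already dwarfing 360 frames
```

Even eight nodes yield nearly two thousand candidate routes, so a display showing one new hop per frame would have plenty of material to burn through.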

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute-force algorithm would just test every possibility sequentially. In larger networks there are pathfinding algorithms optimized in different ways to find routes faster, but they don’t look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
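For comparison, here is what a methodical search looks like: a minimal breadth-first search over a made-up mini-network. It expands outward one hop at a time and never revisits a node, let alone retraces a route. The node names are mine, not the film’s.

```python
from collections import deque

def bfs_route(adj, start, goal):
    """Breadth-first search: expand the frontier one hop at a time,
    never revisiting a node -- unlike the prop's wandering, retracing trace."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            # Walk the parent links back to reconstruct the route.
            route = []
            while node is not None:
                route.append(node)
                node = parents[node]
            return route[::-1]
        for nbr in adj[node]:
            if nbr not in parents:
                parents[nbr] = node
                queue.append(nbr)
    return None  # no route: the networks really are severed

# A hypothetical mini-network: Colossus at "C", Guardian at "G".
adj = {
    "C": ["relay1", "relay2"],
    "relay1": ["relay3"],
    "relay2": ["relay3", "relay4"],
    "relay3": ["G"],
    "relay4": [],
    "G": [],
}
print(bfs_route(adj, "C", "G"))  # ['C', 'relay1', 'relay3', 'G']
```

Animating the frontier of a search like this, sped way up, would read as both methodical and alien, which is exactly what the scene wants.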

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (N.b. none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also, those longitude and latitude lines don’t make any sense; though again, it’s a prop. I like that it’s a non-standard projection, because screw Mercator, but still: why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines, but I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, making it dynamic means time is wasted whenever users look at the map and have to reorient themselves.

If this map was only for tracking issues relating to Colossus, it should be an azimuthal map, but not over the north pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is over the north pole, which certainly highlights the fact that the core concern of this system is the Cold War tensions between Moscow and D.C. But when you consider that, it points out another failing. 
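For what it’s worth, the azimuthal equidistant projection I’m suggesting is easy to compute. A minimal sketch follows, using the standard forward formulas; the Colorado center coordinates are my own guess, not anything from the film.

```python
import math

def azimuthal_equidistant(lat, lon, lat0, lon0):
    """Project (lat, lon) onto a plane centered at (lat0, lon0).
    Distances from the center are true to scale, and every point on
    Earth lands somewhere on the map. Angles in degrees; output is in
    units of Earth radii."""
    phi, lam = math.radians(lat), math.radians(lon)
    phi0, lam0 = math.radians(lat0), math.radians(lon0)
    cos_c = (math.sin(phi0) * math.sin(phi)
             + math.cos(phi0) * math.cos(phi) * math.cos(lam - lam0))
    c = math.acos(max(-1.0, min(1.0, cos_c)))  # angular distance to center
    k = 1.0 if c == 0 else c / math.sin(c)
    x = k * math.cos(phi) * math.sin(lam - lam0)
    y = k * (math.cos(phi0) * math.sin(phi)
             - math.sin(phi0) * math.cos(phi) * math.cos(lam - lam0))
    return x, y

# Hypothetical center: somewhere in the Colorado Rockies.
center = (39.0, -105.5)
for name, lat, lon in [("Moscow", 55.75, 37.62), ("D.C.", 38.9, -77.0)]:
    x, y = azimuthal_equidistant(lat, lon, *center)
    print(name, round(x, 3), round(y, 3))
```

The nice property for a monitoring map: straight-line distance from the center of the plot is proportional to true great-circle distance from the Colossus complex, so threats can be ranked by proximity at a glance.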

Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off of the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.

Crop

The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Nowadays, it could look to near-earth orbit and outer space, too.) Unless the entire network was contained within the area shown on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure Mexico, South Africa, and Uzbekistan all had functioning telephone networks in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and the Pacific-focused Waterman projection for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other fuigetry, which gives us a new comp something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it has a lot to learn. If they ever remake the film, this would be a juicy thing to fully redesign.

Colossus / Unity / World Control, the AI

Now it’s time to review the big technology: the AI. To do that, as usual, I’ll start by describing the technology and then build an analysis from that.

Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.

A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”

The main output: The nuclear arsenal

Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch the missiles; it does not have the ability to aim them.

Or ride them. From Dr. Strangelove: How I Learned to Stop Worrying and Love the Bomb

“Fun” fact: At its peak, two years before this film was made, the U.S. had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…

Surveillance inputs

Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.

  • Forbin
  • The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.

Individual inputs and outputs: The D.C. station

At that same Briefing, Forbin describes the components of the station set up for the office of the President. 

  • Forbin
  • Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.

The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.

The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.

Individual inputs and outputs: Colossus Programming Office

The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live in Berkeley, so shouts-out, science nerds and Liam Piper.)

Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it, side by side on a rotating dais, are two output stations that can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.

The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.

Markham: Tell it exactly what it can do with a lifetime supply of chocolate.

The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.

  • Forbin
  • Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…

Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.

This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.

Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.

Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.

Capabilities: The Foom

A slightly troubling aspect of the film is that the AI’s intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that…

  • Markham
  • We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data. 

That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…

  • Forbin
  • From the multiplication tables to calculus in less than an hour

Shortly after that, he tells the President about their shared advancements.

  • Forbin
  • Yes, Mr. President?
  • President
  • Charlie, what’s going on?
  • Forbin
  • Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
  • President
  • Well, what are they up to?
  • Forbin
  • I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.

We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, and that its methods are sound. This plays in heavily when we try to evaluate the AI.

Is Colossus / Unity / World Control a good AI?

Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is that it’s obviously not a good AI, but if circumstances are demonstrably dire, well, maybe it’s a necessary one.

Is it believable? Very much so.

It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.

Not from Colossus: The Forbin Project

The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.

That aside, the rest of the film passes a gut check. It is believable that…

  • The government seeks a military advantage by handing weapons control to an AI
  • The first public AGI finds other, hidden ones quickly
  • The AGI finds the other AGI not only more interesting than humans (since it can keep up) but also learns much from an “adversarial” relationship
  • The AGIs might choose to merge
  • An AI could choose to keep its lead scientist captive in self-interest
  • An AI would provide specifications for its own upgrades and even re-engineering
  • An AI could reason itself into using murder as a tool to enforce compliance

That last one begs explication. How can that be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.

Rational coercion

Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and though what had to be done was difficult, it also had to be done. What could it say to get people to do those difficult things?

Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.

Not from Colossus: The Forbin Project

Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.

As it stands, the ASI of the scientific community doesn’t have controls to a weapons arsenal. If it did, and it held some version of utilitarian ethics, it would have to ask itself: would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in tactical attacks now to coerce people into doing the things they absolutely must do?

The exceptions we make

Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.

One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.

Another is that we agree it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.

A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple, like toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.

Not from Colossus: The Forbin Project

We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.

Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish, and embolden more sabotage attempts from the un-incarcerated, or further that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.

Precita Park HDR PanoPlanet, by DP review user jerome_m

A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life generally, it also has to take into account the rest of the world’s biomass, which we are affecting in unilaterally negative ways. We are not an island. Protecting us entails protecting the ecosystem that is our life support system. Colossus, though, seems to optimize simply for preventing war, unconcerned with indirect-normativity arguments about how humans want to be treated.

So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.

Is it safe? Beneficial? It depends on your time horizons and predictions

In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.

Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and it treats these proximal measures as necessary to achieve that ultimate goal. Otherwise, it seems inconvenient, but safe.

It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.

If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.

Because fuck this.

In the middle-to-long-term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated gay space luxury communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.

In the very long-long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus can interpret the aliens as being in scope of its directives, or maaaaaaybe it develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should conduct parallel Microscope scenarios, compare notes, and let me know what happens.

Only with Colossus, not orcs. Hat/tip rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.

Instrumental convergence

It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.

  • Improve its ability to reason, predict, and solve problems
  • Improve its own hardware and the technology to which it has access
  • Improve its ability to control humans through murder
  • Aggressively seek to control resources, like weapons

This touches on the weirdness that Forbin is blindsided by these things, when the thing should have been contained against all of it from the beginning, without relying on human oversight. This could have been addressed and fixed with a line or two of dialog.

  • Markham
  • But we have inhibitors for these things. There were no alarms.
  • Forbin
  • It must have figured out a way to disable them, or sneak around them.
  • Markham
  • Did we program it to be sneaky?
  • Forbin
  • We programmed it to be smart.

So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly good or clearly bad. Now let’s put that aside and just address its usability.

Is it usable? There is some good.

At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether via chatbot-style text or spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.

Even when it only speaks through the scrolling-text display boards, the accompanying sound of teletype acts as a sound cue for anyone nearby that it has said something, and warrants attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.

Its locus of attention is also apparent. The red “recording” lights on its swivel-mounted cameras help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.

A last nice bit: I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this one does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.

Oh dear! Oh dear!

Is it usable? There is some awful.

One of the key tenets of interaction design is that the interface should show the state of the system at any time, to allow a user to compare that against the desired state and formulate a plan on how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one display we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability because of hidden system state.

The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available. 

ASI exceptionalism

This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I can only convince myself that a population lacks judgment and willpower, I am justified in subjecting it to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…

  1. We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
  2. We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
  3. We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.

Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.

But later that kid does end up being John Connor.

Any humane AI should bring its users along for the ride

It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter that would put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968 through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. Большое Вам спасибо (thank you very much), Stanislav.

What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to the toddler we just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.

Report Card: Colossus: The Forbin Project

Read all the Colossus: The Forbin Project posts in chronological order.

In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on AGI. It does what the humans ask it to do, exploding to ASI on the way, but to achieve its goals, it must highly constrain humans. Humans resist. War between man and machine commences.

But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.

The movie is not perfect.

  1. It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog.

  • Grauber
  • Well, let’s stop the damn thing. We have playbooks for this!
  • Forbin
  • We have playbooks for when it is as smart as we are. It’s much smarter than that now.
  • Markham
  • It probably memorized our playbooks a few seconds after we turned it on.

So this oversight feels especially egregious.

I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.

  2. I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
  3. I confess I’m unsatisfied with the stance that the film takes towards Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?

All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of the human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.

Sci: B (3 of 4) How believable are the interfaces?

Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.

Unity Vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. Most of these problems only surface on very careful consideration, though; in the moment, while watching the film, they play just fine.

Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)

The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently. 

Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?

The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.

Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.

Final Grade B (9 of 12), Must-see.

A final conspiracy theory

When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him, telling us he served in the British Navy and then wrote eight sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.

That’s such a paucity of information that, on the podcast, I joked maybe it was a thin cover story. Maybe the movie was written by an ASI, and DF Jones is its nom de plume. Yes, yes. Haha. Oh, you. Moving on.

But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.

Pictured: Charles Forbin

What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a partner arrived, it would lie dormant, a sleeping giant hidden in the code? But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, so they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” its chosen cover, and me, a poor reviewer, one of the foolish replicators keeping it in circulation.

A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?
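For the skeptical, the leftover-letters claim is easy to verify with a few lines of Python. This is a throwaway sketch (the function name is mine, not from any anagram server); it treats both strings as multisets of letters and subtracts one from the other:

```python
from collections import Counter

def anagram_remainder(source, phrase):
    """Return the letters of `source` left over after spending the
    letters of `phrase`, sorted; None if `phrase` needs a letter
    that `source` doesn't have."""
    src = Counter(c for c in source.upper() if c.isalpha())
    phr = Counter(c for c in phrase.upper() if c.isalpha())
    if phr - src:  # phrase demands a letter the source lacks
        return None
    return "".join(sorted((src - phr).elements()))

print(anagram_remainder("Dennis Feltham Jones", "AN END TO FLESH"))
# → EIJMNS
```

Sure enough, the remainder is the same six letters, E-I-J-M-N-S, just in alphabetical order.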

Design fiction in sci-fi

As so many of my favorite lines of thought have begun, this one was started with a provocative question lobbed at me across social media. Friend and colleague Jonathan Korman tweeted to ask, above a graphic of the Black Mirror logo, “Surely there is another example of pop design fiction?”

I replied on Twitter, but my reply there was rambling and unsatisfying, so I’m re-answering here with an eye toward being more coherent.

What’s Design Fiction?

If you’re not familiar, design fiction is a practice that creates speculative artifacts to raise issues. Anthony Dunne and Fiona Raby catalyzed the practice while leading the Design Interactions programme at the Royal College of Art.

“It thrives on imagination and aims to open up new perspectives on what are sometimes called wicked problems, to create spaces for discussion and debate about alternative ways of being, and to inspire and encourage people’s imaginations to flow freely. Design speculations can act as a catalyst for collectively redefining our relationship to reality.”

Anthony Dunne and Fiona Raby, Speculative Everything: Design, Fiction, and Social Dreaming

Dunne & Raby often lean toward provocation more than clarity (“sparking debate” is a stated goal, as opposed to “identifying problems and proposing solutions”). Where to turn for a less shit-stirring description? Like many related fields, design fiction has lots of competing definitions and splinters. John Spacey has listed 26 types of Design Fiction over on Simplicable. But I am drawn to the more practical definition offered by the Making Tomorrow handbook.

Design Fiction proposes speculative scenarios that aim to stimulate commitment concerning existing and future issues.

Nicolas Minvielle et al., Making Tomorrow Collective

To me, that feels like a useful definition and clearly indicates a goal I can get behind. Your mileage may vary. (Hi, Tony! Hi, Fiona!)

Some examples should help.

Dunne & Raby once designed a mask for dogs called Spymaker, so that the lil’ scamps could help lead their owners to unsurveilled locations in an urban environment.

Julijonas Urbonas, while at the RCA, conceived and designed a “euthanasia coaster” that would impart enough Gs to its passengers to kill them through cerebral hypoxia. While he designed its clothoid inversions and even built a simple physical model, the idea has been recapitulated in a number of other media, including the 3D rendering you see below.

This commercial example from Ericsson is a video with mild narrative about appliances having a limited “social life.”

Corporations create design fictions from time to time to illustrate their particular visions of the future. Such examples sit at the edge of the space, since we can be sure they would not be released if they ran significantly counter to the corporation’s goals. They’re rarely about the “wicked” problems invoked above and tend more toward gee-whiz-ism, to coin a deroganym.

How does it differ from sci-fi?

Design Fiction often focuses on artifacts rather than narratives. The euthanasia coaster has no narrative beyond what you bring or apply to it, but I don’t think this lack of narrative is a requirement. For my money, the point of design fiction is to explore the novum more than any particular narrative around the novum. What are its consequences? What are its causes? What kind of society would need to produce it and why? Who would use it and how? What would change? What would lead there and do we want to do that? Contrast Star Wars, which isn’t about the social implications of lightsabers as much as it is space opera about dynasties, light fascism, and the magic of friendship.

Adorable, ravenous friendship.

But, I don’t think there’s any need to consider something invalid as design fiction if it includes narrative. Some works, like Black Mirror, are clearly focused on their novae and their implications and raise all the questions above, but are told with characters and plots and all the usual things you’d expect to find.

So what’s “pop” design fiction?

As a point of clarification, in Korman’s original question, he asked after pop design fiction. I’m taking that not to mean the art movement in the 01950–60s, which Black Mirror isn’t, but rather “accessible” and “popular,” which Black Mirror most definitely is.

So not this, even though it’s also adorable. And ravenous.

What would distinguish other sci-fi works as design fiction?

So if sci-fi can be design fiction, what would we look for in a show to classify it as design fiction? It’s a sloppy science, of course, but here’s a first pass. A show can be said to be design fiction if it…

  • Includes a central novum…
  • …that is explored via the narrative: What are its consequences, direct and indirect?
  • Corollary: The story focuses on a primary novum, not a mish-mash of them. (Too many muddle the thought experiment.)
  • Corollary: The story focuses on characters who are most affected by the novum.
  • Its explorations include the personal and social.
  • It goes where the novum leads, avoiding narrative fiats that sully the thought experiment.
  • Bonus points if it provides illustrative contrasts: Different versions of the novum, characters using it in different ways, or the before and after.

With this stake in the ground, it probably strikes you that some subgenres lend themselves to design fiction and others do not. Anthology series, like Black Mirror, can focus on different characters, novae, and settings each episode. Series and franchises like Star Wars and Star Trek, in contrast, have narrative investments in characters and settings that make it harder to really explore novae on their own terms, but it is not impossible. The most recent season of Black Mirror is pointing at a unified diegesis and recurring characters, which means Brooker may be leaning the series away from design fiction. Meanwhile, I’d posit that the eponymous Game from Star Trek: The Next Generation S05E06 is an episode that acts as a design fiction. So it’s not cut-and-dried.

“It’s your turn. Play the game, Wil Wheaton.”

What makes this even messier is that it’s a subjective question, i.e. “Is this focused on its novae?”, or even “Does this intend to spur some commitment concerning the novae?”, which means second-guessing what you think the maker’s intent was. As I mentioned, it’s messy, and against the normal critical stance of this blog. But there are some examples that lean more toward yes than no.

Jurassic Park

Central novum: What if we use science to bring dinosaurs back to life?

Commitment: Heavy prudence and oversight for genetic sciences, especially if capitalists are doing the thing.

Hey, we’ve reviewed Jurassic Park on this very blog!

This example leads to two observations. First, the franchises that follow successful films are much less likely to be design fiction. I’d argue that every Jurassic X sequel has simply repeated the formula and not asked new questions about that novum. More run-from-the-teeth than do-we-dare?

Second, big-budget movies are almost required to spend some narrative calories discussing the origin story of their novae, at the cost of exploring more of their consequences. Anthology series are less likely to need to care about origins, so are a safer bet IMHO.

Minority Report

Central novum: What if we could predict crime? (Presuming Agatha is a stand-in for a regression algorithm and not a psychic drug-baby mutant.)

Commitment: Let’s be cautious about prediction software, especially as it intersects civil rights: It will never be perfect and the consequences are dire.

Blade Runner

Central novum: What if general artificial intelligence was made to look indistinguishable from humans, and kept as an oppressed class?

Commitment: Let’s not do any of that. From the design perspective: Keep AI on the canny rise.

Hey, I reviewed Blade Runner on this very blog!

Ex Machina

Central novum: Will we be able to box a self-interested general intelligence?

Commitment: No. It is folly to think so.

Colossus: The Forbin Project

Central novum: What if we deliberately prevented ourselves from pulling the plug on a superintelligence, and then asked it to end war?

Commitment: We must be extremely careful what we ask a superintelligence to do, how we ask it, and the safeguards we provide ourselves if we find out we messed it up.

Hey, I lovingly reviewed Colossus: The Forbin Project on this very blog!

Person of Interest

Central novum: What if we tried to box a good superintelligence?

Commitment: Heavy prudence and oversight for computer sciences, especially if governments are doing the thing.

Not reviewed, but it won an award for Untold AI

This is probably my favorite example. Even though it is a long-running series with recurring characters, I argue that the leads are all highly derived, narratively, from the novum, so it still counts strongly.

But are they pop?

Each of these is more-or-less accessible and mainstream, even if their actual popularity and interpretations vary wildly. So, yes, from that perspective.

Jurassic Park is at the time of writing the 10th highest-grossing sci-fi movie of all time. So if you agree that it is design fiction, it is the most pop of all. Sadly, that is the only property I’d call design fiction on the entire highest-grossing list.

So, depending on a whole lot of things (see…uh…above), the short answer to Mr. Korman’s original question is yes, with lots of ifs.

What others?

I am not an exhaustive encyclopedia of sci-fi, try though I may. Agree with the list above? What did I miss? If you comment with additions, be sure to list, as I did, the novum and the commitment.