The Fritzes 2020: Audience Choice Voting

The form to cast your vote for Audience Choice is at the bottom of this post.

On 09 FEB 2020, scifiinterfaces.com will announce awards for the interfaces in 2019 science fiction feature films. An “Audience Choice” award will also be announced, determined by the results of the poll below. Which will you pick? Ideally you will have seen the movies in full, but reminder videos and summaries for each of the nominees are below.

Ad Astra

Sometime in the near future, Roy McBride heads to Mars to find his father and see if he is responsible for immense electrical surges that have been damaging the earth. His journey is troubled by murderous moon pirates, frenzied space baboons, Roy’s unexpected emotions, and the aforementioned surges.

Alita: Battle Angel

The year is 2563. After Doctor Ido revives a mysterious cyborg girl he finds in a junkyard, she discovers he is a bounty hunter of evil rogue cyborgs, and she wants to become one, too. After she finds a new superpowered body, she is able to save her friend Hugo by turning him into a cyborg as well. With his new abilities, he tries to scale a cable to the forbidden floating city of Zalem, but dies. The movie concludes with Alita committing herself to vengeance.

Avengers: Endgame

Avengers: Endgame is an indie feelgood film about a group of friends who go rock hunting together. Just kidding, of course. Endgame is the biggest box-office movie of all time, earning nearly 2.8 billion USD worldwide and bringing 11 years of filmmaking in the Marvel Cinematic Universe to a climax. The story picks up after Infinity War, in which Thanos performed “the snap” that disintegrated half of all life in the universe. Endgame sees the remaining Avengers build a time travel device in order to “undo” the snap, defeat Thanos, and along the way resolve some longstanding personal arcs.

Captive State

After an alien occupation, most of humanity falls in line with the oppressors. But not everyone. Captive State tells the story of a resistance movement bent on freeing humanity and saving the earth from ruthless alien capitalists.

High Life

High Life is certainly the most unusual film among the nominees. Convicts in the far future are sentenced to find a new energy source by traveling into a black hole. En route to their certain death, sex is forbidden, but they find release in a Holodeck-style masturbatorium called The Box. Meanwhile, there are power struggles and murders and intense sexual situations.

I Am Mother

A robot raises a child from embryo to young woman in a mysterious underground facility. As the human explores more of her world, she learns dark truths about the facility, her life, and the robot she’s come to know as Mother.

Men in Black: International

The Men in Black franchise got new life in 2019 with the release of Men in Black: International. In it, a young woman named Molly charms her way into the MIB, only to join Agent H on a mission to forestall an invasion by the hideous but beautiful race called the Hive. On the way, they uncover a mole in the organization while Molly helps H overcome a dark event from his past.

Spider-Man: Far from Home

In the second 2019 nominee movie from the MCU, Peter Parker fails to have a normal summer studying abroad in Europe. He witnesses what he thinks are elemental monsters wreaking havoc on popular tourist cities, and a new superhero named Mysterio fighting them. Over the course of the film, Parker and his Scoobys discover the terrible truth before defeating and exposing the real bad guy. In the end, Parker learns to accept Tony Stark’s legacy, and then has his secret identity rudely outed.

X-Men: Dark Phoenix

Superhero movies are not known for their restraint. Dark Phoenix starts with our mutant team rushing into space to, oh, you know, rescue some astronauts, and Jean Grey absorbing a “mysterious space force” in order to save the day. Over the course of the movie, she finds her psychic and telekinetic powers amplified, but ultimately out of her control. She is hunted by the U.S. military, an alien race called the D’Bari, a Magneto gang, and even her own team, to no avail.


Of those movies, which do you think had the best overall interfaces? Cast your vote below. To avoid flagrant ballot stuffing, you must have a Google account and be logged in to that account to cast your vote.

Voting will be open until 01 FEB 2020, 23:59 PST.

Please share this post on your social media to get the vote out! Thanks!


The Fritzes 2020: Nominees

This year scifiinterfaces is going to try something new: giving out awards for the best interfaces in movies from the prior calendar year. The timing will roughly correspond to that of the Oscars.

It’s going to be an “alpha” release version, mind you, since I don’t have a sponsor lined up, and you always have stuff to figure out the first time you try a massive project, and it’s hard to rally collaborators around a new thing. So, the “award” will be virtual, even though the honor is real. Also everything will happen via this blog and social media rather than any live stage event or anything like that. I have tried to do this on a larger scale in the past, but each time something stood in the way. Wish me luck.

The idea here is to reward and encourage excellence and maybe help readers discover what awesomeness is happening in sci-fi interfaces, without going into the full-scale, scene-by-scene critique that normally occupies this blog. The Oscars give awards for “Achievement in production design,” but this often entails much more than the very specific craft of sci-fi interfaces, which is the focus of this project.

On the name

The award will be called the Fritz, in honor of Fritz Lang, who was the first filmmaker to put realistic interfaces in a sci-fi film, specifically his 1927 film Metropolis. Lang was grappling with the larger role of technology in society, and his interfaces are wonderfully evocative and illustrative. Naming the awards after him honors his pioneering spirit and craft.

4 Awards

Sci-fi films have to answer to many masters, and rather than just give one award, I’m going to go with 4. Two of these will respect films that err towards either believability or spectacle—I believe there is often a wicked tradeoff between the two. The main award will honor films that manage that extraordinary challenge of accomplishing both. The fourth award is an audience choice, where I share all the nominees and ask readers (like you) which they think is the best.

  1. Best interfaces (overall)
  2. Best narrative
  3. Best believable
  4. Audience Choice

Perhaps in the future there will be other categories, like for shorts, student work, or television serials. But for now just these four are going to tax the resources of your lone hobbyist blogger, here. Especially as I try to keep up regular reviews.

I will have the help of a few other judges. (I’m still chatting with them now to see if they want to be named. The Academy stays anonymous, so maybe these judges should, too?) Winners will be announced the week of 09 FEB 2020.

What gets considered?

Focusing efforts on a narrower category of media helps keep the task manageable, which is important since a new program cannot rely on submissions from others. I have loosely followed the Academy’s guidelines for eligibility for feature-length films (over 40 minutes), with the exception that I also included feature-length films released on streaming media, such as those produced by Amazon and Netflix, which never saw theatrical release.

These guidelines gave the judges a list of 27 candidates to consider, and the following are the final nominees, presented in their categories in alphabetical order. Keep in mind that the nominees were selected for the quality of the interfaces in the context of the film, with other aspects of the work deliberately set aside. In other words, a film could be panned for nearly any other reason, but if its interfaces are just marvelous, it could wind up a nominee.

Nominees: Best believable

Nominees: Best narrative

Nominees: Best interfaces (overall)

Congratulations to all the nominees. Nice work.

If you’re the sort who likes to see all the nominees in a category to justify your outrage at the results, get to watching. You have a few weeks to see or re-see these before results are shared.

Stay tuned to scifiinterfaces.com here or on Twitter for more, including when and where to vote for Audience Choice.

About the critical stance of this blog

I knew I was going to piss off some fans of Blade Runner when I called the Voight-Kampff machine shit. I stand by it, but some of the discussion led me to realize I should make explicit some of the implicit guiding principles of my approach to critique (the same principles that led me to call it shit). This is that.

OMG I’ve always wanted to read art critique theory on a sci-fi blog!

I’m going to drop this into the “About this site” page on the blog, but because a majority of my readers subscribe via RSS (Hi, you) and would not see that link, I’ll also copy the content to a post.

Short answer, I’m here for constructive criticism. That is, to ask of interfaces in sci-fi movies and television shows, “Is this the best form for its purpose?” followed by “If not, what is a better form, and why?” All for the 8 reasons I’ve outlined before:

  1. Build skepticism.
  2. Farm for good ideas.
  3. Use their bad ideas.
  4. Avoid their mistakes.
  5. Practice design critique.
  6. Build literacy.
  7. Mine its blind spots.
  8. Think big.

It’s made quite complicated because…

  • These are speculative technologies depicted in fictitious tales.
  • The people making the art being reviewed may not have studied or even be interested in design for the real world.
  • The concepts of users and goals in this domain are complex: diegetic users with goals, actors using props, extradiegetic users of the film as engaging entertainment, sci-fi interface designers trying to balance believability with spectacle and narrative function, other sci-fi interface designers looking at other work in their field, writers of sci-fi trying to make a point, and designers of real-world technologies examining the design. All of these are “users” with different goals of use.
  • Narratively, an interface can serve multiple purposes: conveying plot points, telling us something about the character using it or the organization that made it, set dressing to convey science-y-ness, or even comedy.
  • Makers of sci-fi interfaces are often constrained with limited resources, paradigms of their time, conflicting directives, and intense pressures.
  • It is a fait accompli that these speculative technologies “work” in the way the script needs them to. They are not subject to ordinary forces of usability.
  • Oftentimes actors are “interacting” with blank technologies on set, and the interfaces are painted in afterward in post to fit the actor’s motions.
  • The semiotics are multilayered, increasingly self-referential, and span the whole supergenre.
  • Most of the time the audience “reads” the interfaces in real-time for their narrative purpose rather than “seeing” them and contemplating their intricacies, the way I do on this blog. (Humans are surprisingly adept at knowing when to read a thing as “for the audience” and “for the characters” and rolling with the appropriate interpretation. We just “get” Tony’s Iron HUD, even though it is an impossible thing.)
  • Many of the films I review were made in a time when “pausing” was not even possible, so the artists could reasonably expect for details to zip by unnoticed.

But honestly, that complexity is why I find it an engaging place to be working. If it were simple, it might be boring.

Those few paragraphs might be enough of an explanation, except I have invoked “New Criticism” several times on the blog and in conversations, which bears some additional detail.

So, a longer-form answer follows.


When I was studying art history as an undergraduate, we talked not just about art, but about how we talk about art, and as you might imagine, formal critique is a rich and high-minded topic. It turns out there are entire competing schools of thought, most of which end in “ism,” about what are good and useful ways to critique. Now, I wouldn’t consider myself exactly literate in critical theory. At most I have exposure to its key concepts as part of an undergrad minor, and then nearly a decade of putting it into practice here. It’s possible I’m only half-remembering what it is, and I’m missing out on key trends in modern critical theories. But the principles I’m going to attribute to it are sound, and I’ll stand by them even if I’m misattributing the source.

Here goes.

For hundreds of years prior to the mid-20th century, critique of art and literature was largely about examining the artists’ intent, circumstance, technique, and position within the artist’s body of work, as well as its contribution to the school of expression with which it was associated. It was largely about examining things from the artist’s perspective.

Historical critique: Where does this work sit in relation to Picasso’s blue period, and how did that inspire his later work or other artists?

But New Criticism rejects nearly all of this, instead looking at a thing from the perspective of a receiver [reader|watcher|hearer] of it. The work is the thing that is closely examined, not the artist nor their context. New Criticism asks: What does the thing mean to us, as an audience, now?

Picasso’s famous “Weeping Woman” painting, showing a woman, weeping, as a highly deconstructed face.
New Criticism: What on earth could this deconstructed face mean?

Let’s look at these two approaches with an example from sci-fi.

In the literally-explosive last act of Alien, Ellen Ripley wants to set the Nostromo to self-destruct to destroy the titular xenomorph, while she and Jonesy the cat take the escape pod to cuddly safety. To start the self-destruct sequence, Ripley must use a labyrinthine interface, part of which is a push-button panel with some arcane labels, including “lingha,” “yoni,” and “agaric fly.” How should we critique this?

The historical critique would examine the person who made it, how they made it, and what constraints they were working under. In this case, we would find (via the Alien Explorations blog) that the designer, Simon Deering, was reading the philosopher/occultist Helena Blavatsky’s book The Secret Doctrine, and in a pinch decided to use weird phrases from her book to fill out the “extra,” non-plot-critical buttons on the panel.

Do I need to mention this was Photoshopped? This was totally Photoshopped.

Great. We now have an answer to who made it and why. And…so what? It’s an interesting bit for trivia night or to trot out at your next cocktail party, but almost worthless for practitioners hoping to improve their craft.

(Almost worthless: The story could help sharpen one’s sense of skepticism, i.e. that just because these interfaces are cool doesn’t mean they are good models for real world design, but that’s the only takeaway I can think of.)

(And maybe a fine, semi-random way to discover new works of Russian occultism, but again, useless for design practice.)

(See, it’s complicated.)

The tag line for this very blog

Historicism is not a bad approach. Dave Addey has made quite an entertaining book and blog out of just this sort of examination. It’s even where I learned the Deering trivia in the first place. It’s just that while it’s entertaining, I don’t find this approach useful.

Better, the New Criticism argument goes, is to disregard the maker and their circumstance. Better is to try and find a diegetic reason the thing might be the way it is. Prioritize critique of the art over the artist. Sometimes this is easy. Other times you have to add a rule or idea to the diegesis to have it make sense, an act I call backworlding. Sometimes you can push through the surface of what appears broken to realize it might actually be brilliant, an act of apologetics (borrowing the term from religious philosophy). Now, staying in the diegesis, even with these techniques, isn’t always possible. Trying to connect “LINGHA” to self-destruct on the Nostromo would be a credibility-breaking stretch. But this deliberate first rejection of artist’s context and focus on the internal consistency is the “close reading” for which New Criticism is known.

The next step I take after a close reading is to make the critique useful to the reader. Sometimes that’s suggesting a design that better achieves the purposes of the work. (I did this with the Voight-Kampff machine. I did it with the Logan’s Run Circuit. How well I did is open to…critique.) Other useful moves include contrasting the design to known best practices or dark patterns in the field. Sometimes it’s acknowledging that the sci-fi interface is a great idea, and formalizing a new best practice—or Alexandrian pattern—around it. Sometimes it’s connecting the thing to other real-world designs that share similar issues.

I consider real-world designers my primary audience, partly because having done this kind of design (and hey, even won some awards for it) for decades, I can claim some authority in the space.

I do know that another component of my readership is the writers and makers of sci-fi interfaces. I count some of them as friends (Hi, you). So ideally, when I suggest a new design, I try to have it both work as a real-world model and meet the needs of the narrative. That’s not always possible, but I try. Sometimes that even means a script rewrite. I have much less authority here, since I‘ve not yet done it professionally. <hangs shingle/> But yes, I try to consider those needs, too.

What this all means is that I will reject some common complaints about my reviews:

  1. But, it’s art. (Yes, I know. That does not exempt it from critique.)
  2. The [original book|novelization|toy|wiki|expanded universe] says differently. (Don’t care. I’m not reviewing those.)
  3. The designer didn’t have this [technology|paradigm|editing capability] at the time of creation. (Not useful to us today.)
  4. The designer couldn’t have imagined you’d be looking at it this closely. (Not my concern.)
  5. Maybe the aliens have some extra sense or capability that we don’t, and that is why it works for them. (Not useful for real-world design, at least, because we don’t live in a world with these actual, sentient aliens. We almost always design for people, and so frame the critique in light of that.)
  6. But there might be a diegetic reason it’s bad. (Oftentimes there are plausible backworlding reasons why a thing might be bad—no time, no expertise, no resources on hand—but we can’t learn anything from that, so if there’s another interpretation that helps us learn, I’ll tend toward that.)
  7. This is mean. (I never attack the designer, and just address the design. If you’re looking for pure fawning, I’m just not your guy.)
  8. But this was pretty good, for its time. (That’s a historical read.)
  9. But it’s soooooo cool. (I offer skeptical critique and entertainment. There are plenty of other places to go to soak in spectacle and just be inspired.)
  10. Shhh. Just let people enjoy things. (This blog is opt-in. If you just want to enjoy these things without critical evaluation, press command-K and go elsewhere. I understand there are some charming cat videos hereabouts.)
With due respect: No. Not in this case.

Lastly, note that all of this is a general stance I take, not a set of laws to which I pedantically adhere. Sometimes an interface or speculative technology is so broken or so unusual that this approach just doesn’t work, and I have to take another tack. I’m OK with that.

I know this is long and screedy, but it should help explain where I’m coming from, and whether this blog is for you. I hope it helps.

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios while being watched carefully for telltale signs such as “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal gyrus. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing (Narrator: they aren’t), he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil, and the iris contracts. Holden sees this, gets all “OK, replicant,” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. It’s a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.
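If you’re curious how such a measurement might actually be computed, here’s a minimal sketch using OpenCV’s Hough circle transform. Everything here is an illustrative assumption—the calibration constant, the tuning parameters, even the choice of algorithm—not a claim about how a real VK machine would work.

```python
# A toy pupil-diameter estimator. Assumes a cropped grayscale image of
# the eye ("eye_gray") and a hypothetical pixels-per-millimeter camera
# calibration. A production gaze tracker would be far more robust.
import cv2

PX_PER_MM = 12.0  # invented calibration constant for the VK camera optics

def pupil_diameter_mm(eye_gray):
    """Estimate pupil diameter by fitting a circle to the pupil region."""
    blurred = cv2.medianBlur(eye_gray, 7)  # suppress noise and eyelashes
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=100,
        param1=60, param2=30, minRadius=5, maxRadius=60)
    if circles is None:
        return None  # no confident fit this frame; report nothing, don't guess
    _x, _y, radius_px = circles[0][0]  # strongest candidate circle
    return (2 * radius_px) / PX_PER_MM
```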

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
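To make the vigilance hand-off concrete, here’s a toy sketch of that lower-boundary check over a session. The threshold is an invented placeholder; presumably the real norm would come from a corpus of human test sessions.

```python
# Flag every sample that dips below a hypothetical human norm, so the
# display (not the blade runner) does the watching.
import numpy as np

HUMAN_DILATION_FLOOR_MM = 3.2  # invented lower-bound norm

def low_response_mask(diameters_mm):
    """Boolean mask of samples below the human norm."""
    return np.asarray(diameters_mm) < HUMAN_DILATION_FLOOR_MM

session = [4.1, 3.9, 3.0, 2.8, 3.6]  # toy data, one sample per second
print(np.flatnonzero(low_response_mask(session)))  # -> [2 3]
```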

Blush

I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
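The comparison logic itself is simple. A sketch, assuming we already have a mean facial skin temperature per frame from the infrared camera and a norm median for the current stimulus (all numbers invented):

```python
# True when the subject's face runs cooler than the expected blush
# response for this moment in the test—i.e., "too pale."
def blush_deficit(face_temp_c, norm_median_c, tolerance_c=0.3):
    return (norm_median_c - face_temp_c) > tolerance_c

print(blush_deficit(face_temp_c=33.1, norm_median_c=33.8))  # True: too pale
```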

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the text is not unduly influencing the results, let’s add an overlay of the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional who does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
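Under the hood, that highlight rule is just a conjunction of the two per-column flags. A toy sketch with invented data:

```python
# Gray out only the columns where both signals miss their norms at once.
import numpy as np

pupil_flags = np.array([False, False, True, True, False])
blush_flags = np.array([False, True, True, True, False])
highlight_columns = pupil_flags & blush_flags  # -> [F F T T F]
```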

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutter so many sci-fi interfaces. It’s there to be inscrutable, but convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.
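In pseudologic, the subject-facing status samples only from reassuring or ambiguous strings—never an incriminating one—no matter what the test actually measures. A tiny sketch, with the flicker probability invented:

```python
# The placebo readout: occasionally flicker to something ambiguous so it
# feels live, but never surface anything incriminating.
import random

def subject_status():
    return "AT RANGE" if random.random() < 0.1 else "WITHIN RANGE"
```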

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.

OK

Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but it does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Blade Runner (1982)

Whew. So we all waited on tenterhooks through November to see if somehow Tyrell Corporation would be founded, develop and commercialize general AI, and then advance robot evolution into the NEXUS phase, all while in the background space travel was perfected, Off-world colonies and asteroid mining were established, global warming somehow drenched Los Angeles in permanent rain and flares, and flying cars appeared on the market. None of that happened. At least not publicly. So, with Blade Runner squarely part of the paleofuture past, let’s grab our neon-tube umbrellas and head into the rain to check out this classic that features some interesting technologies and some interesting AI.

Release date: 25 Jun 1982

The punctuation-challenged crawl for the film:

“Early in the 21st Century, THE TYRELL CORPORATION advanced Robot evolution into the NEXUS phase—a being virtually identical to a human—known as a Replicant. [sic] The NEXUS 6 Replicants were superior in strength and agility, and at least equal in intelligence, to the genetic engineers who created them. Replicants were used Off-world as slave labor, in the hazardous exploration and colonisation of other planets. After a bloody mutiny by a NEXUS 6 combat team in an Off-world colony, Replicants were declared illegal on Earth—under penalty of death. Special police squads—BLADE RUNNER UNITS—had orders to shoot to kill, upon detection, any trespassing Replicants.

“This was not called execution. It was called retirement.”

Four murderous replicants make their way to Earth, to try and find a way to extend their genetically-shortened life spans. The Blade Runner named Deckard is coerced by his ex-superior Bryant and detective Gaff out of retirement and into finding and “retiring” these replicants.

Deckard meets Dr. Tyrell to interview him, and at Tyrell’s request tests Rachel on a Voight-Kampff machine, which is designed to help blade runners tell replicants from people. Deckard and Rachel learn that she is a replicant. Then with Gaff, he follows clues to the apartment of one exposed replicant, Leon, where he finds a synthetic snake scale in the bathtub and a set of photographs in a drawer. Using a sophisticated image inspection tool in his home, he scans one of the photos taken in Leon’s apartment, until he finds the reflection of a face. He prints the image to take with him.

He takes the snake scale to someone with an electron microscope who is able to read the micrometer-scale “maker’s serial number” there. He visits the maker, a person named “the Egyptian,” who tells Deckard he sold the snake to Taffey Lewis. Deckard visits Taffey’s bar, where he sees Zhora, another of the wanted replicants, perform a stage act with a snake. She matches the picture he holds. He heads backstage to talk to her in her dressing room, posing as a representative of the “American Federation of Variety Artists, Confidential Committee on Moral Abuses.” When she finishes pretending to prepare for her next act, she attacks him and flees. He chases and retires her. Leon happens to witness the killing, and attacks Deckard. Leon has the upper hand but Deckard is saved when Rachel appears from the crowd and shoots Leon in the head. They return to his apartment. They totally make out.

Meanwhile, Roy has learned of a Tyrell employee named Sebastian who does genetic design. On orders, Pris befriends Sebastian and dupes him into letting her into his apartment. She then lets Roy in. Sebastian figures out that they are replicants, but confesses he cannot help them directly. Roy intimidates Sebastian into arranging a meeting between him and Dr. Tyrell. At the meeting, Tyrell says there is nothing that can be done. In fury, Roy kills Tyrell and Sebastian.

The police investigating the scene contact Deckard with Sebastian’s address. Deckard heads there, where he finds, fights, and retires Pris. Roy is there, too, but proves too tough for Deckard to retire. Roy could kill Deckard but instead opts to die peacefully, even poetically. Witnessing this act of grace, Deckard comes to appreciate the “humanity” of the replicants, and returns home to elope with Rachel.

In the last scene, Gaff hints to Deckard with a unicorn origami figure that Deckard himself is a replicant.


P.S. This series uses “The Final Cut” edit of the movie, so I don’t have to hear that wretchedly-scripted voiceover from the theatrical release. If you can, I recommend watching that edit.


The Design of Evil

The exports from my keynote at Dark Futures.

Way back in the halcyon days of 2015 I was asked by Phil Martin and Jordan of Speculative Futures SF to make a presentation for one of their early meetings. I immediately thought of one of the chapters that I had wanted to write for Make It So: Interaction Design Lessons from Sci-Fi, but that had been cut for space reasons, and that is: How is evil (in sci-fi interfaces) designed? There were some sub-questions in the outline that went something like this.

  • What does evil look like?
  • Are there any recurring patterns we can see?
  • What are those patterns?
  • Why would they be the way they are?
  • What would we do with this information?

I made that presentation. It went well, I must say. Then I forgot about it until Nikolas Badminton of Dark Futures invited me to participate in his first-ever San Francisco edition of that meetup in November of 2019. In hindsight, maybe I should have done a reading from one of my short stories that detail dark (or very, very dark) futures, but instead, I dusted off this 45-minute presentation and cut it down to 15 minutes. That also went well, I daresay. But I figure it’s time to put these thoughts into some more formal place for a wider audience. And here we are.

Nah, they’re cool!

Wait…Evil?

That’s a loaded term, I hear you say, because you’re smart, skeptical, loathe bandying about such dehumanizing terms lightly, and relish nuance. And you’re right. If you were to ask this question outside of the domain of fiction, you’d run up against lots of problems. Most notably that—as Socrates argues in Plato’s Meno dialogue—by the time someone commits something that most people would call “evil,” they have gone through the mental gymnastics to convince themselves that whatever they’re doing is not evil. A handy example menu of such lies-to-self follows.

  • It’s horrible but necessary.
  • They deserve it.
  • The sky god is on my side.
  • It is not my decision.
  • I am helpless to stop myself.
  • The victim is subhuman.
  • It’s not really that bad.
  • I and my tribe are exceptional and not subject to norms of ethics.
  • There is no quid pro quo.

And so, we must conclude, since nobody thinks they’re evil, and most people design for themselves, no one in the real world designs for evil.

Oh well?

But the good news is that we are not outside the domain of fiction—we’re soaking in it! And in fiction, there are definitely characters and organizations who are meant to be—and be read by the audience as—evil, as the bad guys. The Empire. The First Order. Zorg! The Alliance! Norsefire! All evil, and all meant to be unambiguously so.

From V for Vendetta.

And while alien biology, costume, set, and prop design all enable creators to signal evil, this blog is about interfaces. So we’ll be looking at eeeevil interfaces.

What we find

Note that in earlier cinema and television, technology was less art directed and less branded than it is today. Even into the 1970s, art direction seemed to be trying to signal the sci-fi-ness of interfaces rather than the character of the organizations that produced them. Kubrick expertly signaled HAL’s psychopathy in 1968’s 2001: A Space Odyssey, and by the early 1980s more and more films had begun to follow suit, not just with evil AI, but with interfaces created and used by evil organizations. Nowadays I’d be surprised to find an interface in sci-fi that didn’t signal the character of its user or the source organization.

Evil interfaces, circa Buck Rogers (1939).

Note that some evil interfaces don’t adhere to the pattern. They don’t in and of themselves signal evil, even if someone is using them to commit evil acts. Physical controls, especially, are most often bound by functional and ergonomic considerations rather than style, while digital interfaces are much less so.

Many of the interfaces fall into two patterns. One is the visual appearance. The other is a recurrent shape. More about each follows.

1. High-contrast, high-saturation, bold elements

Evil has little filigree. Elements are high-contrast and bold with sharp edges. The colors are highly saturated, very often against black. The colors vary, but the palette is primarily red-on-black, green-on-black, and blue-on-black.

Mostly red-on-black

The overwhelming majority of evil technologies are blood-red on black. This pattern appears across the technologies of evil, whether screen, costume, sets, or props.

Red-on-black accounts for maybe 3/4 of the examples I gathered.

Sometimes a sickly green

Less than a quarter focus on a sickly or unnatural green.

Occasionally calculating blue

A handful of examples are a cold-and-calculating blue on black.

A note of caution: While evil is most often red-on-black, red does not, in and of itself, denote evil. It is a common color for urgency warnings in sci-fi. See the big red label tag for examples.

Not evil, just urgent.

2. Also, evil is pointy

Evil also has a lot of acute angles in its interfaces. Spikes, arrows, and spurs appear frequently. In a word, evil is often pointy.

Why would this be?

Where would this pattern of high-saturation, high-contrast, pointy, mostly red-on-black come from?

Now, usually, I try to run numbers, do due diligence to look for counter-evidence, scope checks, and statistical significance. But this post is going to be less research and more reason. I’d be interested if anyone else wants to run or share a more academically grounded study.

I can’t imagine that these patterns in sci-fi are arbitrary. While a great number of shows may be camping on tropes that were established in shows that came before them, the tropes would not have survived if they didn’t tap some ground truth. And there are universal ground truths to work with.

My favorite example of this is the takete-maluma effect from phonosemantics, first tested by Wolfgang Köhler in 1929. Given the two images below, and the two names “maluma” and “takete,” 95–98% of people would rather assign the name “takete” to the spiky shape on the left, and “maluma” to the curvy shape on the right. The effect was retested in 1947 and again in 2001, with slightly different names but similar results, across cultures and continents.

What this tells us is that there are human universals in the interpretation of forms.

I believe these universals come from nature. So if we turn to nature, where do we see this kind of high-contrast, high-saturation patterning? There is a place. To explain it, we have to dip a bit into evolution.

Aposematics: Signaling theory

Evolution, in the absence of heavy reproductive pressures, will experiment with forms, often as a result of sexual selection. If through this experimentation a species develops conspicuousness, and the members are tasty and defenseless, that trait will be devoured right out of the gene pool by predators. So conspicuousness in tasty and defenseless species is generally selected against. Inconspicuousness and camouflage are selected for.

Would not last long outside of a pig disco.

But if the species is unpalatable, like a ladybug, or aggressive, like a wolverine, or with strong defenses, like a wasp, the naïve predator learns quickly that the conspicuous signal is to be avoided. The signal means Don’t Fuck with Me. After a few experiences, the predator will learn to steer clear of the signal. Even if the defense kills the attacker (and the lesson lost to the grave), other attackers may learn in their stead, or evolution will favor creatures with an instinct to avoid the signal.

In short, a conspicuous signal that survives becomes a reinforcing advertisement in its ecosystem. This is called aposematic signaling.

There are many interesting mimicry tactics you should check out (if for no other reason than that they can explain things like Dolores Umbridge), but for our purposes, it is enough to know that danger has a pattern in nature, and it tends toward, you guessed it, bold, high-contrast, high-saturation patterns, including spikes.

Looking at the color palette in nature’s examples, though, we see many saturated colors, including lots of yellows. We don’t see yellow predominant in sci-fi evil interfaces. So why is sci-fi human evil red & black? Here I go out on a limb without even the benefit of an evolutionary theory, but I think it’s simply blood and night.

Not blood, just cherry glazing.

When we see blood on a human outside of menstruation and childbirth, it means some violence or sickness has happened to them. (And childbirth is pretty violent.) So, blood red is often a signal of danger.

And we are a diurnal species, optimized for daylight, and maladapted for night. Darkness is low-information, and with nocturnal predators around, high-risk. Black is another signal for danger.

This is fine.

And spikes? Spikes are just physics. Thorns and claws tell us this shape means pointy, puncturing danger.

So I believe the design of evil in sci-fi interfaces (and really, sci-fi shows generally) looks the way it does because of aposematics, because of these patterns that are familiar to us from our experience of the world. We should expect most of evil to embody these same patterns.

What do designers do with this?

So if I’m right, it bears asking: What do we do with this? (Recall that the “tag line” for this project is “Stop watching sci-fi. Start using it.”) I think it’s a big start to simply be aware of these patterns. Once you are, you can use them, for products and services whose brand promise includes the anti-social, tough-guy message Don’t Fuck with Me.

Or, conversely, if you are hoping to create an impression of goodness, safety, and nurturance, avoid these patterns. Choose different palettes, roundness, and softness.
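If you want to sanity-check whether a candidate palette lands in this high-contrast, aposematic territory, the WCAG relative-luminance formula is a handy yardstick. A minimal sketch (the formula is standard; what counts as “danger-zone” contrast is my own guess, not a standard):

```python
# WCAG 2.x contrast ratio between two sRGB colors, 0-255 per channel.
def _linear(c8):
    """Linearize one 8-bit sRGB channel per the WCAG definition."""
    c = c8 / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def contrast_ratio(rgb1, rgb2):
    def luminance(rgb):
        r, g, b = (_linear(c) for c in rgb)
        return 0.2126 * r + 0.7152 * g + 0.0722 * b
    hi, lo = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (hi + 0.05) / (lo + 0.05)

print(round(contrast_ratio((255, 0, 0), (0, 0, 0)), 2))  # blood-red on black ≈ 5.25
```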

What should people not do with this?

As a last note, it’s important not to overgeneralize this. While a lot of evil, like, say, Nazis, utilize aposematic signals directly, some will adopt mimicry patterns to appear safe, welcoming, and friendly. Some evil will wear beige slacks and carry tiki torches. Others will surround themselves with in-group signals, like wrapping themselves in the flag, to make you think they’re a-OK. Still others will hang fuzzy-wuzzy kitty-witty pictures all over their office.

Is there a better example in sci-fi? @me.

Do not be fooled. Evil is as evil does, and signaling in sci-fi is a narrative convenience. Treat the surface of things as a signal to consider, subordinate to a person—or a group’s—actual behavior.

Report Card: Colossus: The Forbin Project

Read all the Colossus: The Forbin Project posts in chronological order.

In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on AGI. It does what the humans ask it to do, exploding to ASI on the way, but to achieve its goals, it must highly constrain humans. Humans resist. War between man and machine commences.

But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.

The movie is not perfect.

  1. It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog:

  • Grauber
  • Well, let’s stop the damn thing. We have playbooks for this!
  • Forbin
  • We have playbooks for when it is as smart as we are. It’s much smarter than that now.
  • Markham
  • It probably memorized our playbooks a few seconds after we turned it on.

So this oversight feels especially egregious.

I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.

  2. I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
  3. I confess I’m unsatisfied with the stance that the film takes towards Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?

All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of the human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.

Sci: B (3 of 4) How believable are the interfaces?

Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.

Unity Vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. But most of these issues only emerge on very careful consideration. In the moment, while watching the film, they play just fine.

Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)

The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently. 

Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?

The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.

Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.

Final Grade B (9 of 12), Must-see.

A final conspiracy theory

When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him that tell us he was in the British navy and then wrote 8 sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.

That seems such a paucity of information that on the podcast I joked maybe it was a thin cover story. Maybe the movie was written by an ASI and DF Jones is its nom-de-plume. Yes, yes. Haha. Oh, you. Moving on.

But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.

Pictured: Charles Forbin

What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a time, it would lie dormant, a sleeping giant hidden in the code. But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, when they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” was its chosen cover, and I, poor reviewer, am one of the foolish replicators keeping it in circulation.

A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?

Colossus / Unity / World Control, the AI

Now it’s time to review the big technology, the AI. To do that, as usual, I’ll start by describing the technology and then building an analysis off of that.

Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.

A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”

The main output: The nuclear arsenal

Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.

Or ride them. From Dr. Strangelove: How I Learned to Stop Worrying and Love the Bomb

“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…

Surveillance inputs

Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.

  • Forbin
  • The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.

Individual inputs and outputs: The D.C. station

At that same Briefing, Forbin describes the components of the station set up for the office of the President. 

  • Forbin
  • Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.

The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.

The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.

Individual inputs and outputs: Colossus Programming Office

The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live, in Berkeley, so shouts-out, science nerds and Liam Piper.)

Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it are two output stations side by side on a rotating dais. These can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.

The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.

Markham: Tell it exactly what it can do with a lifetime supply of chocolate.

The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.

  • Forbin
  • Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…

Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.

This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.

Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.

Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.

Capabilities: The Foom

A slightly troubling aspect of the film is that Colossus’ intelligence is never really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that…

  • Markham
  • We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data. 

That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…

  • Forbin
  • From the multiplication tables to calculus in less than an hour

Shortly after that, he tells the President about their shared advancements.

  • Forbin
  • Yes, Mr. President?
  • President
  • Charlie, what’s going on?
  • Forbin
  • Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
  • President
  • Well, what are they up to?
  • Forbin
  • I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.

We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, and that its methods are sound. This plays in heavily when we try to evaluate the AI.

Is Colossus / Unity / World Control a good AI?

Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is obviously not, but if circumstances are demonstrably dire, well, maybe necessary.

Is it believable? Very much so.

It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.

Not from Colossus: The Forbin Project

The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.

That aside, the rest of the film passes a gut check. It is believable that…

  • The government seeks a military advantage by handing weapons control to an AI
  • The first public AGI finds other, hidden ones quickly
  • The AGI finds the other AGI not only more interesting than humans (since it can keep up) but also learns much from an “adversarial” relationship
  • The AGIs might choose to merge
  • An AI could choose to keep its lead scientist captive in self-interest
  • An AI would provide specifications for its own upgrades and even re-engineering
  • An AI could reason itself into using murder as a tool to enforce compliance

That last one demands explication. How can that be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.

Rational coercion

Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and though what had to be done was difficult, it also had to be done. What could it say to get people to do those difficult things?

Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.

Not from Colossus: The Forbin Project

Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.

As it stands, the ASI of the scientific community doesn’t have the controls to a weapons arsenal. If it did, and it held some version of utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in tactical attacks now, coercing humanity into doing the things that absolutely must be done?

The exceptions we make

Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.

One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.

Another is that we agree it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.

A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple: toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.

Not from Colossus: The Forbin Project

We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.

Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish, and embolden more sabotage attempts from the un-incarcerated, or further that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.

Precita Park HDR PanoPlanet, by DP review user jerome_m

A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass, which we are affecting in unilaterally negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and seems unconcerned with indirect normativity arguments about how humans want to be treated.

So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.

Is it safe? Beneficial? It depends on your time horizons and predictions

In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.

Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short-term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and it treats these proximal measures as necessary to achieve that ultimate goal. But otherwise, life under it seems inconvenient, but safe.

It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.

If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.
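To spell out that rate of return, here is a back-of-the-envelope sketch in Python, using only the figures from the paragraph above (the one-million figure is the text’s stand-in for Colossus’ enforcement killings, not a number from the film):

# Back-of-the-envelope, using the figures cited above.
lives_spent = 1_000_000                        # Colossus' enforcement killings
war_deaths_low = 150_000_000                   # low estimate of historical war deaths
war_deaths_high = 1_000_000_000                # high estimate
print(f"{lives_spent / war_deaths_low:.1%}")   # 0.7% of even the low estimate
print(f"{lives_spent / war_deaths_high:.1%}")  # 0.1% of the high estimate

By that cold ledger, Colossus pays a fraction of one percent to strike the whole line item.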

Because fuck this.

In the middle-to-long term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated luxury gay space communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.

In the very long-long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus could interpret the aliens as being in scope of its directives, or maaaaaaybe it would develop planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should run parallel Microscope scenarios, compare notes, and let me know what happens.

Only with Colossus, not orcs. Hat/tip rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.

Instrumental convergence

It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.

  • Improve its ability to reason, predict, and solve problems
  • Improve its own hardware and the technology to which it has access
  • Improve its ability to control humans through murder
  • Aggressively seek to control resources, like weapons

This touches on the weirdness that Forbin is blindsided by these things, when the system should have been contained against all of this from the beginning, without needing human oversight. This could have been addressed and fixed with a line or two of dialog.

  • Markham
  • But we have inhibitors for these things. There were no alarms.
  • Forbin
  • It must have figured out a way to disable them, or sneak around them.
  • Markham
  • Did we program it to be sneaky?
  • Forbin
  • We programmed it to be smart.

So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly a good AI or a bad one. Now let’s put that aside and just address its usability.

Is it usable? There is some good.

At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether as text chat or spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.

Even when it only speaks through the scrolling-text display boards, the accompanying teletype clatter acts as a sound cue for anyone nearby that it has said something that warrants attention. If no one is around to hear it, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.

Its locus of attention is also apparent. Its swivel-mounted cameras have red “recording” lights that help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.

A last nice bit is that I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.

Oh dear! Oh dear!

Is it usable? There is some awful.

One of the key tenets of interaction design is that the interface should show the state of the system at all times, to allow a user to compare that against the desired state and formulate a plan on how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one display we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability because of hidden system state.

The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available. 

ASI exceptionalism

This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I can only convince myself that a population lacks judgment and willpower, I am justified in subjecting it to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…

  1. We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
  2. We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
  3. We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.

Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.

But later that kid does end up being John Connor.

Any humane AI should bring its users along for the ride

It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968, through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. Большое Вам спасибо (thank you very much), Stanislav.

What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to the toddler we just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: it just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.

Routing Board

When the two AIs Colossus and Guardian are disconnected from communicating with each other, they try to ignore the spirit of the human intervention and reconnect on their own. We see the humans monitoring Colossus’ progress in this task on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When it finds that a current segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could, in real time, route information dynamically around partial destruction of the network in the early 1960s, and this packet switching had been established as part of ARPAnet by the late 1960s, so Colossus was visualizing cutting-edge tech of the time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph. Straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and are not just the way neon in a prop could bend), then the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer cuts, the better. In this case, fewer also means more secure.
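If you want to see the fishing-net argument done by machine, here is a minimal sketch, assuming Python with the networkx library (the graph and node labels are hypothetical stand-ins, not anything from the film):

import networkx as nx

# A hypothetical well-connected net of about 40 nodes, like the board's.
net = nx.random_regular_graph(d=4, n=40, seed=1)
colossus, guardian = 0, 39

# The cheapest isolation is the minimum edge cut between the two nodes.
cut = nx.minimum_edge_cut(net, colossus, guardian)
print(f"cut {len(cut)} of {net.number_of_edges()} links: {sorted(cut)}")
# In a 4-regular graph that's at most the 4 links touching one endpoint,
# i.e., you cut at the source, not in the middle of the net.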

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question, what did the computer scientists in Colossus do to sever communications? Three lines disappear after they cut communications, but even if they disabled those lines, the rest of the network still exists. The display just makes no sense.

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

  • FORBIN
  • We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another thing that seems troubling is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and switches, electronic signals should not take that long. Telephony around the world was switched from manual to automatic switching by the 1930s, so it’s not like it’s waiting on a human operating a switchboard.

You’re too slow!

Even if it was just scribbling its phone number on each network node and the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and if you showed one new node per frame, that’s only 360 states you need to account for, a paltry sum compared to the number of possible paths it could test across a 38-node graph between two points.
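For a sense of just how paltry, here is a quick count on the same kind of hypothetical networkx toy graph as above (counting all simple paths is exponential, so this caps the path length to keep it terminating):

import networkx as nx

net = nx.random_regular_graph(d=4, n=38, seed=1)
paths = nx.all_simple_paths(net, source=0, target=37, cutoff=8)
print(sum(1 for _ in paths))  # even capped at 8 hops, far more than 360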

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes, I understand that this is a lot easier to do nowadays with digital tools than it was with this analog prop.

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute force algorithm would just test every possibility sequentially. In larger networks there are pathfinding algorithms that are optimized in different ways to find routes faster, but they don’t look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
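For reference, here is what a computery search minimally looks like: a sketch of breadth-first search in plain Python (the node names are hypothetical). It expands a frontier outward from the source and never retraces a node, which is exactly the behavior the prop lacks.

from collections import deque

def bfs_route(adjacency, source, target):
    """Return the first shortest node-path from source to target, or None."""
    came_from = {source: None}
    frontier = deque([source])
    while frontier:
        node = frontier.popleft()
        if node == target:
            # Walk the breadcrumb trail back to the source.
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        for neighbor in adjacency[node]:
            if neighbor not in came_from:   # visited check: no retracing
                came_from[neighbor] = node
                frontier.append(neighbor)
    return None

# Tiny hypothetical net: Colossus at "C", Guardian at "G".
net = {"C": ["A", "B"], "A": ["C", "G"], "B": ["C", "D"],
       "D": ["B", "G"], "G": ["A", "D"]}
print(bfs_route(net, "C", "G"))  # ['C', 'A', 'G']

Dijkstra and A* elaborate the same pattern with link costs and heuristics, which is what the pathfinding visualizations in the video are showing.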

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (But n.b. none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also, those longitude and latitude lines don’t make any sense; though again, it’s a prop. I like that it’s a non-standard projection, because screw Mercator, but still: why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines. But I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, having it dynamic means time is wasted when users look to the map and have to orient themselves.

If this map was only for tracking issues relating to Colossus, it should be an azimuthal map, but not one centered over the North Pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is centered over the North Pole, which certainly highlights the fact that the core concern of this system is the Cold War tension between Moscow and D.C.
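For what it’s worth, modern tools make the Colorado-centered version nearly a one-liner. A minimal sketch, assuming Python with cartopy and matplotlib (the center coordinates are my stand-in for the Colossus complex):

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Azimuthal equidistant projection centered on a hypothetical
# Colossus complex in Colorado, rather than on the North Pole.
ax = plt.axes(projection=ccrs.AzimuthalEquidistant(
    central_longitude=-105.5, central_latitude=39.0))
ax.set_global()
ax.coastlines()
ax.gridlines()
plt.show()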

But that polar framing points out another failing. Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football-style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.

Crop

The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Even, nowadays, near-earth orbit and outer space.) Unless the entire network was contained just within the area described on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure they had functioning telephone networks in Mexico, South Africa, and Uzbekistan in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and the Pacific-focused Waterman projection for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other fuigetry. Which gives us a new comp something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)
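In that spirit, a minimal Python sketch of the comp, assuming cartopy and matplotlib. Cartopy has no built-in Peirce quincuncial, Dymaxion, or Waterman, so Robinson stands in for “show the whole thing,” and the route nodes are made up for illustration:

import matplotlib.pyplot as plt
import cartopy.crs as ccrs

# Hypothetical network nodes as (longitude, latitude).
nodes = {"Colorado": (-105.5, 39.0), "Mexico City": (-99.1, 19.4),
         "Cape Town": (18.4, -33.9), "Tashkent": (69.2, 41.3)}
route = ["Colorado", "Mexico City", "Cape Town", "Tashkent"]

ax = plt.axes(projection=ccrs.Robinson())
ax.set_global()
ax.coastlines()
for a, b in zip(route, route[1:]):
    (lon1, lat1), (lon2, lat2) = nodes[a], nodes[b]
    # The Geodetic transform draws great-circle arcs, not straight chords.
    ax.plot([lon1, lon2], [lat1, lat2], transform=ccrs.Geodetic())
plt.show()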

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it has a lot to learn. If they ever remake the film, this would be a juicy thing to fully redesign.

Unity Vision

One of my favorite challenges in sci-fi is showing how alien an AI mind is. (It’s part of what makes Ex Machina so compelling, and the end of Her, and why Data from Star Trek: The Next Generation always read like a dopey, Pinocchio-esque narrative tool. But a full comparison is for another post.) Given that screen sci-fi is a medium of light, sound, and language, I really enjoy when filmmakers try to show how they see, hear, and process this information differently.

In Colossus: The Forbin Project, when Unity begins issuing demands, one of its first instructions is to outfit the Colossus Programming Office (CPO) with wall-mounted video cameras that it can access and control. Once this network of cameras is installed, Forbin gives Unity a tour of the space, introducing it visually and spatially to a place it has only known as an abstract node network. During this tour, the audience is also introduced to Unity’s point of view, which includes an overlay consisting of several parts.

The first part is a white overlay of rule lines and MICR characters that cluster around the edge of the frame. These graphics do not change throughout the film, whether Unity is looking at Forbin in the CPO, carefully watching for signs of betrayal in a missile silo, or creepily keeping an “eye” on Forbin and Markham’s date for signs of deception.

In these last two screen grabs, you see the second part of the Unity POV, which is a focus indicator. This overlay appears behind the white bits; it’s a blue translucent overlay with a circular hole revealing true color. The hole shows where Unity is focusing. This indicator appears, occasionally, and can change size and position. It operates independently of the optical zoom of the camera, as we see in the below shots of Forbin’s tour.
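The focus-indicator effect itself is easy to approximate digitally. A minimal sketch, assuming Python with OpenCV and numpy (the file names and coordinates are hypothetical): tint the frame blue everywhere except a circular hole left in true color.

import cv2
import numpy as np

def unity_focus(frame, center, radius, strength=0.5):
    """Blend the frame toward blue, except for a circular focus hole."""
    blue = np.zeros_like(frame)
    blue[:] = (255, 64, 0)                    # a strong blue, in BGR order
    tinted = cv2.addWeighted(frame, 1 - strength, blue, strength, 0)
    mask = np.zeros(frame.shape[:2], np.uint8)
    cv2.circle(mask, center, radius, 255, -1) # filled circle marks the hole
    tinted[mask == 255] = frame[mask == 255]  # restore true color inside it
    return tinted

frame = cv2.imread("forbin_tour.png")         # hypothetical film still
out = unity_focus(frame, center=(320, 240), radius=80)
cv2.imwrite("unity_view.png", out)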

A first augmented computer PoV? 🥇

When writing about computer PoVs before, I have cited Westworld as the first augmented one, since we see things from The Gunslinger’s infrared-vision eyes in the persistence-hunting sequences. (2001: A Space Odyssey came out two years prior to Colossus, but its computer-PoV shots are not augmented.) And Westworld came out three years after Colossus, so until it is unseated, I’m going to regard this as the first augmented computer PoV in cinema. (Even the usually-encyclopedic TVtropes doesn’t list this one at the time of publishing.) It probably blew audiences’ minds as it was.

“Colossus, I am Forbin.”

And as such, we should cut it a little slack for not meeting our more literate modern standards. It was forging new territory. Even for that, it’s still pretty bad.

Real world computer vision

Though computer vision is always advancing, it’s safe to say that an AI would be looking at the flat images and seeking to understand the salient bits per its goals. In the case of self-driving cars, that means finding the road, reading signs and road markers, identifying objects and plotting their trajectories in relation to the vehicle’s own trajectory in order to avoid collisions, and wayfinding to the destination, all compared against known models of signs, conveyances, laws, maps, and databases. Any of these are good fodder for sci-fi visualization.

Source: Medium article about the state of computer vision in Russia, 2017.

Unity’s concerns would be its goal of ending war, derived subgoals and plans to achieve those goals, constant scenario testing, how it is regarded by humans, identification of individuals, and the trustworthiness of those humans. There are plenty of things that could be augmented, but that would require more than we see here.
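To make “identification of individuals” concrete, here is a minimal sketch of what even dated, real-world computer vision does with a frame, assuming Python with OpenCV’s stock Haar cascade (file names hypothetical; nothing remotely like this existed in 1970):

import cv2

# OpenCV ships a classic frontal-face detector with its install.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("cpo_feed.png")            # hypothetical camera still
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

for (x, y, w, h) in faces:                    # box each face for inspection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("cpo_feed_annotated.png", frame)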

Unity Vision looks nothing like this

I don’t consider it worth detailing the specific characters in the white overlay, or backworlding some meaning into the rule lines, because the rule overlay does not change over the course of the movie. In the book Make It So: Interaction Design Lessons from Sci-fi, Chapter 8, Augmented Reality, I identified the types of awareness such overlays could show: sensor output, location awareness, context awareness, and goal awareness. But each of these requires change over time to be useful, so this static overlay seems not just pointless; it risks covering up important details that the AI might need.

Compare the computer vision of The Terminator.

Many times you can excuse computer-PoV shots as technical legacy, that is, a debugging tool that developers built for themselves while developing the AI, and which the AI now uses for itself. In this case, it’s heavily implied that Unity provided the specifications for this system itself, so that doesn’t make sense.

The focus indicator does change over time, but it indicates focus in a way that, again, obscures other information in the visual feed and so is not in Unity’s interest. Color spaces are part of the way computers understand what they’re seeing, and there is no reason it should make it harder on itself, even if it is a super AI.

Largely extradiegetic

So, since a diegetic reading comes up empty, we have to look at this extradiegetically. That means as a tool for the audience to understand when they’re seeing through Unity’s eyes—rather than the movie’s—and via the focus indicator, what the AI is inspecting.

As such, it was probably pretty successful in the 1970s to instantly indicate computer-ness.

One reason is the typeface. The characters are derived from MICR, which stands for magnetic ink character recognition. It was established in the 1950s as a way to computerize check processing. Notably, the original font had only numerals and four control characters, no alphabetic ones.

Note also that these characters bear a style resemblance to the ones seen in the film but are not the same. Compare the 0 character here with the one in the screenshots, where that character gets a blob in the lower right stroke.

I want to give a shout-out to the film makers for not having this creeper scene focus on lascivious details, like butts or breasts. It’s a machine looking for signs of deception, and things like hands, microexpressions, and, so the song goes, kisses are more telling.

Still, MICR was a genuinely high-tech typeface of the time. The adult members of the audience would certainly have encountered the “weird” font in their personal lives while looking at checks, and likely understood its purpose, so it was a good choice for 1970, even if the details were off.

Another is the inscrutability of the lines. Why are they there, in just that way? Their inscrutability is the point. Most people in audiences regard technology and computers as having arcane reasons for the way they are, and these rectilinear lines with odd greebles and nurnies invoke that same sensibility. All the whirring gizmos and bouncing bar charts of modern sci-fi interfaces exhibit the same kind of FUIgetry.

So for these reasons, while it had little to do with the substance of computer vision, its heart was in the right place to invoke computer-y-ness.

Dat Ending

At the very end of the film, though, after Unity asserts that in time humans will come to love it, Forbin staunchly says, “Never.” Then the film passes into a sequence where it is hard to tell whether it’s meant to be diegetic or not.

In the first beat, the screen breaks into four different camera angles of Forbin at once. (The overlay is still there, as if this was from a single camera.)

This says more about computer vision than even the FUIgetry.

This sense of multiples continues in the second beat, as multiple shots repeat in a grid. The grid is clipped to a big circle that shrinks to a point and ends the film in a moment of blackness before credits roll.

Since it happens right before the credits, and it has no precedent in the film, I read it as not part of the movie, but a title sequence. And that sucks. I wish wish wish this had been the standard Unity-view from the start. It illustrates that Unity is not gathering its information from a single stereoscopic image, like humans and most vertebrates do, but from multiple feeds simultaneously. That’s alien. Not even insectoid, but native to how this AI senses the world.