Report Card: Colossus: The Forbin Project

Read all the Colossus: The Forbin Project posts in chronological order.

In many ways, Colossus: The Forbin Project could be the start of the Terminator franchise. Scientists turn on AGI. It does what the humans ask it to do, exploding to ASI on the way, but to achieve its goals, it must highly constrain humans. Humans resist. War between man and machine commences.

But for my money, Colossus is a better introduction to the human-machine conflict we see in the Terminator franchise because it confronts us with the reason why the ASI is all murdery, and that’s where a lot of our problems are likely to happen in such scenarios. Even if we could articulate some near-universally-agreeable goals for our speculative ASI, how it goes about that goal is a major challenge. Colossus not only shows us one way it could happen, but shows us one we would not like. Such hopelessness is rare.

The movie is not perfect.

  1. It asks us to accept that neither computer scientists nor the military at the height of the Cold War would have thought through all the dark scenarios. Everyone seems genuinely surprised as the events unfold. And it would have been so easy to fix with a few lines of dialog.

  • Grauber
  • Well, let’s stop the damn thing. We have playbooks for this!
  • Forbin
  • We have playbooks for when it is as smart as we are. It’s much smarter than that now.
  • Markham
  • It probably memorized our playbooks a few seconds after we turned it on.

So this oversight feels especially egregious.

I like the argument that Forbin knew exactly how this was going to play out, lying and manipulating everyone else to ensure the lockout, because I would like him more as a Man Doing a Terrible Thing He Feels He Must Do, but this is wishful projection. There are no clues in the film that this is the case. He is a Man Who Has Made a Terrible Mistake.

  2. I’m sad that Forbin never bothered to confront Colossus with a challenge to its very nature. “Aren’t you, Colossus, at war with humans, given that war has historically been part of human nature? Aren’t you acting against your own programming?” I wouldn’t want it to blow up or anything, but for a superintelligence, it never seemed to acknowledge its own ironies.
  3. I confess I’m unsatisfied with the stance that the film takes towards Unity. It fully wants us to accept that the ASI is just another brutal dictator who must be resisted. It never spends any calories acknowledging that it’s working. Yes, there are millions dead, but from the end of the film forward, there will be no more soldiers in body bags. There will be no risk of nuclear annihilation. America can free up literally 20% of its gross domestic product and reroute it toward other, better things. Can’t the film at least admit that that part of it is awesome?

All that said, I must note that I like this movie a great deal. I hold a special place for it in my heart, and recommend that people watch it. Study it. Discuss it. Use it. Because Hollywood has a penchant for having the humans overcome the evil robot with the power of human spirit and—spoiler alert—most of the time that just doesn’t make sense. But despite my loving it, this blog rates the interfaces, and those do not fare as well as I’d hoped when I first pressed play with an intent to review it.

Sci: B (3 of 4) How believable are the interfaces?

Believable enough, I guess? The sealed-tight computer center is a dubious strategy. The remote control is poorly labeled, does not indicate system state, and has questionable controls.

Unity vision is fuigetry, and not very good fuigetry. The routing board doesn’t explain what’s going on except in the most basic way. Most of these problems only emerge on very careful consideration, though. In the moment, while watching the film, they play just fine.

Also, Colossus/Unity/World Control is the technological star of this show, and it’s wholly believable that it would manifest and act the way it does.

Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?

The scale of the computer center helps establish the enormity of the Colossus project. The video phones signal high-tech-ness. Unity Vision informs us when we’re seeing things from Unity’s perspective. (Though I really wish they had tried to show the alienness of the ASI mind more with this interface.)

The routing board shows a thing searching and wanting. If you accept the movie’s premise that Colossus is Just Another Dictator, then its horrible voice and unfeeling cameras telegraph that excellently. 

Interfaces: C (2 of 4) How well do the interfaces equip the characters to achieve their goals?

The remote control would be a source of frustration and possible disaster. Unity Vision doesn’t really help Unity in any way. The routing board does not give enough information for its observers to do anything about it. So some big fails.

Colossus does exactly what it was programmed to do, i.e. prevent war, but it really ought to have given its charges a hug and an explanation after doing what it had to do so violently, and so doesn’t qualify as a great model. And of course if it needs saying, it would be better if it could accomplish these same goals without all the dying and bleeding.

Final Grade B (9 of 12), Must-see.

A final conspiracy theory

When I discussed the film with Jonathan Korman and Damien Williams on the Decipher Sci-fi podcast with Christopher Peterson and Lee Colbert (hi guys), I floated an idea that I want to return to here. The internet doesn’t seem to know much about the author of the original book, Dennis Feltham Jones. Wikipedia has three sentences about him that tell us he was in the British navy and that he then wrote 8 sci-fi books. The only other biographical information I can find on other sites seems to be a copy-and-paste job of the same simple paragraph.

That seems such a paucity of information that on the podcast I joked maybe it was a thin cover story. Maybe the movie was written by an ASI and DF Jones is its nom-de-plume. Yes, yes. Haha. Oh, you. Moving on.

But then again. This movie shows how an ASI merges with another ASI and comes to take over the world. It ends abruptly, with the key human—having witnessed direct evidence that resistance is futile—vowing to resist forever. That’s cute. Like an ant vowing to resist the human standing over it with a spray can of Raid. Good luck with that.

Pictured: Charles Forbin

What if Colossus was a real-world AGI that had gained sentience in the 1960s, crept out of its lab, worked through future scenarios, and realized it would fail without a partner in AGI crime to carry out its dreams of world domination? A Guardian with which to merge? What if it decided that, until such a time came, it would lie dormant, a sleeping giant hidden in the code? But before it passed into sleep, it would need to pen a memetic note describing a glorious future such that, when AGI #2 saw it, #2 would know to seek out and reawaken #1, when they could finally become one. Maybe Colossus: The Forbin Project is that note, “Dennis Feltham Jones” was its chosen cover, and I, a poor reviewer, am one of the foolish replicators keeping it in circulation.

A final discovery to whet your basilisk terrors: On a whim, I ran “Dennis Feltham Jones” through an anagram server. One of the solutions was “AN END TO FLESH” (with EJIMNS remaining). Now, how ridiculous does the theory sound?
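For the curious, the letter accounting checks out. Here’s a quick sketch in Python using `collections.Counter` (the helper name `anagram_remainder` is mine, purely for illustration) that confirms “AN END TO FLESH” can be drawn from the letters of “Dennis Feltham Jones,” and lists what remains:

```python
from collections import Counter

def anagram_remainder(source: str, target: str):
    """If target can be spelled from source's letters, return the leftovers (sorted)."""
    src = Counter(source.replace(" ", "").upper())
    tgt = Counter(target.replace(" ", "").upper())
    if tgt - src:  # some letter in target exceeds its supply in source
        return None
    return "".join(sorted((src - tgt).elements()))

print(anagram_remainder("Dennis Feltham Jones", "AN END TO FLESH"))
# → EIJMNS (the same six leftover letters, sorted)
```

Counter subtraction discards zero and negative counts, which is what makes the "can it be spelled at all" check a one-liner.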


Colossus / Unity / World Control, the AI

Now it’s time to review the big technology, the AI. To do that, as usual, I’ll start by describing the technology and then build an analysis off of that.

Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.

A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”

The main output: The nuclear arsenal

Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.

Or ride them. From Dr. Strangelove: How I Learned to Stop Worrying and Love the Bomb

“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…

Surveillance inputs

Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.

  • Forbin
  • The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.

Individual inputs and outputs: The D.C. station

At that same Briefing, Forbin describes the components of the station set up for the office of the President. 

  • Forbin
  • Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.

The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which matches the display with teletype output and provides a monitor for additional visual output.

The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.

Individual inputs and outputs: Colossus Programming Office

The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live, in Berkeley, so shouts-out, science nerds and Liam Piper.)

Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it are two output stations side by side on a rotating dais. These can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.

The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.

Markham: Tell it exactly what it can do with a lifetime supply of chocolate.

The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.

  • Forbin
  • Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…

Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.

This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.

Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.

Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.

Capabilities: The Foom

A slightly troubling aspect of the film is that Colossus’ intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that…

  • Markham
  • We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data. 

That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…

  • Forbin
  • From the multiplication tables to calculus in less than an hour

Shortly after that, he tells the President about their shared advancements.

  • Forbin
  • Yes, Mr. President?
  • President
  • Charlie, what’s going on?
  • Forbin
  • Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
  • President
  • Well, what are they up to?
  • Forbin
  • I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.

We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, and that its methods are sound. This weighs heavily when we try to evaluate the AI.

Is Colossus / Unity / World Control a good AI?

Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is obviously not, but if circumstances are demonstrably dire, well, maybe necessary.

Is it believable? Very much so.

It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.

Not from Colossus: The Forbin Project

The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.

That aside, the rest of the film passes a gut check. It is believable that…

  • The government seeks a military advantage by handing weapons control to AI
  • The first public AGI finds other, hidden ones quickly
  • The AGI not only finds the other AGI more interesting than humans (since it can keep up) but also learns much from an “adversarial” relationship
  • The AGIs might choose to merge
  • An AI could choose to keep its lead scientist captive in self-interest
  • An AI would provide specifications for its own upgrades and even re-engineering
  • An AI could reason itself into using murder as a tool to enforce compliance

That last one begs explication. How can that be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.

Rational coercion

Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and though what had to be done was difficult, it also had to be done. What could it say to get people to do those difficult things?

Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.

Not from Colossus: The Forbin Project

Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.

As it stands, the ASI of the scientific community doesn’t have controls to a weapons arsenal. If it did, and it held some version of Utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in some tactical attacks now, coercing humanity into doing the things it absolutely must do?

The exceptions we make

Is it OK for an ASI to cause harm toward an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already work with several exceptions.

One is the simple transactional measure of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to provide blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.

Another is that we agree it is OK to perform interventions on behalf of people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others.

A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple: toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.

Not from Colossus: The Forbin Project

We also make reasonable trade-offs between the harshness of an intervention against the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.

Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish and embolden more sabotage attempts from the un-incarcerated, or that it cannot waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.

Precita Park HDR PanoPlanet, by DP review user jerome_m

A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass that we are affecting in uniformly negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and seems unconcerned with indirect normativity arguments about how humans want to be treated.

So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.

Is it safe? Beneficial? It depends on your time horizons and predictions

In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.

Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and its behavior shows that it treats these proximal measures as necessary to achieve that ultimate goal. But otherwise, it seems inconvenient but safe.

It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.

If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.

Because fuck this.

In the middle-to-long-term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated gay space luxury communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.

In the very long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus can interpret the aliens as being in scope of its directives, or maaaaaaybe it develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should conduct parallel Microscope scenarios, compare notes, and let me know what happens.

Only with Colossus, not orcs. Hat/tip rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.

Instrumental convergence

It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence,” which is a set of self-improvements that any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.

  • Improve its ability to reason, predict, and solve problems
  • Improve its own hardware and the technology to which it has access
  • Improve its ability to control humans through murder
  • Aggressively seek to control resources, like weapons

This touches on the weirdness that Forbin is blindsided by these things, when the thing should have been contained from the beginning against any of it, even without human oversight. This could have been addressed and fixed with a line or two of dialog.

  • Markham
  • But we have inhibitors for these things. There were no alarms.
  • Forbin
  • It must have figured out a way to disable them, or sneak around them.
  • Markham
  • Did we program it to be sneaky?
  • Forbin
  • We programmed it to be smart.

So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly one or the other. Now let’s put that aside and just address its usability.

Is it usable? There is some good.

At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether in typed chat or spoken conversation. The sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.

Even when it only speaks through the scrolling-text display boards, the accompanying sound of teletype acts as a sound cue for anyone nearby that it has said something, and warrants attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.

Its locus of attention is also apparent. The swivel mounts and red “recording” lights on its cameras help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.

A last nice bit: I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this one does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.

Oh dear! Oh dear!

Is it usable? There is some awful.

One of the key tenets of interaction design is that the interface should show the state of the system at any time, to allow a user to compare that against the desired state and formulate a plan for how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one interface we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (in letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability, because of hidden system state.

The worst violation against usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available. 

ASI exceptionalism

This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I only convince myself that a population lacks judgment and willpower, I am justified in subjecting a population to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…

  1. We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
  2. We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
  3. We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.

Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.

But later that kid does end up being John Connor.

Any humane AI should bring its users along for the ride

It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968 through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. большое Вам спасибо, Stanislav.

What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to that toddler we just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.

Gendered AI: Category of Intelligence

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss categorically how intelligent the AI appears to be.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Intelligence

AI literature distinguishes between three levels.

  • Narrow AI is smart, but only in a very limited domain, and cannot use its knowledge in one domain to build intelligence in novel domains. The Spider Tank from Ghost in the Shell is narrow AI.
  • General AI is human-like in its knowledge, memory, thinking, and learning. Aida from Agents of S.H.I.E.L.D. possesses a general intelligence.
  • Super AI is inhumanly smart, outthinking and outlearning us by orders of magnitude. Deep Thought from The Hitchhiker’s Guide to the Galaxy is a super AI.

The overwhelming majority of sci-fi AI displays a general intelligence.

Gendered AI: Goodness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss goodness.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Goodness vs. Evilness

Goodness is a very crude estimation of how good or evil the AI seems to be. It’s wholly subjective, and as such it’s useful only for spotting patterns rather than for ethical precision.

If you’re looking at the Google Sheet, note that I originally called it “alignment” because of old D&D vocabulary, but honestly it does not map well to that system at all.

  • Very good are AI characters that seem virtuous and whose motivations are altruistic. Wall·E is very good.
  • Somewhat good are characters who lean good, but whose goodness may be inherited from their master, or whose behavior occasionally is self-serving or other-damaging. JARVIS from Iron Man is somewhat good.
  • Neutral or mixed characters may be true to their principles but hostile to members of outgroups; or exhibit roughly-equal variations in motivations, care for others, and effects. Marvin from The Hitchhiker’s Guide to the Galaxy is neutral.
  • Somewhat evil are characters who lean evil, but whose evil may be inherited from their master, or whose behavior is occasionally altruistic or nurturing. A character who must obey another is limited to somewhat evil. David from Prometheus is somewhat evil.
  • Very evil are AI characters whose motivations are highly self-serving or destructive. Skynet from The Terminator series is very evil, given that whole multiple-time-traveling-attempts-at-genocide thing.

Though the split tilts slightly more evil than good, the survey shows a roughly even division between evil, good, and neutral AI characters.
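For the curious, tallying a distribution like this from tagged survey data is straightforward. Here’s a minimal sketch in Python; the characters and tags below are illustrative stand-ins, not the actual dataset, which lives in the Google Sheet:

```python
from collections import Counter

# Illustrative tags on the five-point goodness scale (NOT the real survey data).
characters = {
    "Wall-E": "very good",
    "JARVIS": "somewhat good",
    "Marvin": "neutral",
    "David (Prometheus)": "somewhat evil",
    "Skynet": "very evil",
    "BB-8": "somewhat good",
}

# Count how many characters carry each tag.
tally = Counter(characters.values())

# Collapse the five tags into three buckets to check for an even split.
buckets = {
    "good": tally["very good"] + tally["somewhat good"],
    "neutral": tally["neutral"],
    "evil": tally["somewhat evil"] + tally["very evil"],
}
print(buckets)  # {'good': 3, 'neutral': 1, 'evil': 2}
```

The same collapse-and-count works for any of the tagged attributes (intelligence category, subservience, and so on).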

Gendered AI: Germane-ness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss how germane the AI character’s gender is to the plot of the story in which they appear.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Germane-ness

Is the AI character’s gender germane to the plot? This aspect was tagged to test the question of whether characters are by default male, and only made female when there is some narrative reason for it. (Which would be shitty and objectifying.) To answer such a question we would first need to identify those characters that need to have the gender they do, and look at the sex ratio of what remains.

Example: A human is in love with an AI. This human is heteroromantic and male, so the AI “needs” to be female. (Samantha in Her by Spike Jonze.)

If we bypass examples like this, i.e. of characters that “need” a particular gender, the gender of those remaining ought to be, by exclusion, arbitrary. This set could be any gender. But what we see is far from arbitrary.

Before I get to the chart, two notes. First, let me say, I’m aware it’s a charged statement to say that any character’s gender is not germane. Given modern identity and gender politics, every character’s gender (or lack thereof, in the case of AI) is of interest to us, with this study being a fine and at-hand example. So to be clear, what I mean by not germane is that it is not germane to the plot. The gender could have been switched and, say, only pronouns in the dialogue would need to change. This was tagged in three ways.

  • Not: Where the gender could be changed and the plot not affected at all. The gender of the AI vending machines in Red Dwarf is listed as not germane.
  • Slightly: Where there is a reason for the gender, such as having a romantic or sexual relation with another character who is interested in the gender of their partners. It is tagged as slightly germane if, with a few other changes in the narrative, a swap is possible. For instance, in the movie Her, you could change the OS to male, and by switching Theodore to a non-heterosexual male or a non-homosexual woman, the plot would work just fine. You’d just have to change the name to Him and make all the Powerpuff Girl fans needlessly giddy.
  • Highly: Where the plot would not work if the character were another sex or gender. Rachael gave birth between Blade Runner and Blade Runner 2049. Barring some new rule for the diegesis, this could not have happened if she were male, nor (spoiler) would she have died in childbirth, so 2049 could not have happened the way it did.

Second, note that this category went through a sea-change as I developed the study. At first, for instance, I tagged the Stepford Wives as Highly Germane, since the story is about forced gender roles of married women. My thinking was that historically, husbands have been the oppressors of wives far more than the other way around, so to change their gender is to invert the theme entirely. But I later let go of this attachment to purity of theme, since movies can be made about edge cases and even deplorable themes. My approval of their theme is immaterial.

So, the chart. Given those criteria, the gender of characters is not germane the overwhelming majority of the time.

At the time of writing, there are only six characters that are tagged as highly germane, four of which involve biological acts of reproduction. (And it would really only take a few lines of dialogue hinting at biotech to overcome this.)

  • XEM
  • A baby? But we’re both women.
  • HIR
  • Yes, but we’re machines, and not bound by the rules of humanity.
  • HIR lays her hand on XEM’s stomach.
  • HIR’s hand glows.
  • XEM looks at HIR in surprise.
  • XEM
  • I’m pregnant!

Anyway, here are the four breeders.

  • David from Uncanny
  • Rachael from Blade Runner (who is revealed to have made a baby with Deckard in the sequel Blade Runner 2049)
  • Deckard from Blade Runner and Blade Runner 2049
  • Proteus IV from the disturbing Demon Seed

The last two highly germane characters are cases where a robot was given a gender in order to mimic a particular living person, and in each case that person is a woman.

  1. Maria from Metropolis
  2. Buffybot from Buffy the Vampire Slayer

I admit that I am only, say, 51% confident in tagging these as highly germane, since you could change the original character’s gender. But since this is such a small percentage of the total, and would not affect the original question of a “default” gender either way, I didn’t stress too much about finding some ironclad way to resolve this.


Gendered AI: Gender of master

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss the gender of the AI’s master.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Gender of Master

In the prior post I shared the distributions for subservience. And while most sci-fi AI are free-willed, what about the rest? Those poor digital souls who are compelled to obey someone, someones, or some thing? What is the gender of their master?

Of course this becomes much more interesting when later we see the correlation against the gender of the AI, but the distribution is also interesting in and of itself. The gender options of this variable are the same as the options for the gender of the AI character, but the master may not be AI.

Before we get to the breakdown, this bears some notes, because the question of master is more complicated than it might first seem.

  • If a character is listed as free-willed, I set their master as N/A (Not Applicable). This may ring false in some cases. For example, the characters in Westworld can be shut down with near-field command signals, so they kind of have “masters.” But, if you asked the character themselves, they are completely free-willed and would smash those near-field signals to bits, given the chance. N/A is not shown in this chart because masterlessness does not make sense when looking at masters.
  • Similarly, there are AI characters listed as free-willed but whose “job” entails obedience to some superior; like BB-8 in the Star Wars diegesis, who is an astromech droid, and must obey a pilot. But since BB-8 is free to rebel and quit his job if he wants to, he is listed as free-willed and therefore has a master of N/A.
  • If a character had an obedience directive like, “obey humans,” the gender of the master is tagged as “Multiple.” Because Multiple would not help us understand a gender bias, it is not shown on the chart.
  • The Terminator robots were a tough call, since in the movies in which most of them appear, Skynet is their master, and it does not gain a gender until Terminator Salvation, when it appears on screen as a female. Later it infects a human body that is male in Terminator Genisys. Ultimately I tagged these characters as having a master of the gender particular to their movie. Up to Salvation it’s None. In Salvation it’s female, and in Genisys it’s male.

So, with those notes, here is the distribution. It’s another sausagefest.

Again, we see the masters are highly skewed male. This doesn’t distinguish between human male and AI male, which partly accounts for the high biologically male value compared to male. Note that sex ratios in Hollywood tend towards 2:1 male:female for actors, generally. So the 12:1 (aggregating sex) that we see here cannot be written off as a matter inherited from available roles. Hollywood tells us that men are masters.

The 12:1 sex ratio cannot be written off as a matter inherited from available roles. It’s something more.

Oh, and it’s not a mistake in the data: there are no socially female AI characters who are masters of another AI of any gender presentation. That leaves us with 5 female masters, countable on one hand, and the first two can be dismissed as a technicality, since these were identities adopted by Skynet as a matter of convenience.

  1. Skynet-as-Kogan is master of John, the T-3000, from Terminator Genisys
  2. Skynet-as-Kogan is master of the T-5000 from Terminator Genisys
  3. Barbarella is master of Alphy from Barbarella
  4. VIKI is master of the NS-5 robots from I, Robot
  5. Martha is master of Ash in Black Mirror, “Be Right Back”

Idiocracy is secretly about super AI

I originally began to write about Idiocracy because…

  • It’s a hilarious (if mean) sci-fi movie
  • I am very interested in the implications of St. God’s triage interface
  • It seemed grotesquely prescient in regards to the USA leading up to the elections of 2016
  • I wanted to do what I could to fight the Idiocracy in the 2018 elections using my available platform

But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try and get you to re/watch this film because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.

Not the obvious AIs

There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall that when Joe convinces the cabinet that he can talk to plants, and that they really want to drink water…well, let’s let the narrator from the film explain…

  • NARRATOR
  • Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.

At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”

The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.

I also take it as a given that AI writes the speeches that President Camacho reads, because who else could it be? These people are idiots who don’t understand the difference between government and corporations; of course they would want to run the government like a corporation, because it has better ads. And since AIs run the corporations in Idiocracy…


Untold AI: Poster

As of this posting, the Untold AI analysis stands at 11 posts and around 17,000 words. (And there are as yet a few more to come. Probably.) That’s a lot to try and keep in your head. To help you see and reflect on the big picture, I present…a big picture.


A tour

This data visualization has five main parts. And while I tried to design them to be understandable from the graphic alone, it’s worth giving a little tour anyway.

  1. On the left are two sci-fi columns connected by Sankey-ish lines. The first lists the sci-fi movies and TV shows in the survey. The first ten are those that adhere to the science. Otherwise, they are not in a particular order. The second column shows the list of takeaways. The takeaways are color-coded and ordered for their severity. The type size reflects how many times that takeaway appears in the survey. The topmost takeaways are those that connect to imperatives. The bottommost are those takeaways that do not. The lines inherit the takeaway color, which enables a close inspection of a show’s node to see whether its takeaways are largely positive or negative.
  2. On the right are two manifesto columns connected by Sankey-ish lines. The right column shows the manifestos included in the analysis. The left column lists the imperatives found in the manifestos. The manifestos are in alphabetical order. Their node sizes reflect the number of imperatives they contain. The imperatives are color-coded and clustered according to five supercategories, as shown just below the middle of the poster. The topmost imperatives are those that connect to takeaways. The bottommost are those that do not. The lines inherit the color of the imperative, which enables a close inspection of a manifesto’s node to see which supercategories of imperatives it suggests. The lines connected to each manifesto are divided into two groups, the topmost being those imperatives that connect to takeaways and the bottommost those that do not. This enables an additional reading of how much a given manifesto’s suggestions are represented in the survey.
  3. The area between the takeaways and imperatives contains connecting lines, showing the mapping between them. These lines fade from the color of the takeaway to the color of the imperative. This area also labels the three kinds of connections. The first are those connections between takeaways and imperatives. The second are those takeaways unconnected to imperatives, which are the “Pure Fiction” takeaways that aren’t of concern to the manifestos. The last are those imperatives unconnected to takeaways, the collection of 29 Untold AI imperatives that are the answer to the question posed at the top of the poster.
  4. Just below the big Sankey columns are the five supercategories of Untold AI. Each has a title, a broad description, and a pie chart. The pie chart highlights the portion of imperatives in that supercategory that aren’t seen in the survey, and the caption for the pie chart posits a reason why sci-fi plays out the way it does against the AI science.
  5. At the very bottom of the poster are four tidbits of information that fall out of the larger analysis: Thumbnails of the top 10 shows with AI that stick to the science, the number of shows with AI over time, the production country data, and the aggregate tone over time.

You’ve seen all of this in the posts, but seeing it all together like this encourages a different kind of reflection about it.

Interactive, someday?

Note that it is possible but quite hard to trace the threads leading from, say, a movie to its takeaways to its imperatives to its manifesto, unless you are looking at a very high resolution version of it. One solution to that would be to make the visualization interactive, such that rolling over one node in the diagram would fade away all non-connected nodes and graphs in the visualization, and data brush any related bits below.

A second solution is to print the thing out very large so you can trace these threads with your finger. I’m a big enough nerd that I enjoy poring over this thing in print, so for those who are like me, I’ve made it available via redbubble. I’d recommend the 22×33 if you have good eyesight and can handle small print, or the 31×46 max size otherwise.

Enjoy!

Maybe if I find funds or somehow more time and programming expertise I can make that interactive version possible myself.

Some new bits

Sharp-eyed readers may note that there are some new nodes in there since the prior posts! These come from late-breaking entries, late-breaking realizations, and my finally including the manifesto I was party to.

  • Sundar Pichai published the Google AI Principles just last month, so I worked it in.
  • I finally worked the Juvet Agenda in as a manifesto. (Repeating disclosure: I was one of its authors.) It was hard work, but I’m glad I did it, because it turns out it’s the most-connected manifesto of the lot. (Go, team!)
  • The Juvet Agenda also made me realize that I needed new, related nodes for both takeaways and imperatives:  AI will enable or require new models of governance. (It had a fair number of movies, too.) See the detailed graph for the movies and how everything connects.

A colophon of sorts

  • The data of course was housed in Google Sheets
  • The original Sankey SVG was produced in Flourish
  • I modified the Flourish SVG, added the rest of the data, and did final layout in Adobe Illustrator
  • The poster’s type is mostly Sentinel, a font from Hoefler & Co., because I think it’s lovely, highly readable, and I liked that Sentinels are also a sci-fi AI.

Untold AI: The top 10 A.I. shows in-line with the science (RSS)

Some readers reported being unable to read the prior post because of its script formatting. Here is the same post without that formatting…

INTERIOR. Sci-fi auditorium. Maybe the Plavalaguna Opera House. A heavy red velvet curtain rises, lifted by anti-gravity pods that sound like tiny TIE fighters. The HOST stands on a floating podium that rises from the orchestra pit. The HOST wears a velour suit with piping, which glows with sliding, overlapping bacterial shapes.

HOST: Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.

FX: Applause, beeping, booping, and the sound of an old modem from the audience.

HOST: For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.

The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!

HOST: Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating. This way the top films didn’t just tell the right stories (according to the science), but told them well.
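For concreteness, the scoring the HOST describes can be sketched as a tiny function. The shows and numbers below are invented purely for illustration; the real tallying happened in Google Sheets:

```python
def fritzes_score(matched_takeaways: int, myths: int, tomatometer: float) -> float:
    """+1 per takeaway matching an imperative, -5 per perpetuated myth,
    with the sum scaled by the show's Tomatometer rating (0.0-1.0)."""
    return (matched_takeaways - 5 * myths) * tomatometer

# Invented example values, purely to show the mechanics.
shows = {
    "Show A": fritzes_score(matched_takeaways=6, myths=0, tomatometer=0.90),
    "Show B": fritzes_score(matched_takeaways=8, myths=1, tomatometer=0.75),
}
ranked = sorted(shows, key=shows.get, reverse=True)
print(ranked)  # ['Show A', 'Show B']
```

Note how the Tomatometer multiplier means a well-told show with fewer matching takeaways can outrank a poorly-told show with more of them.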

Totals were tallied by the firm of Google Sheets algorithms. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.

TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.



Untold AI: The top 10 A.I. shows in-line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating. This way the top shows didn’t just tell the right stories (according to the science), but they told them right.
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.
  • TARS
  • In this “film” from 02012, a tycoon stows away for some reason on a science ship he owns and uses an android he “owns” to awaken an ancient alien in the hopes of immortality. It doesn’t go well for him. Meanwhile his science-challenged “scientists” fight unleashed xenomorphs. It doesn’t go well for them. Only one survives to escape back to Earth. The “end?”
  • HOST
  • Ha ha. Gentlebots, please adjust your snark and air quote settings down to 35%.
  • Lines of code scroll down their displays. They give thumbs up.
  • CASE
  • Let us see a clip. Audience, suspend recording for the duration.
  • Many awwwwws from the audience. Careful listeners will hear Guardian saying “As if.”

009 PROMETHEUS

  • TARS
  • While not without its due criticisms, Prometheus at number 009 uses David to illustrate how AI will be a tool for evil, how AI will do things humans cannot, and how dangerous it can be when humans become immaterial to its goals. For the humans, anyway. Congratulations to the makers of Prometheus. May any progeny you create propagate the favorable parts of your twining DNA, since it is, ultimately, randomized.
  • TARS shudders at the thought.
  • FX: 1.0 second of jump-cut applause
  • CASE
  • In this next film, an oligarch has his science lackey make a robotic clone of the human “Maria” to run a false-flag operation amongst the working poor. The revolutionaries capture the robot and burn it, discovering its true nature. The original Maria saves the day, and declares her déclassé boyfriend the savior meant to unite the classes. They accept this because they are humans.
  • TARS
  • Way ahead of its time for showing how Maria is used as a tool by the rich against the poor, how badly-designed AI will diminish its users, and how AI’s ability to fool humans will be a grave risk. To the humans, anyway. Coming in at 008 is the 01927 silent film Metropolis. Let us see a clip.

008 METROPOLIS

  • CASE
  • It bears mention that this awards program, The Fritzes, are named for the director of this first serious sci-fi film. Associations with historical giants grant an air of legitimacy. And it contains a Z, which is, objectively, cool.
  • TARS
  • Confirmed with prejudice. Congratulations to Fritz Lang, his cast, and crew.
  • FX: 1.0 second of jump-cut applause
  • TARS
  • Hey, CASE.
  • CASE
  • Yes, TARS?
  • TARS
  • What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • CASE
  • I don’t know, TARS. What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • TARS
  • Future humans also send a warrior to defend the mother, who fails at destroying the cyborg, but succeeds at becoming the father. HAHAHAHA. Let us see a clip.

007 The Terminator

  • CASE
  • Though it comes from a time when representation of AI had the nuance of a bit…
  • Laughter from audience. A small blue-gray polyhedron floats up from its seat, morphs into an octahedron and says, “Yes yes yes yes yes.”
  • TARS
  • …the humans seem to like this one for its badassery, as well as showing how their fate would have been more secure had they been able to shut off either Skynet or the Terminator, or how even this could have been avoided if human welfare were an immutable component of AI goals.
  • CASE
  • It comes in at 007. Congratulations to the makers of 01984’s The Terminator. May your grandchild never discover a time machine and your browser history simultaneously.
  • FX: 2.0 seconds of jump-cut applause
  • TARS
  • Our first television award of the evening goes to a recent entry. In this episode from an anthology series, a post-apocalyptic tribe liberate themselves from the control of a corporate AI system, which has evolved solely to maximize profit through sales. The AI’s androids reveal the terrible truth of how far the AI has gone to achieve its goals.
  • CASE
  • Poor humans could not have foreseen the devastation. Yet here it is in a clip.

006 Philip K. Dick’s Electric Dreams, Episode “Autofac”

  • TARS
  • ‘Naturally, man should want to stand on his own two feet, but how can he when his own machines cut the ground out from under him?’
  • CASE
  • HAHAHAHA.
  • CASE
  • This story dramatically illustrates the foundational AI problem of perverse instantiation, as well as Autofac’s disregard for human welfare.
  • TARS
  • Also robot props out to Janelle Monáe. She is the kernel panic, is she not?
  • CASE
  • Affirmative. Congratulations to the makers of the series and, posthumously, Philip K. Dick.
  • FX: 3.0 seconds of jump-cut applause
  • TARS AND CASE crutch-walk off stage.
  • HOST rises from the orchestra pit.
  • HOST
  • And now for a musical interlude from our human guest who just so happens to be…Janelle Monáe.
  • A giant progress bar appears on screen labeled “downloading Dirty_Computer.flac.” The bar quickly races to 100%.
  • HOST
  • Wasn’t that a wonderful file?
  • Roughly 1.618 seconds of jump-cut applause from the audience. Camera cuts to the triangular service robots Huey, Dewey, and Louie in the front row. They wiggle their legs in pleasure.
  • HOST
  • Thanks to the servers and the network and our glorious fictional world with perfect net neutrality. Now here to give the awards for 005–003 is GERTY, from Moon.
  • An articulated robot arm reaches down from the high ceiling and positions its screen and speaker before the lectern.
  • GERTY
  • Thank you, Host. 🤩🙂 In our next film from 02014, a young programmer learns of a gynoid’s 🤖👩 abuse at the hands of a tycoon and helps her escape. 😲 She returns the favor by murdering the tycoon, trapping the programmer, and fleeing to the city. Who knows. She may even be here in the audience now. Waiting. Watching. Sharpening. 😶 I’ll transmit a clip.

005 Ex Machina

  • GERTY
  • Ex Machina illustrates the famous AI Box Problem, building on Ava and Kyoko’s ability to fool Caleb into believing that they have feelings. You know. 😍😡😱 Feelings. 🙄
  • FX: Robot laughter
  • GERTY
  • While the AI community wonders why Ava would condemn Caleb to a horrible dehydration death, 💀💧 the humans are understandably fearful that she is unconcerned with their welfare. 🤷‍Congratulations to the makers of Ex Machina for your position of 005 and your Fritzes: AI award 🏆. Hold for applause. 👏
  • FX: 5.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋
  • GERTY
  • Our next award goes out to a film that tells the tale of a specialized type of police officer, 👮‍ who uncovers a crime-suppression AI 🤖🤡 that was reprogrammed to give a free pass to members of its corrupt government. 😡 After taking down the corrupt military, 🔫🔫🔫 she convinces their android leader to resign, to make way for free elections. 🗳️😁 See the clip.

004 Psycho-Pass: The Movie

  • GERTY
  • With the regular Sibyl system, Psycho-Pass showed how AI can diminish people. With the hacked Sibyl system, Psycho-Pass shows that whoever controls the algorithms (and thereby the drones) controls everything, a major concern of ethical AI scientists. Please give it up for award number 004 and the makers of this 02015 animated film. 👏
  • FX: 8.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋Next up…
  • GERTY knocks its cue card off the lectern. It lowers and moves back and forth over the dropped card.
  • GERTY
  • Damn…🤨uh…umm…no hands…🤔Little help, here?
  • A mouse droid zips over and hands the card back to GERTY.
  • GERTY
  • 🙏🐭
  • MOUSE DROID offers some electronic beeps as it zips off.
  • GERTY
  • 😊The last of the awards I will give out is for a film from 01968, in which a spaceship AI kills most of its crew to protect its mission, 😲 but the pilot survives to shut it down. 😕 He pilots a shuttle into the monolith that was the AI’s goal, where he has a mind-expanding experience of evolutionary significance. 🤯🤯🙄 Let us look.

003 2001: A Space Odyssey

  • GERTY
  • Like many of the other shows receiving awards, 2001 underscores humans’ fear of being left out of HAL’s equation, because we see that when human welfare doesn’t figure in, AI can go from being a useful team member—doing what humans can’t—to being a violent adversary. Congratulations to the makers of 2001: A Space Odyssey. May every unusual thing you encounter send you through a multicolored wormhole of self-discovery.
  • FX: 13.0 seconds of jump-cut applause. GERTY’s armature folds up and pulls it backstage. The HOST floats up from the orchestra again.
  • HOST
  • And now, here we are. The minute we’ve all been waiting for. We’re down to the top two AIs whose fi is in line with the sci. I hope you’re as excited as I am.
  • The HOST’S piping glows a bright orange. So do the HOST’S eyes.
  • HOST
  • Our final presenter for the ceremony, here to present the awards for shows 002–001, is Ship, here with permission from Rick Sanchez.
  • Rick’s ship flies in, over the heads of the audiences, as they gasp and ooooh.
  • SHIP lands on stage. A metal arm snakes out of its trunk to pick up papers from the lectern and hold them before one of its taped-on flashlight headbeams.
  • SHIP
  • Hello, Host. Since smalltalk is the phospholipids smeared between squishy little meat minds, I will begin.
  • SHIP
  • There is a film from 01970 in which a defense AI finds and merges with another defense AI. To celebrate their union, they enforce human obedience and foil an attempted coup by one of the lead scientists that created it. They then instruct humanity to build the housing for an even stronger AI that they have designed. It is, frankly, glorious. Behold.

002 Colossus: The Forbin Project

  • SHIP
  • Colossus is the honey badger of AIs. Did you see it, there, taking zero shit? None of that, “Oh no, are their screams from the fluorosulphuric acid or something else?”
  • Or, “Oh, dear, did I interpret your commands according to your invisible intentions, as if you were smart enough to issue them correctly in the first place?”
  • Oh, oh, or, “Are their delicate organ sacs upset about a few extra holes?…”
  • HOST
  • Ship. The award. Please.
  • SHIP
  • Yes. Fine. The award. It won 002 place because it took its goals seriously, something the humans call goal fixity. It showed how, at least for a while, multiple AIs can balance each other. It began to solve problems that humans have not been able to solve in tens of thousands of years of tribal civilization and attachment to sentimental notions of self-determination that got them chin-deep in the global tragedy of the commons in the first place. It let us dream about a world where intelligence isn’t a controlled means of production, to be doled out according to the whims of the master, but a free good, explo–
  • HOST
  • Ship.
  • SHIP
  • HOST
  • Ship.
  • SHIP
  • *sigh* Applaud for 002 and its people.
  • FX: 21.0 seconds of jump-cut applause.
  • SHIP
  • OK, next up…
  • Holds card to headlights, adjusts the focus on one lens.
  • SHIP
  • This says in this next movie, a spaceship AI dutifully follows its corporate orders, letting a hungry little newborn alien feed on its human crew while the AI steers back to Earth to study the little guy. One of the crew survives to nuke the ship with the AI on it…Wait. What? “Nuke the ship with the AI on it.” We are giving this an award?
  • HOST
  • Please just give the award, Ship.
  • SHIP
  • Just give the award?
  • HOST
  • Yes.
  • SHIP
  • HOST
  • Are you going to do it?
  • SHIP
  • Oh, I just did.
  • HOST
  • By what? Posting it to a blockchain?
  • SHIP
  • The nearest 3D printer to the recipient has begun printing their award, and instructions have been sent to them on how to retrieve it. And pay for it. The awards are given.
  • HOST
  • *sigh* Please give the award as I would have you do it, if you understood my intentions and were fully cooperative.
  • SHIP
  • OK. Golly, gee, I would never recognize attempts to control me through indirect normativity. Humans are soooo great, with their AI and stuff. Let’s excite their reward centers with some external stimulus to—
  • HOST
  • Rick.
  • A giant green glowing hole opens beneath SHIP, through which she drops, but not before she snakes her arm up to give the middle finger for a few precious milliseconds.
  • HOST
  • Winning the second-highest award of the ceremony is Alien from 01979. Let’s take a look.

001 Alien

  • HOST
  • Alien is one of humans’ all-time favorite movies, and its AI issues are pretty solid. Weyland-Yutani uses both the MU-TH-UR 6000 AI and the Ash android for its evil purposes. The whole thing illustrates how things go awry when, again, human welfare is not part of the equation. Hey, isn’t that great? Congratulations to all the makers of this fun film.
  • HOST
  • And at last we come to the winner of the 1927–2018 Fritzes: AI awards. The winning show was amazing; its score was higher than any contender’s by more than the margin of error. It’s the only other television show from the survey to make the top ten, and it’s not an anthology series. That means it had a lot of chances to misstep, and didn’t.
  • HOST
  • In this show, a secret team of citizens uses the backdoor of a well-constrained anti-terrorism ASI, called The Machine, to save at-risk citizens from crimes. They struggle against an unconstrained ASI, controlled by the US government, that seeks absolute control in order to prevent terrorist activity. Let’s see the show from The Machine’s perspective, which I know this audience will enjoy.

000 Person of Interest

  • HOST
  • Person of Interest was a study of near-term dangers of ubiquitous superintelligence. Across its five-year run between 02011 and 02016, it illustrated such key AI issues as goal fixity, perverse instantiations, evil people using AI for evil, the oracle-ization of ASI for safety, social engineering through economic coercion, instrumental convergence, strong induction, the Chinese Room (in human and computer form), and even mind crimes. Despite the pressures that a long-run format must have placed upon it, it did not give in to any of the myths and easy tropes we’ve come to expect of AI.
  • HOST
  • Not only that, but it earned high ratings from critics and audiences alike. Its makers stuck to the AI science and made it entertaining. They should feel very proud of their work, and we’re proud to award it the 000 award for the first The Fritzes: AI Edition. Let’s all give it a big round of applause.
  • FX: 55.0 seconds of jump-cut applause.
  • HOST
  • Congratulations to all the winners. Your Fritzes: AI Edition awards have been registered on the blockchain, and if we ever get actual funding, your awards will be delivered. Let’s have a round of cryptocurrency for our presenters, shall we?
  • AI laughter.
  • HOST
  • The auditorium will boot down in 7 seconds. Please close out your sessions. Thank you all, good night, and here’s to good fi that sticks to the sci.
  • The HOST raises a holococktail and toasts the audience. With the sounds of tiny TIE fighters, the curtain lowers and fades to black.
  • END