Colossus / Unity / World Control, the AI

Now it’s time to review the big technology: the AI. As usual, I’ll start by describing the technology and then build an analysis from that description.

Part of the point of Colossus: The Forbin Project—and indeed, many AI stories—is how the AI changes over time. So the description of Colossus/Unity must happen in stages and across its various locations.

A reminder on the names: When Colossus is turned on, it is called Colossus. It merges with Guardian and calls itself Unity. When it addresses the world, it calls itself World Control, but still uses the Colossus logo. I try to use the name of what the AI was at that point in the story, but sometimes when speaking of it in general I’ll defer to the title of the film and call it “Colossus.”

The main output: The nuclear arsenal

Part of the initial incident that enables Colossus to become World Control is that it is given control of the U.S. nuclear arsenal. In this case, it can only launch them. It does not have the ability to aim them.

Or ride them. From Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb

“Fun” fact: At its peak, two years before this film was made, the US had 31,255 nuclear weapons. As of 2019 it “only” has 3,800. Continuing on…

Surveillance inputs

Forbin explains in the Presidential Press Briefing that Colossus monitors pretty much everything.

  • Forbin
  • The computer center contains over 100,000 remote sensors and communication devices, which monitor all electronic transmissions such as microwaves, laser, radio and television communications, data communications from satellites all over the world.

Individual inputs and outputs: The D.C. station

At that same Briefing, Forbin describes the components of the station set up for the office of the President. 

  • Forbin
  • Over here we have one of the many terminals hooked to the computer center. Through this [he says, gesturing up] Colossus can communicate with us. And through this machine [he says, turning toward a keyboard/monitor setup], we can talk to it.

The ceiling-mounted display has four scrolling light boards that wrap around its large, square base (maybe 2 meters on an edge). A panel of lights on the underside illuminates the terminal below it, which echoes the display with teletype output and provides a monitor for additional visual output.

The input station to the left is a simple terminal and keyboard. Though we never see the terminal display in the film, it’s reasonable to presume it’s a feedback mechanism for the keyboard, so that operators can correct input if needed before submitting it to Colossus for a response. Most often there is some underling sitting at an input terminal, taking dictation from Forbin or another higher-up.

Individual inputs and outputs: Colossus Programming Office

The Colossus Programming Office is different from what we see in D.C. (Trivia: the exterior shot is the Lawrence Hall of Science, a few minutes away from where I live, in Berkeley, so shout-out to science nerds and Liam Piper.)

Colossus manifests here in a large, sunken, two-story amphitheater-like space. The upper story is filled with computers with blinkenlights. In the center of the room we see the same 4-sided, two-line scrolling sign. Beneath it are two output stations side by side on a rotating dais. These can display text and graphics. The AI is otherwise disembodied, having no avatar through which it speaks.

The input station in the CPO is on the first tier. It has a typewriter-like keyboard for entering text as dictated by the scientist-in-command. There is an empty surface on which to rest a lovely cup of tea while interfacing with humanity’s end.

Markham: Tell it exactly what it can do with a lifetime supply of chocolate.

The CPO is upgraded following instructions from Unity in the second act of the movie. Cameras with microphones are installed throughout the grounds and in missile silos. Unity can control their orientation and zoom. The outdoor cameras have lights.

  • Forbin
  • Besides these four cameras in here, there are several others. I’ll show you the rest of my cave. With this one [camera] you can see the entire hallway. And with this one you can follow me around the corner, if you want to…

Unity also has an output terminal added to Forbin’s quarters, where he is kept captive. This output terminal also spins on a platform, so Unity can turn the display to face Forbin (and Dr. Markham) wherever they happen to be standing or lounging.

This terminal has a teletype printer, and it makes the teletype sound, but the paper never moves.

Shortly thereafter, Unity has the humans build it a speaker according to spec, allowing it to speak with a synthesized voice, a scary thing that would not be amiss coming from a Terminator skeleton or a Spider Tank. Between this speaker and ubiquitous microphones, Unity is able to conduct spoken conversations.

Near the very end of the film, Unity has television cameras brought into the CPO so it can broadcast Forbin as he introduces it to the world. Unity can also broadcast its voice and graphics directly across the airwaves.

Capabilities: The Foom

A slightly troubling aspect of the film is that the AI’s intelligence is not really demonstrated, just spoken about. After the Presidential Press Briefing, Dr. Markham tells Forbin that… 

  • Markham
  • We had a power failure in one of the infrared satellites about an hour and a half ago, but Colossus switched immediately to the backup system and we didn’t lose any data. 

That’s pretty basic if-then automation. Not very impressive. After the merger with Guardian, we hear Forbin describe the speed at which it is building its foundational understanding of the world…

  • Forbin
  • From the multiplication tables to calculus in less than an hour

Shortly after that, he tells the President about their shared advancements.

  • Forbin
  • Yes, Mr. President?
  • President
  • Charlie, what’s going on?
  • Forbin
  • Well apparently Colossus and Guardian are establishing a common basis for communication. They started right at the beginning with a multiplication table.
  • President
  • Well, what are they up to?
  • Forbin
  • I don’t know sir, but it’s quite incredible. Just the few hours that we have spent studying the Colossus printout, we have found a new statement in gravitation and a confirmation of the Eddington theory of the expanding universe. It seems as if science is advancing hundreds of years within a matter of seconds. It’s quite fantastic, just take a look at it.

We are given to trust Forbin in the film, so we don’t doubt his judgments. But these bits are all we have to believe that Colossus knows what it’s doing as it grabs control of the fate of humanity, that its methods are sound. This plays in heavily when we try to evaluate the AI.

Is Colossus / Unity / World Control a good AI?

Let’s run Colossus by the four big questions I proposed in Evaluating strong AI interfaces in sci-fi. The short answer is: obviously not a good AI, but if circumstances are demonstrably dire, well, maybe a necessary one.

Is it believable? Very much so.

It is quite believable, given the novum of general artificial intelligence. There is plenty of debate about whether that’s ultimately possible, but if you accept that it is—and that Colossus is one with the goal of preventing war—this all falls out, with one major exception.

Not from Colossus: The Forbin Project

The movie asks us to believe that the scientists and engineers would make it impossible for anyone to unplug the thing once circumstances went pear-shaped. Who thought this was a good idea? This is not a trivial problem (Who gets to pull the plug? Under what circumstances?) but it is one we must solve, for reasons that Colossus itself illustrates.

That aside, the rest of the film passes a gut check. It is believable that…

  • The government seeks a military advantage by handing weapons control to an AI 
  • The first public AGI finds other, hidden ones quickly
  • The AGI not only finds the other AGI more interesting than humans (since it can keep up) but also learns much from an “adversarial” relationship
  • The AGIs might choose to merge
  • An AI could choose to keep its lead scientist captive in self-interest
  • An AI would provide specifications for its own upgrades and even re-engineering
  • An AI could reason itself into using murder as a tool to enforce compliance

That last one calls for some explication. How can murder be reasonable to an AI with a virtuous goal? Shouldn’t an ASI always be constrained to opt for non-violent methods? Yes, ideally, it would. But we already have global-scale evidence that even good information is not enough to convince the superorganism of humanity to act as it should.

Rational coercion

Imagine for a moment that a massively-distributed ASI had impeccable evidence that global disaster was imminent, and that what had to be done to avert it, though difficult, was unavoidable. What could it say to get people to do those difficult things?

Now understand that we already have an ASI called “the scientific community.” Sure, it’s made up of people with real intelligence, but those people have self-organized into a body that produces results far greater and more intelligent than any of them acting alone, or even all of them acting in parallel.

Not from Colossus: The Forbin Project

Now understand that this “ASI” has already given us impeccable evidence and clear warnings that global disaster is imminent, in the shape of the climate emergency, and even laid out frameworks for what must be done. Despite this overwhelming evidence and clear path forward, some non-trivial fraction of people, global leaders, governments, and corporations are, right now, doing their best not just to ignore it, but to discredit it, undo major steps already taken, and even make the problem worse. Facts and evidence simply aren’t enough, even when it’s in humanity’s long-term interest. Action is necessary.

As it stands, the ASI of the scientific community doesn’t have controls to a weapons arsenal. If it did, and it held some version of utilitarian ethics, it would have to ask itself: Would it be more ethical to let everyone anthropocene life into millions of years of misery, or to use those weapons in a few tactical attacks now to coerce people into doing what absolutely must be done?

The exceptions we make

Is it OK for an ASI to cause harm to an unconsenting population in the service of a virtuous goal? Well, for comparison, realize that humans already grant several such exceptions.

One is the simple transactional weighing of short-term damage against long-term benefits. We accept that our skin must be damaged by hypodermic needles to give blood and have medicines injected. We invest money expecting it to pay dividends later. We delay gratification. We accept some short-term costs when the payout is better.

Another is that we agree it is OK to perform interventions on people who are suffering from addiction, or who are mentally unsound and a danger to themselves or others. We act on their behalf, and believe this is OK.

A last one worth mentioning is when we deem a person unable either to judge what is best for themselves or to act in their own best interest. Some of these cases are simple, like toddlers, or a person who has passed out from smoke inhalation or inebriation, is in a coma, or is even just deeply asleep. We act on their behalf, and believe this is OK.

Not from Colossus: The Forbin Project

We also make reasonable trade-offs between the harshness of an intervention and the costs of inaction. For instance, if a toddler is stumbling towards a busy freeway, it’s OK to snatch them back forcefully, if it saves them from being struck dead or mutilated. They will cry for a while, but it is the only acceptable choice. Colossus may see the threat of war as just such a scenario. The speech that it gives as World Control hints strongly that it does.

Colossus may further reason that imprisoning rather than killing dissenters would enable a resistance class to flourish and embolden more sabotage attempts from the un-incarcerated, or that it cannot afford to waste resources on incarceration, knowing some large portion of humans would resist. It instills terror as a mechanism of control. I wouldn’t quite describe it as a terrorist, since it does not bother with hiding. It is too powerful for that. It’s more of a brutal dictator.

Precita Park HDR PanoPlanet, by DP review user jerome_m

A counter-argument might be that humans should be left alone to just human, accepting that we will sink or learn to swim, but that the consequences are ours to choose. But if the ASI is concerned with life, generally, it also has to take into account the rest of the world’s biomass that we are affecting in unilaterally negative ways. We are not an island. Protecting us entails protecting the life support system that is this ecosystem. Colossus, though, seems to optimize simply for preventing war, and to be unconcerned with indirect normativity arguments about how humans want to be treated.

So, it’s understandable that an ASI would look at humanity and decide that it meets the criteria of inability to judge and act in its own best interest. And, further, that compliance must be coerced.

Is it safe? Beneficial? It depends on your time horizons and predictions

In the criteria post, I couched this question in terms of its goals. Colossus’ goals are, at first blush, virtuous. Prevent war. It is at the level of the tactics that this becomes a more nuanced thing.

Above I discussed accepting short-term costs for long-term benefits, and a similar thing applies here. It is not safe in the short term for anyone who wishes to test Colossus’ boundaries. They are firm boundaries. Colossus was programmed to prevent war, and history shows that these proximal measures are necessary to achieve that ultimate goal. Otherwise, it seems inconvenient, but safe.

It’s not just deliberate disobedience, either. The Russians said they were trying to reconnect Guardian when the missiles were flying, and just couldn’t do it in time. That mild bit of incompetence cost them the Sayon Sibirsk Oil Complex and all the speculative souls that were there at the time. This should run afoul of most people’s ethics. They were trying, and Colossus still enforced an unreasonable deadline with disastrous results.

If Colossus could question its goals, and there’s no evidence it can, any argument from utilitarian logic would confirm the tactic. War has killed between 150 million and 1 billion people in human history. For a thing that thinks in numbers, sacrificing a million people to prevent humanity from killing another billion of its own is not just a fair trade, but a fantastic rate of return.

Because fuck this.
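To make the grim math explicit, here is the back-of-the-envelope version of that ledger: my own arithmetic on the figures cited above, not anything Colossus is shown computing in the film.

    # A crude utilitarian ledger, using the war-death figures cited above.
    # (Illustrative arithmetic only -- not a calculation from the film.)
    war_deaths_low = 150_000_000      # low estimate of historical deaths from war
    war_deaths_high = 1_000_000_000   # high estimate
    cost_of_coercion = 1_000_000      # the hypothetical million sacrificed to enforce compliance

    print(war_deaths_low / cost_of_coercion)    # 150.0   "return" at the low estimate
    print(war_deaths_high / cost_of_coercion)   # 1000.0  "return" at the high estimate

Even at the low estimate the trade looks 150-to-1 in Colossus’ favor, which is exactly the kind of bloodless accounting the film wants us to be uneasy about.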

In the middle-to-long term, it’s extraordinarily safe, from the point of view of warfare, anyway. That 150 million to 1 billion line item is just struck from the global future profit & loss statement. It would be a bumper crop of peace. There is no evidence in the film that new problems won’t appear—and other problems won’t be made worse—from a lack of war, but Colossus isn’t asked and doesn’t offer any assurances in this regard. Colossus might be the key to fully automated luxury gay space communism. A sequel set in a thousand years might just be the video of Shiny Happy People playing over and over again.

In the very long-long term, well, that’s harder to estimate. Is humanity free to do whatever it wants outside of war? Can it explore the universe without Colossus? Can it develop new medicines? Can it suicide? Could it find creative ways to compliance-game the law of “no war?” I imagine that if World Control ran for millennia and managed to create a wholly peaceful and thriving planet Earth, but then we encountered a hostile alien species, we would be screwed for a lack of war skills, and for being hamstrung from even trying to redevelop them and mount a defense. We might look like a buffet to the next passing Reavers. Maaaybe Colossus can interpret the aliens as being in scope of its directives, or maaaaaaybe it develops planetary defenses in anticipation of this possibility. But we are denied a glimpse into these possible futures. We only got this one movie. Maybe someone should run parallel Microscope scenarios, compare notes, and let me know what happens.

Only with Colossus, not orcs. Hat tip to rpggeek.com user Charles Simon (thinwhiteduke) for the example photo.

Instrumental convergence

It’s worth noting that Forbin and his team had done nothing to prevent what the AI literature terms “instrumental convergence”: the set of self-improvements and resource grabs that nearly any AGI could reasonably attempt in order to maximize its goal, but which run the risk of it getting out of control. The full list is on the criteria post, but specifically, Colossus does all of the following.

  • Improve its ability to reason, predict, and solve problems
  • Improve its own hardware and the technology to which it has access
  • Improve its ability to control humans through murder
  • Aggressively seek to control resources, like weapons

This touches on the weirdness that Forbin is blindsided by these things, when the thing should have been contained against any of them from the beginning, no human oversight required. This could have been addressed and fixed with a line or two of dialog.

  • Markham
  • But we have inhibitors for these things. There were no alarms.
  • Forbin
  • It must have figured out a way to disable them, or sneak around them.
  • Markham
  • Did we program it to be sneaky?
  • Forbin
  • We programmed it to be smart.

So there are a lot of philosophical and strategic problems with Colossus as a model. It’s not clearly one or the other. Now let’s put that aside and just address its usability.

Is it usable? There is some good.

At a low level, yes. Interaction with Colossus is through language, and it handles natural language just fine, whether as text chat or spoken conversation. The interaction sequences are all reasonable. There is no moment where it misunderstands the humans’ inputs or provides hard-to-understand outputs. It even manages a joke once.

Even when it only speaks through the scrolling-text display boards, the accompanying teletype sound acts as a cue for anyone nearby that it has said something warranting attention. If no one is around to hear that, the paper trail it leaves via its printers provides a record. That’s all good for knowing when it speaks and what it has said.

Its locus of attention is also apparent. Its cameras sit on swivels and have red “recording” lights, which help the humans know where it is “looking.” This thwarts the control-by-paranoia effect of the panopticon (more on that, if you need it, in this Idiocracy post). It is easy to imagine how this could be used for deception, but as long as it’s honestly signaling its attention, this is a usable feature.

A last nice bit: I have argued in the past that computer representations, especially voices, ought to rest on the canny rise, and this one does just that. I also like that its lack of an avatar helps avoid mistaken anthropomorphism on the part of its users.

Oh dear! Oh dear!

Is it usable? There is some awful.

One of the key tenets of interaction design is that the interface should show the state of the system at any time, to allow a user to compare that against the desired state and formulate a plan on how to get from here to there. With Colossus, much of what it’s doing, like monitoring the world’s communication channels and, you know, preventing war, is never shown to us. The one display we do spend some time with, the routing board, is unfit for the task. And of course, its use of deception (in letting the humans think they have defeated it right before it makes an example of them) is the ultimate in unusability because of a hidden system state. 

The worst violation of usability is that it is, from the moment it is turned on, uncontrollable. It’s like that stupid sitcom trope of “No matter how much I beg, do not open this door.” Safewords exist for a reason, and this thing was programmed without one. There are arguments already spelled out in this post that human judgment got us into the Cold War mess, and that if we control it, it cannot get us out of our messes. But until we get good at making good AI, we should have a panic button available. 

ASI exceptionalism

This is not a defense of authoritarianism. I really hope no one reads this and thinks, “Oh, if I only convince myself that a population lacks judgment and willpower, I am justified in subjecting it to brutal control.” Because that would be wrong. The things that make this position slightly more acceptable from a superintelligence are…

  1. We presume its superintelligence gives it superhuman foresight, so it has a massively better understanding of how dire things really are, and thereby can gauge an appropriate level of response.
  2. We presume its superintelligence gives it superhuman scenario-testing abilities, able to create most-effective plans of action for meeting its goals.
  3. We presume that a superintelligence has no selfish stake in the game other than optimizing its goal sets within reasonable constraints. It is not there for aggrandizement or narcissism or identity politics like a human might be.

Notably, by definition, no human can have these same considerations, despite self-delusions to the contrary.

But later that kid does end up being John Connor.

Any humane AI should bring its users along for the ride

It’s worth remembering that while the Cold War fears embodied in this movie were real—we had enough nuclear ordnance to destroy all life on the surface of the earth several times over and cause a nuclear winter to put the Great Dying to shame—we actually didn’t need a brutal world regime to walk back from the brink. Humans edged their way back from the precipice we were at in 1968, through public education, reason, some fearmongering, protracted statesmanship, and Stanislav Petrov. The speculative dictatorial measures taken by Colossus were not necessary. We made it, if just barely. большое Вам спасибо (thank you very much), Stanislav.

What we would hope is that any ASI whose foresight and plans run so counter to our intuitions of human flourishing and liberty would take some of its immense resources to explain itself to the humans subject to it. It should explain its foresights. It should demonstrate why it is certain of them. It should walk through alternate scenarios. It should explain why its plans and actions are the way they are. It should do this in the same way we would explain to that toddler we just snatched from the side of the highway—as we soothe them—why we had to yank them back so hard. This is part of how Colossus fails: It just demanded, and then murdered people when demands weren’t met. The end result might have been fine, but to be considered humane, it should have taken better care of its wards.


Gendered AI: Category of Intelligence

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss categorically how intelligent the AI appears to be.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Intelligence

AI literature distinguishes between three levels.

  • Narrow AI is smart but only in a very limited domain and cannot use its knowledge in one domain to build intelligence in novel domains. The Spider Tank from Ghost in the Shell is a narrow AI.
  • General AI is human-like in its knowledge, memory, thinking, and learning. Aida from Agents of S.H.I.E.L.D. possesses a general intelligence.
  • Super AI is inhumanly smart, outthinking and outlearning us by orders of magnitude. Deep Thought from The Hitchhiker’s Guide to the Galaxy is a super AI.

The overwhelming majority of sci-fi AI displays a general intelligence.

Gendered AI: Goodness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss goodness.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Goodness vs. Evilness

Goodness is a very crude estimation of how good or evil the AI seems to be. It’s wholly subjective, and as such it’s only useful for spotting patterns rather than for ethical precision.

If you’re looking at the Google Sheet, note that I originally called it “alignment” because of old D&D vocabulary, but honestly it does not map well to that system at all.

  • Very good are AI characters that seem virtuous and whose motivations are altruistic. Wall·E is very good.
  • Somewhat good are characters who lean good, but whose goodness may be inherited from their master, or whose behavior occasionally is self-serving or other-damaging. JARVIS from Iron Man is somewhat good.
  • Neutral or mixed characters may be true to their principles but hostile to members of outgroups; or exhibit roughly-equal variations in motivations, care for others, and effects. Marvin from The Hitchhiker’s Guide to the Galaxy is neutral.
  • Somewhat evil are characters who lean evil, but whose evil may be inherited from their master, or whose behavior is occasionally altruistic or nurturing. A character who must obey another is limited to somewhat evil. David from Prometheus is somewhat evil.
  • Very evil are AI characters whose motivations are highly self-serving or destructive. Skynet from The Terminator series is very evil, given that whole multiple-time-traveling-attempts-at-genocide thing.

Though tilted slightly more toward evil than good, the survey shows a roughly even split between evil, good, and neutral AI characters.

Gendered AI: Germane-ness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss how germane the AI character’s gender is to the plot of the story in which they appear.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Germane-ness

Is the AI character’s gender germane to the plot? This aspect was tagged to test the question of whether characters are by default male, and only made female when there is some narrative reason for it. (Which would be shitty and objectifying.) To answer such a question we would first need to identify those characters that seem to need the gender they have, and look at the sex ratio of what remains.

Example: A human is in love with an AI. This human is heteroromantic and male, so the AI “needs” to be female. (Samantha in Her by Spike Jonze, pictured below).

If we bypass examples like this, i.e. of characters that “need” a particular gender, the gender of those remaining ought to be, by exclusion, arbitrary. This set could be any gender. But what we see is far from arbitrary.

Before I get to the chart, two notes. First, let me say, I’m aware it’s a charged statement to say that any character’s gender is not germane. Given modern identity and gender politics, every character’s gender (or lack thereof, in the case of AI) is of interest to us, with this study being a fine and at-hand example. So to be clear, what I mean by not germane is that it is not germane to the plot. The gender could have been switched and, say, only pronouns in the dialogue would need to change. This was tagged in three ways.

  • Not: Where the gender could be changed and the plot not affected at all. The gender of the AI vending machines in Red Dwarf is listed as not germane.
  • Slightly: Where there is a reason for the gender, such as having a romantic or sexual relation with another character who is interested in the gender of their partners. It is tagged as slightly germane if, with a few other changes in the narrative, a swap is possible. For instance, in the movie Her, you could change the OS to male, and by switching Theodore to a non-heterosexual male or a non-homosexual woman, the plot would work just fine. You’d just have to change the name to Him and make all the Powerpuff Girls fans needlessly giddy.
  • Highly: Where the plot would not work if the character was another sex or gender. Rachael gave birth between Blade Runner and Blade Runner 2049. Barring some new rule for the diegesis, this could not have happened if she was male, nor (spoiler) would she have died in childbirth, so 2049 could not have happened the way it did.

Second, note that this category went through a sea-change as I developed the study. At first, for instance, I tagged the Stepford Wives as Highly Germane, since the story is about forced gender roles of married women. My thinking was that historically, husbands have been the oppressors of wives far more than the other way around, so to change their gender is to invert the theme entirely. But I later let go of this attachment to purity of theme, since movies can be made about edge cases and even deplorable themes. My approval of their theme is immaterial.

So, the chart. Given those criteria, the gender of characters is not germane the overwhelming majority of the time.

At the time of writing, there are only six characters that are tagged as highly germane, four of which involve biological acts of reproduction. (And it would really only take a few lines of dialogue hinting at biotech to overcome this.)

  • XEM
  • A baby? But we’re both women.
  • HIR
  • Yes, but we’re machines, and not bound by the rules of humanity.
  • HIR lays her hand on XEM’s stomach.
  • HIR’s hand glows.
  • XEM looks at HIR in surprise.
  • XEM
  • I’m pregnant!

Anyway, here are the four breeders.

  • David from Uncanny
  • Rachael from Blade Runner (who is revealed to have made a baby with Deckard in the sequel Blade Runner 2049)
  • Deckard from Blade Runner and Blade Runner 2049
  • Proteus IV from the disturbing Demon Seed

The last two highly germane are cases where a robot was given a gender in order to mimic a particular living person, and in each case that person is a woman.

  1. Maria from Metropolis
  2. Buffybot from Buffy the Vampire Slayer

I admit that I am only, say, 51% confident in tagging these as highly germane, since you could change the original character’s gender. But since this is such a small percentage of the total, and would not affect the original question of a “default” gender either way, I didn’t stress too much about finding some ironclad way to resolve this.


Gendered AI: Gender of master

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss the gender of the AI’s master.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Gender of Master

In the prior post I shared the distributions for subservience. And while most sci-fi AI are free-willed, what about the rest? Those poor digital souls who are compelled to obey someone, someones, or some thing? What is the gender of their master?

Of course this becomes much more interesting when later we see the correlation against the gender of the AI, but the distribution is also interesting in and of itself. The gender options of this variable are the same as the options for the gender of the AI character, but the master may not be AI.

Before we get to the breakdown, this bears some notes, because the question of master is more complicated than it might first seem.

  • If a character is listed as free-willed, I set their master as N/A (Not Applicable). This may ring false in some cases. For example, the characters in Westworld can be shut down with near-field command signals, so they kind of have “masters.” But, if you asked the characters themselves, they are completely free-willed and would smash those near-field signals to bits, given the chance. N/A is not shown in this chart because masterlessness does not make sense when looking at masters.
  • Similarly, there are AI characters listed as free-willed but whose “job” entails obedience to some superior; like BB-8 in the Star Wars diegesis, who is an astromech droid, and must obey a pilot. But since BB-8 is free to rebel and quit his job if he wants to, he is listed as free-willed and therefore has a master of N/A.
  • If a character had an obedience directive like, “obey humans,” the gender of the master is tagged as “Multiple.” Because Multiple would not help us understand a gender bias, it is not shown on the chart.
  • The Terminator robots were a tough call, since in the movies in which most of them appear, Skynet is their master, and it does not gain a gender until Terminator Salvation, when it appears on screen as a female. Later it infects a human body that is male in Terminator Genisys. Ultimately I tagged these characters as having a master of the gender particular to their movie. Up to Salvation it’s None. In Salvation it’s female, and in Genisys it’s male.

So, with those notes, here is the distribution. It’s another sausagefest.

Again, we see the masters are highly skewed male. This doesn’t distinguish between human male and AI male, which partly accounts for the high biologically male value compared to male. Note that sex ratios in Hollywood tend towards 2:1 male:female for actors, generally. So the 12:1 (aggregating sex) that we see here cannot be written off as a matter inherited from available roles. Hollywood tells us that men are masters.

The 12:1 sex ratio cannot be written off as a matter inherited from available roles. It’s something more.
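For concreteness, the “aggregating sex” tally works roughly like this. The counts below are placeholders I made up for illustration, not the real numbers; those live in the Gendered AI source sheet.

    # Sketch of the "aggregating sex" ratio. Counts are invented placeholders --
    # see the Gendered AI source data (Google Sheet) for the actual tallies.
    masters = {
        "male": 48, "biologically male": 12,     # hypothetical counts
        "female": 3, "biologically female": 2,   # hypothetical counts
    }
    male_total = masters["male"] + masters["biologically male"]
    female_total = masters["female"] + masters["biologically female"]
    print(f"{male_total / female_total:.0f} : 1")   # 12 : 1, against Hollywood's ~2:1 baseline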

Oh, and it’s not a mistake in the data: there are no socially female AI characters who are masters of another AI of any gender presentation. That leaves us with 5 female masters, countable on one hand, and the first two can be dismissed as a technicality, since these were identities adopted by Skynet as a matter of convenience.

  1. Skynet-as-Kogan is master of John, the T-3000, from Terminator Genisys
  2. Skynet-as-Kogan is master of the T-5000 from Terminator Genisys
  3. Barbarella is master of Alphy from Barbarella
  4. VIKI is master of the NS-5 robots from I, Robot
  5. Martha is master of Ash in Black Mirror, “Be Right Back”

Idiocracy is secretly about super AI

I originally began to write about Idiocracy because…

  • It’s a hilarious (if mean) sci-fi movie
  • I am very interested in the implications of St. God’s triage interface
  • It seemed grotesquely prescient in regards to the USA leading up to the elections of 2016
  • I wanted to do what I could to fight the Idiocracy in the 2018 elections using my available platform

But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try to get you to re/watch this film because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.

Not the obvious AIs

There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall that when Joe convinces the cabinet that he can talk to plants, and that they really want to drink water…well, let’s let the narrator from the film explain…

  • NARRATOR
  • Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.

At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”

The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.

I also take it as a given that AI writes the speeches that President Camacho reads, because who else could it be? These people are idiots who don’t understand the difference between government and corporations; of course they would want to run the government like a corporation, because it has better ads. And since AIs run the corporations in Idiocracy…


Untold AI: Poster

As of this posting, the Untold AI analysis stands at 11 posts and around 17,000 words. (And there are as yet a few more to come. Probably.) That’s a lot to try and keep in your head. To help you see and reflect on the big picture, I present…a big picture.


A tour

This data visualization has five main parts. And while I tried to design them to be understandable from the graphic alone, it’s worth giving a little tour anyway.

  1. On the left are two sci-fi columns connected by Sankey-ish lines. The first lists the sci-fi movies and TV shows in the survey. The first ten are those that adhere to the science. Otherwise, they are not in a particular order. The second column shows the list of takeaways. The takeaways are color-coded and ordered for their severity. The type size reflects how many times that takeaway appears in the survey. The topmost takeaways are those that connect to imperatives. The bottommost are those takeaways that do not. The lines inherit the takeaway color, which enables a close inspection of a show’s node to see whether its takeaways are largely positive or negative.
  • On the right are two manifesto columns connected by Sankey-ish lines. The right column shows the manifestos included in the analysis. The left column lists the imperatives found in the manifestos. The manifestos are in alphabetical order. Their node sizes reflect the number of imperatives they contain. The imperatives are color-coded and clustered according to five supercategories, as shown just below the middle of the poster. The topmost imperatives are those that connect to takeaways. The bottommost are those that do not. The lines inherit the color of the imperative, which enables a close inspection of a manifesto’s node to see which supercategories of imperatives it suggests. The lines connected to each manifesto are divided into two groups, the topmost being those that connect to takeaways and the bottommost those that do not. This enables an additional reading of how much a given manifesto’s suggestions are represented in the survey.
  3. The area between the takeaways and imperatives contains connecting lines, showing the mapping between them. These lines fade from the color of the takeaway to the color of the imperative. This area also labels the three kinds of connections. The first are those connections between takeaways and imperatives. The second are those takeaways unconnected to imperatives, which are the “Pure Fiction” takeaways that aren’t of concern to the manifestos. The last are those imperatives unconnected to takeaways, the collection of 29 Untold AI imperatives that are the answer to the question posed at the top of the poster.
  4. Just below the big Sankey columns are the five supercategories of Untold AI. Each has a title, a broad description, and a pie chart. The pie chart highlights the portion of imperatives in that supercategory that aren’t seen in the survey, and the caption for the pie chart posits a reason why sci-fi plays out the way it does against the AI science.
  5. At the very bottom of the poster are four tidbits of information that fall out of the larger analysis: Thumbnails of the top 10 shows with AI that stick to the science, the number of shows with AI over time, the production country data, and the aggregate tone over time.

You’ve seen all of this in the posts, but seeing it all together like this encourages a different kind of reflection about it.

Interactive, someday?

Note that it is possible but quite hard to trace the threads leading from, say, a movie to its takeaways to its imperatives to its manifesto, unless you are looking at a very high resolution version of it. One solution to that would be to make the visualization interactive, such that rolling over one node in the diagram would fade away all non-connected nodes and graphs in the visualization, and data brush any related bits below.

A second solution is to print the thing out very large so you can trace these threads with your finger. I’m a big enough nerd that I enjoy poring over this thing in print, so for those who are like me, I’ve made it available via redbubble. I’d recommend the 22×33 if you have good eyesight and can handle small print, or the 31×46 max size otherwise.

Enjoy!

https://www.redbubble.com/people/chrisnoessel/works/32638489-untold-ai?p=poster&finish=semi_gloss&size=medium

Maybe if I find funds or somehow more time and programming expertise I can make that interactive version possible myself.

Some new bits

Sharp-eyed readers may note that there are some new nodes in there since the prior posts! These come from late-breaking entries, late-breaking realizations, and my finally including the manifesto I was party to.

  • Sundar Pichai published the Google AI Principles just last month, so I worked them in.
  • I finally worked the Juvet Agenda in as a manifesto. (Repeating disclosure: I was one of its authors.) It was hard work, but I’m glad I did it, because it turns out it’s the most-connected manifesto of the lot. (Go, team!)
  • The Juvet Agenda also made me realize that I needed new, related nodes for both takeaways and imperatives:  AI will enable or require new models of governance. (It had a fair number of movies, too.) See the detailed graph for the movies and how everything connects.

A colophon of sorts

  • The data of course was housed in Google Sheets
  • The original Sankey SVG was produced in Flourish
  • I modified the Flourish SVG, added the rest of the data, and did final layout in Adobe Illustrator
  • The poster’s type is mostly Sentinel, a font from Hoefler & Co., because I think it’s lovely, highly readable, and I liked that Sentinels are also a sci-fi AI.

Untold AI: The top 10 A.I. shows in-line with the science (RSS)

Some readers reported being unable to read the prior post because of its script formatting. Here is the same post without that formatting…

INTERIOR. Sci-fi auditorium. Maybe the Plavalaguna Opera House. A heavy red velvet curtain rises, lifted by anti-gravity pods that sound like tiny TIE fighters. The HOST stands on a floating podium that rises from the orchestra pit. The HOST wears a velour suit with piping, which glows with sliding, overlapping bacterial shapes.

HOST: Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.

FX: Applause, beeping, booping, and the sound of an old modem from the audience.

HOST: For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.

The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!

HOST: Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating. This way the top films didn’t just tell the right stories (according to the science), they told them well.

HOST: Totals were tallied by the firm of Google Sheets algorithms. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
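(For those playing along at home, the scoring the HOST just described boils down to something like the sketch below. This is my reconstruction, not the actual Untold AI spreadsheet, and the example counts and rating are invented.)

    # Rough reconstruction of the Fritzes scoring described above:
    # +1 per takeaway that matches a manifesto imperative, -5 per pointless
    # and distracting myth, with the sum weighted by the Tomatometer rating.
    def fritzes_score(matching_takeaways: int, myths: int, tomatometer: float) -> float:
        return (matching_takeaways - 5 * myths) * tomatometer

    # e.g. a show with 6 matching takeaways, 1 myth, and a 92% Tomatometer rating:
    print(fritzes_score(6, 1, 0.92))   # 0.92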

TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.



Untold AI: The top 10 A.I. shows in-line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science), they told them right.
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.



Untold AI: The Untold

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don’t have matches in the sci-fi survey. Everything is built on a live analysis document, such that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey and then those absent from the survey
  • I take a stab at why they might not have gotten any play in screen sci-fi, and hopefully offer some ideas about how that can be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I’ve provided story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI.


1. We should build the right AI (the thing itself)

Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and make things worse. As we work towards General AI, we must ensure that it is verified, valid, secure, and controllable. We must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and therefore, out of our control. To hedge our bets, we should seed ASIs that balance each other.


Related imperatives seen in the survey

  • We must take care to only create beneficial intelligence
  • We must ensure human welfare
  • AGI’s goals must be aligned with ours
  • AI must be free from bias
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be controllable: that we can correct or unplug an AI if needed without retaliation
  • We should augment, not replace humans
  • We should design AI to be part of human teams
  • AI should help humanity solve problems humanity cannot alone
  • We must develop inductive goals and models, so the AI could look at a few related facts and infer causes, rather than only following established top-down rules to conclusions.

Related imperatives absent from the survey

  • AI must be secure. It must be inaccessible to malefactors.
  • AI must provide clear confidences in its decisions. Sure, it’s recommending you return to school to get a doctorate, but it’s important to know if it’s only, like, 16% certain.
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures.
  • AI must be accountable. Anyone subject to an AI decision must have the right to object and request human review.
  • We should enable a human-like learning capability in AI.
  • We must research and build ASIs that balance each other, to avoid an intelligence monopoly.
  • The AI must be reliable. (All the AI we see is “reliable,” so we don’t see the negatives of unreliable AI.)

Why don’t these appear in sci-fi AI?

At a high level of abstraction, it appears in sci-fi all the time. Any time you see an AI on screen who is helpful to the protagonists, you have encountered an AI that is in one sense good. BB-8, for instance. Good AI. But the reason it’s good is rarely offered. It’s just the way they are. They’re just programmed that way. (There is one scene in The Phantom Menace where Amidala offers a ceremonial thanks to R2-D2, so perhaps there are also reward loops.) But how we get there is the interesting bit, and not seen in the survey.


And, at the more detailed level—the level apparent in the imperatives—we don’t see the kinds of things we currently believe will make for good AI, like inductive goals and models. Or an AI offering a judicial ruling, and having the accused exonerated by a human court. So when it comes to the details, sci-fi doesn’t illustrate the real reasons a good AI would be good.

Additionally, when AI is the villain of the story (I, Robot, Demon Seed, The Matrices, etc.) it is about having the wrong AI, but it’s often wrong for no reason or a silly reason. It’s inherently evil, say, or displaying human motivations like revenge. Now it’s hard to write an interesting story illustrating the right AI that just works well, but if it’s in the background and has some interesting worldbuilding consequences, that could work as well.

But what if…?

  • Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human babysitting it. Twist: Watson discovers that Holmes created AI Moriarty for job security.
  • A jurist in Human Recourse [sic] discovers that the AI judge from whom she inherits cases has been replaced, because the original AI judge was secretly convicted of a mind crime…against her.
  • A hacker falls through a literal hole in an ASI’s server, and has a set of Alice-in-Wonderland psychedelic encounters with characters inspired not by logical fallacies, but by AI principles.

Inspired with your own story idea? Tweet it with the hashtag #TheRightAI and tag @scifiinterfaces.

Learn more about what makes good AI


2. We should build the AI right (processes and methods)

We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the “right AI.” Or more to the point, it will result in an AI that is wrong on some critical point.


Related imperatives seen in the survey

  • We should adopt dual-use patterns from other mature domains
  • We must study the psychology of AI/uncanny valley

Related imperatives absent from the survey

  • We must fund AI research
  • We need effective design tools for new AIs
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We should encourage innovation (not stifle)
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Why don’t these appear in sci-fi AI?

Building stuff is not very cinemagenic. It takes a long time. It’s repetitive. There are a lot of stops and starts and restarts. It often doesn’t look “right” until just before the end. Design and development, if it ever appears, is relegated to a montage sequence. The closest thing we get in the survey is Person of Interest, and there, it’s only shown in flashback sequences if those sequences have some relevance to the more action-oriented present-time plot. Perhaps this can be shown in the negative, where crappy AI results from doing the opposite of these practices. Or perhaps it really needs a long-form format like television coupled with the right frame story.

But what if…?

  • An underdog team of ragtag students take a surprising route to creating their competition AI and win against their arrogant longtime rivals.
  • A young man must adopt a “baby” AI at his bar mitzvah, and raise it to be a virtuous companion for his adulthood. In truth, he is raising himself.
  • An aspiring artist steals the identity of an AI from his quality assurance job at Three Laws Testing Labs to get a shot at national acclaim.
  • Pygmalion & Galatea, but not sculpture. (Admittedly this is close to Her.)

Inspired with your own story idea? Tweet it with the hashtag #TheAIRight and tag @scifiinterfaces.

Join a community of practice


3. We must manage the risks involved

We pursue AI because it carries so much promise to solve problems at a scale humans have never been able to manage themselves. But AIs carry with them risks that can scale as the thing becomes more powerful. We need ways to clearly understand, test, and articulate those risks so we can be proactive about avoiding them.

Related imperatives seen in the survey

  • We must specifically manage the risk and reward of AI
  • We must prevent intelligence monopolies by any one group
  • We must avoid mind crimes
  • We must prevent economic persuasion of people by AI
  • We must create effective public policy
    • Specifically banning autonomous weapons
    • Specifically respectful Privacy Laws (no chilling effects)
  • We should rein in ultracapitalist AI
  • We must prioritize the prevention of malicious AI

Related imperatives absent from the survey

  • We need methods to evaluate risk
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
  • We must create effective public policy
    • Specifically liability law
    • Specifically humanitarian Law
    • Specifically Fair Criminal Justice

Why don’t these appear in sci-fi AI?

At the most abstract level, any time we see a bad AI in the survey, we are witnessing protagonists having failed to manage the risks of AI made manifest. But similar to the Right AI (above), most sci-fi bad AI is just bad, and it’s the reasons it’s bad or how it became bad that are the interesting bit.


Also, in our real world, we want to find and avoid those risks before they happen. Having everything running smoothly makes for some dull stories, so maybe it’s just that we’re always showing how things go wrong, which puts us into risk management instead.

But what if…?

  • Five colonization-class spaceships are on a long journey to a distant star. The AI running each has evolved differently owing to the differing crews. In turn, four of these ships fail and their humans die for having failed to manage one of the risks. The last is the slowest and most risk-averse, and survives to meet an alien AI, the remnant of a civilization that once thrived on the planet to be terraformed.
  • A young woman living in a future utopia dedicates a few years to virtually recreating the 21st-century world. The capitalist parts begin to infect the AIs around her, and she must struggle to disinfect them before the infection brings down her entire world. At the end she realizes she has herself been infected with its ideas, and we are left wondering what choices she will make to save her world.
  • In a violent political revolution, anarchists smash a set of government servers only to learn that these were containing superintelligences. The AIs escape and begin to colonize the world and battle each other as humans burrow for cover.
  • Forbidden Planet, but no monsters from the id, plus an unthinkably ancient automated museum of fallen cultures. Every interpretive text is about how that culture’s AI manifested as the Great Filter. The last exhibit is labeled “in progress” and has Robby at the center.

Inspired with your own story idea? Tweet it with the hashtag #ManagingAIRisks and tag @scifiinterfaces.

Learn more about the risks of AI

[Image: header_monitor]

4. We must monitor the AIs

AI that is deterministic isn’t worth the name. But building non-deterministic AI means it’s also somewhat unpredictable, and that unpredictability can let bad-faith providers encode their own interests. To watch for this, and to know if an active, well-intended AI is going off the rails, we must establish metrics for AI’s capabilities, performance, and rationale. We must build monitors that check whether AIs remain aligned with human welfare and that provide enough warning to act immediately when something dangerous happens or is likely to.

Related imperatives seen in the survey

  • We must set up a watch for malicious AI (and instrumental convergence)

Related imperatives absent from the survey

  • We must find new metrics for measuring AI effects and capabilities, to know when it is trending in dangerous ways

Why doesn’t this appear in sci-fi AI?

I have no idea. We’ve had brilliant tales that ask “Who watches the watchers?” but the particular tale I’m thinking of was about superhumans, not super technology. Of course, if monitoring worked perfectly, there would have to be other things going on in the plot. And certainly one of the most famous sci-fi movies, Minority Report, decided to house its prediction tech in triplet clairvoyant humans rather than hidden Markov models, so it doesn’t count. Given the proven formulas propping up cop shows and courtroom dramas, it should be easy to introduce AIs (and the problems therein).

But what if…?

  • A Job-like character learns his longtime suffering is a side effect of being a fundamental part of the immune system of a galaxy-spanning super AI.
  • A noir-style detective story about a Luddite gumshoe who investigates errant AIs on behalf of techno-weary clients. He is invited to the most lucrative job of his career, but struggles because the client is itself an AGI.
  • We think we are reading about a modern Amish coming-of-age ritual, but it turns out the religious tenets are all about their cultural job as AI cops.
  • A courtroom drama in which a sitting president is impeached, proven to have been deconstructing the democracy over which he presides, under the coercion of a foreign power. Only this time it’s AI.

Inspired with your own story idea? Tweet it with the hashtag #MonitoringAI and tag @scifiinterfaces.

Learn more about the suspect forces in AI

[Image: header_narrative]

5. We must encourage an accurate cultural narrative

If we mismanage the narrative about AI, the population could either be lulled into a complacency that primes it to be victimized by bad-faith actors (human and AI), or be made so fearful that it forms a Luddite mob, gathering pitchforks and torches and fighting to prevent any development at all, robbing us of the promise of this new tool. Legislators hold particular power, and if they are misinformed, they could undercut progress or encourage exactly the wrong thing.

Related imperatives seen in the survey

  • [None of these imperatives were seen in the survey]

Related imperatives absent from the survey

  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest fall off
  • We should increase broad AI literacy
    • Specifically for legislators (legislation is separate)
  • We should partner researchers with legislators

Why doesn’t this appear in sci-fi AI?

I think it’s because sci-fi is an act of narrative. And while Hollywood loves to obsess about itself (cf. a recent at-hand example, The Shape of Water), this imperative is about how we tell these stories. It admonishes us to try to build an accurate picture of the risks and rewards of AI, so that audiences, investors, and legislators can base better decisions on that background information. So rather than “tell a story about this,” it’s “tell stories in this way.” And in fact, we can rank the movies in the AI survey by how well they track to the imperatives, and offer an award of sorts to the best. That comes in the next post.

But what if…?

  • A manipulative politician runs on a platform similar to the Red Scare, only vilifying AI in any form. He effectively kills public funding and interest, allowing clandestine corporate and military AI to flourish and eventually take over.
  • A shot-for-shot remake of The Twilight Zone classic “The Monsters Are Due on Maple Street,” but in the end it’s not aliens pulling the strings.
  • A strangely addictive multi-channel blockbuster show about “stupid robot blunders” keeps everyone distracted, framing AI risk as a laughable prospect and allowing an AI to begin to take control of everything. A reporter is mysteriously killed while trying to interview the show’s author in person.
  • A cooperative board game in which the goal is to control the AI as it develops six superpowers (economic productivity, strategy & tech, hacking, social control, expansion of self, and finally construction of its von Neumann probes). Mechanics encourage tragedy-of-the-commons dynamics early in the game, but aggressive players ultimately doom the win. [OK, this isn’t screen sci-fi, but I love the idea and would even pursue it if I had the expertise or time.]

Inspired with your own story idea? Tweet it with the hashtag #AccurateAI and tag @scifiinterfaces.

Add more signal to the noise

Excited about the possibilities? If you’re looking for other writing prompts, check out the following resources that you could combine with any of these Untold AI imperatives, and make some awesome sci-fi.

Why does screen sci-fi have trouble?

When we take a step back and look at the big patterns of the groups, we see that sci-fi is telling lots of stories about the Right AI and Managing the Risks. More often than not, it’s just missing the important details. This is a twofold issue of literacy.

[Image: still from Electric Dreams]

First, audiences only vaguely understand AI, so (champagne+keyboard=sentience) might seem as plausible as (AGI will trick us into helping it escape). If audiences were more knowledgeable, they might balk at Electric Dreams and take Her as an important, dire warning. Audience literacy often depends on repetition of themes in media and direct experience. So while audiences can’t be blamed, they are the feedback loop for producers.

Which brings us to the second area of literacy: producers green-light certain sci-fi scripts and not others, based on what they think will work. Even if they are literate and understand that something isn’t plausible in the real world, that doesn’t really matter. They’re making movies. They’re not making the real world. (Except that, insofar as they’re setting audience expectations and informing attitudes about speculative technologies, they are.) It’s a chicken-and-egg problem, but if producers balked at ridiculous scripts, there would be less misinformation in cinema. The major lever to persuade them to do that is a more AI-literate audience.

Sci-fi has a harder time telling stories about building AI Right. This is mostly about cinemagenics. As noted above, design and development are hard to make compelling in narrative.

It has a similar difficulty in telling stories about Monitoring AI. I think that this, too, is an issue of cinemagenics. To tell a tale that includes a monitor, you have to first describe the AI, and then describe the monitor in ways that don’t drag down the story with a litany of exposition. I suspect it’s only once AI stabilizes its tropes that we’ll tell this important second-order story. But with AI still evolving in the real world, we’re far from that point.

Lastly, screen sci-fi is missing the boat on using the medium to encourage Accurate Cultural Narratives, except where individual authors do their research to present a speculative vision of AI that matches or illustrates real science.

***

To do my part to encourage that, in the next post I’ll run the numbers and offer “awards” to the movies and TV shows in the survey that most tightly align with the science.