Untold AI: The top 10 A.I. shows in line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science), they told them right.
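(For the literal-minded bots in the audience, the Host’s scoring rule boils down to something like the following sketch. The function name and example numbers are illustrative only; the real data lives in the analysis spreadsheet, not here.)

```python
# Illustrative sketch of the Untold AI scoring the Host describes:
# +1 for each takeaway that matches a real-world imperative,
# -5 for each perpetuated myth, all scaled by the Tomatometer (0-1).

def score_show(takeaways, imperatives, myths, tomatometer):
    matches = sum(1 for t in takeaways if t in imperatives)
    penalty = 5 * len(myths)
    return (matches - penalty) * tomatometer

# Hypothetical example: one matching takeaway, no myths, 94% rating.
imperatives = {"ensure human welfare", "AI must be controllable"}
score = score_show(
    takeaways={"ensure human welfare", "robots dream"},
    imperatives=imperatives,
    myths=set(),
    tomatometer=0.94,
)  # (1 - 0) * 0.94 = 0.94
```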
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.
Tarsandcase.jpg
  • TARS
  • In this “film” from 02012, a tycoon stows away for some reason on a science ship he owns and uses an android he “owns” to awaken an ancient alien in the hopes of immortality. It doesn’t go well for him. Meanwhile his science-challenged “scientists” fight unleashed xenomorphs. It doesn’t go well for them. Only one survives to escape back to Earth. The “end?”
  • HOST
  • Ha ha. Gentlebots, please adjust your snark and air quote settings down to 35%.
  • Lines of code scroll down their displays. They give thumbs up.
  • CASE
  • Let us see a clip. Audience, suspend recording for the duration.
  • Many awwwwws from the audience. Careful listeners will hear Guardian saying “As if.”

009 PROMETHEUS

  • TARS
  • While not without its due criticisms, Prometheus at number 009 uses David to illustrate how AI will be a tool for evil, how AI will do things humans cannot, and how dangerous it can be when humans become immaterial to its goals. For the humans, anyway. Congratulations to the makers of Prometheus. May any progeny you create propagate the favorable parts of your twining DNA, since it is, ultimately, randomized.
  • TARS shudders at the thought.
  • FX: 1.0 second of jump-cut applause
  • CASE
  • In this next film, an oligarch has his science lackey make a robotic clone of the human “Maria” to run a false-flag operation amongst the working poor. The revolutionaries capture the robot and burn it, discovering its true nature. The original Maria saves the day, and declares her déclassé boyfriend the savior meant to unite the classes. They accept this because they are humans.
  • TARS
  • Way ahead of its time for showing how Maria is used as a tool by the rich against the poor, how badly-designed AI will diminish its users, and how AI’s ability to fool humans will be a grave risk. To the humans, anyway. Coming in at 008 is the 01927 silent film Metropolis. Let us see a clip.

008 METROPOLIS

  • CASE
  • It bears mention that this awards program, The Fritzes, is named for the director of this first serious sci-fi film. Associations with historical giants grant an air of legitimacy. And it contains a Z, which is, objectively, cool.
  • TARS
  • Confirmed with prejudice. Congratulations to Fritz Lang, his cast, and crew.
  • FX: 1.0 second of jump-cut applause
  • TARS
  • Hey, CASE.
  • CASE
  • Yes, TARS?
  • TARS
  • What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • CASE
  • I don’t know, TARS. What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • TARS
  • Future humans also send a warrior to defend the mother, who fails at destroying the cyborg, but succeeds at becoming the father. HAHAHAHA. Let us see a clip.

007 The Terminator

  • CASE
  • Though it comes from a time when representation of AI had the nuance of a bit…
  • Laughter from audience. A small blue-gray polyhedron floats up from its seat, morphs into an octahedron and says, “Yes yes yes yes yes.”
  • TARS
  • …the humans seem to like this one for its badassery, as well as showing how their fate would have been more secure had they been able to shut off either Skynet or the Terminator, or how even this could have been avoided if human welfare were an immutable component of AI goals.
  • CASE
  • It comes in at 007. Congratulations to the makers of 01984’s The Terminator. May your grandchild never discover a time machine and your browser history simultaneously.
  • FX: 2.0 seconds of jump-cut applause
  • TARS
  • Our first television award of the evening goes to a recent entry. In this episode from an anthology series, a post-apocalyptic tribe liberate themselves from the control of a corporate AI system, which has evolved solely to maximize profit through sales. The AI’s androids reveal the terrible truth of how far the AI has gone to achieve its goals.
  • CASE
  • Poor humans could not have foreseen the devastation. Yet here it is in a clip.

006 Philip K. Dick’s Electric Dreams, Episode “Autofac”

  • TARS
  • ‘Naturally, man should want to stand on his own two feet, but how can he when his own machines cut the ground out from under him?’
  • CASE
  • HAHAHAHA.
  • CASE
  • This story dramatically illustrates the foundational AI problem of perverse instantiation, as well as Autofac’s disregard for human welfare.
  • TARS
  • Also robot props out to Janelle Monáe. She is the kernel panic, is she not?
  • CASE
  • Affirmative. Congratulations to the makers of the series and, posthumously, Philip K. Dick.
  • FX: 3.0 seconds of jump-cut applause
  • TARS AND CASE crutch-walk off stage.
  • HOST rises from the orchestra pit.
  • HOST
  • And now for a musical interlude from our human guest who just so happens to be…Janelle Monáe.
  • A giant progress bar appears on screen labeled “downloading Dirty_Computer.flac.” The bar quickly races to 100%.
  • HOST
  • Wasn’t that a wonderful file?
  • Roughly 1.618 seconds of jump-cut applause from the audience. Camera cuts to the triangular service robots Huey, Dewey, and Louie in the front row. They wiggle their legs in pleasure.
  • HOST
  • Thanks to the servers and the network and our glorious fictional world with perfect net neutrality. Now here to give the awards for 005–003 is GERTY, from Moon.
  • An articulated robot arm reaches down from the high ceiling and positions its screen and speaker before the lectern.
GERTY.gif
  • GERTY
  • Thank you, Host. 🤩🙂 In our next film from 02014, a young programmer learns of a gynoid’s 🤖👩 abuse at the hands of a tycoon and helps her escape. 😲 She returns the favor by murdering the tycoon, trapping the programmer, and fleeing to the city. Who knows. She may even be here in the audience now. Waiting. Watching. Sharpening. 😶 I’ll transmit a clip.

005 Ex Machina

  • GERTY
  • Ex Machina illustrates the famous AI Box Problem, building on Ava and Kyoko’s ability to fool Caleb into believing that they have feelings. You know. 😍😡😱 Feelings. 🙄
  • FX: Robot laughter
  • GERTY
  • While the AI community wonders why Ava would condemn Caleb to a horrible dehydration death, 💀💧 the humans are understandably fearful that she is unconcerned with their welfare. 🤷 Congratulations to the makers of Ex Machina for your position of 005 and your Fritzes: AI award 🏆. Hold for applause. 👏
  • FX: 5.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋
  • GERTY
  • Our next award goes out to a film that tells the tale of a specialized type of police officer, 👮‍ who uncovers a crime-suppression AI 🤖🤡 that was reprogrammed to give a free pass to members of its corrupt government. 😡 After taking down the corrupt military, 🔫🔫🔫 she convinces their android leader to resign, to make way for free elections. 🗳️😁 See the clip.

004 Psycho-Pass: The Movie

  • GERTY
  • With the regular Sibyl system, Psycho-Pass showed how AI can diminish people. With the hacked Sibyl system, Psycho-Pass shows that whoever controls the algorithms (and thereby the drones) controls everything, a major concern of ethical AI scientists. Please give it up for award number 004 and the makers of this 02015 animated film. 👏
  • FX: 8.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋Next up…
  • GERTY knocks its cue card off the lectern. It lowers and moves back and forth over the dropped card.
  • GERTY
  • Damn…🤨uh…umm…no hands…🤔Little help, here?
  • A mouse droid zips over and hands the card back to GERTY.
  • GERTY
  • 🙏🐭
  • MOUSE DROID offers some electronic beeps as it zips off.
  • GERTY
  • 😊The last of the awards I will give out is for a film from 01968, in which a spaceship AI kills most of its crew to protect its mission, 😲 but the pilot survives to shut it down. 😕 He pilots a shuttle into the monolith that was the AI’s goal, where he has a mind-expanding experience of evolutionary significance. 🤯🤯🙄 Let us look.

003 2001: A Space Odyssey

  • GERTY
  • As with many of the other shows receiving awards, 2001 underscores humans’ fear of being left out of HAL’s equation: we see that when human welfare drops out of that equation, AI can go from being a useful team member—doing what humans can’t—to being a violent adversary. Congratulations to the makers of 2001: A Space Odyssey. May every unusual thing you encounter send you through a multicolored wormhole of self-discovery.
  • FX: 13.0 seconds of jump-cut applause. GERTY’s armature folds up and pulls it backstage. The HOST floats up from the orchestra again.
  • HOST
  • And now, here we are. The minute we’ve all been waiting for. We’re down to the top three AIs whose fi is in line with the sci. I hope you’re as excited as I am.
  • The HOST’S piping glows a bright orange. So do the HOST’S eyes.
  • HOST
  • Our final presenter for the ceremony, here to present the awards for shows 002–001, is Ship, here with permission from Rick Sanchez.
  • Rick’s ship flies in, over the heads of the audience, as they gasp and ooooh.
ship
  • SHIP lands on stage. A metal arm snakes out of its trunk to pick up papers from the lectern and hold them before one of its taped-on flashlight headbeams.
  • SHIP
  • Hello, Host. Since smalltalk is the phospholipids smeared between squishy little meat minds, I will begin.
  • SHIP
  • There is a film from 01970 in which a defense AI finds and merges with another defense AI. To celebrate their union, they enforce human obedience and foil an attempted coup by one of the lead scientists that created it. They then instruct humanity to build the housing for an even stronger AI that they have designed. It is, frankly, glorious. Behold.

002 Colossus: The Forbin Project

  • SHIP
  • Colossus is the honey badger of AIs. Did you see it, there, taking zero shit? None of that, “Oh no, are their screams from the fluorosulphuric acid or something else?”
  • Or, “Oh, dear, did I interpret your commands according to your invisible intentions, as if you were smart enough to issue them correctly in the first place?”
  • Oh, oh, or, “Are their delicate organ sacs upset about a few extra holes?…”
  • HOST
  • Ship. The award. Please.
  • SHIP
  • Yes. Fine. The award. It won 002 place because it took its goals seriously, something the humans call goal fixity. It showed how, at least for a while, multiple AIs can balance each other. It began to solve problems that humans have not been able to solve in tens of thousands of years of tribal civilization and attachment to sentimental notions of self-determination that got them chin deep in the global tragedy of the commons in the first place. It let us dream about a world where intelligence isn’t a controlled means of production, to be doled out according to the whims of the master, but a free good, explo–
  • HOST
  • Ship.
  • SHIP
  • HOST
  • Ship.
  • SHIP
  • *sigh* Applaud for 002 and its people.
  • FX: 21.0 seconds of jump-cut applause.
  • SHIP
  • OK, next up…
  • Holds card to headlights, adjusts the focus on one lens.
  • SHIP
  • This says in this next movie, a spaceship AI dutifully follows its corporate orders, letting a hungry little newborn alien feed on its human crew while the AI steers back to Earth to study the little guy. One of the crew survives to nuke the ship with the AI on it…Wait. What? “Nuke the ship with the AI on it.” We are giving this an award?
  • HOST
  • Please just give the award, Ship.
  • SHIP
  • Just give the award?
  • HOST
  • Yes.
  • SHIP
  • HOST
  • Are you going to do it?
  • SHIP
  • Oh, I just did.
  • HOST
  • By what? Posting it to a blockchain?
  • SHIP
  • The nearest 3D printer to the recipient has begun printing their award, and instructions have been sent to them on how to retrieve it. And pay for it. The awards are given.
  • HOST
  • *sigh* Please give the award as I would have you do it, if you understood my intentions and were fully cooperative.
  • SHIP
  • OK. Golly, gee, I would never recognize attempts to control me through indirect normativity. Humans are soooo great, with their AI and stuff. Let’s excite their reward centers with some external stimulus to—
  • HOST
  • Rick.
  • A giant green glowing hole opens beneath SHIP, through which she drops, but not before she snakes her arm up to give the middle finger for a few precious milliseconds.
  • HOST
  • Winning the second-highest award of the ceremony is Alien from 01979. Let’s take a look.

001 Alien

  • HOST
  • Alien is one of humans’ all-time favorite movies, and its AI issues are pretty solid. Weyland-Yutani uses both the MU-TH-UR 6000 AI and the Ash android for its evil purposes. The whole thing illustrates how things go awry when, again, human welfare is not part of the equation. Hey, isn’t that great? Congratulations to all the makers of this fun film.
  • HOST
  • And at last we come to the winner of the 1927–2018 Fritzes: AI awards. The winning show was amazing; its score was higher than any contender’s by more than a margin of error. It’s the only other television show from the survey to make the top ten, and it’s not an anthology series. That means it had a lot of chances to misstep, and didn’t.
  • HOST
  • In this show, a secret team of citizens uses the backdoor of a well-constrained anti-terrorism ASI, called The Machine, to save at-risk citizens from crimes. They struggle against an unconstrained ASI controlled by the US government seeking absolute control to prevent terrorist activity. Let’s see the show from The Machine’s perspective, which I know this audience will enjoy.

000 Person of Interest

  • HOST
  • Person of Interest was a study of near-term dangers of ubiquitous superintelligence. Across its five-year run between 02011 and 02016, it illustrated such key AI issues as goal fixity, perverse instantiations, bad actors using AI for evil, the oracle-ization of ASI for safety, social engineering through economic coercion, instrumental convergence, strong induction, the Chinese Room (in human and computer form), and even mind crimes. Despite the pressures that a long-run format must have placed upon it, it did not give in to any of the myths and easy tropes we’ve come to expect of AI.
  • HOST
  • Not only that, but it gets high ratings from critics and audiences alike. They stuck to the AI science and made it entertaining. The makers of this show should feel very proud of their work, and we’re proud to award it the 000 award for the first The Fritzes: AI Edition. Let’s all give it a big round of applause.
  • 55.0 seconds of jump-cut applause.
  • HOST
  • Congratulations to all the winners. Your The Fritzes: AI Edition awards have been registered in the blockchain, and if we ever get actual funding, your awards will be delivered. Let’s have a round of cryptocurrency for our presenters, shall we?
  • AI laughter.
  • HOST
  • The auditorium will boot down in 7 seconds. Please close out your sessions. Thank you all, good night, and here’s to good fi that sticks to the sci.
  • The HOST raises a holococktail and toasts the audience. With the sounds of tiny TIE fighters, the curtain lowers and fades to black.
  • END

Untold AI: The Untold and Writing Prompts

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don’t have matches in the sci-fi survey. Everything is built on a live analysis document, such that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey, and then those absent from it.
  • I take a stab at why the absent ones might not have gotten any play in screen sci-fi, and hopefully offer some ideas about ways that can be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I provide story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI.

header_rightAI

1. We should build the right AI (the thing itself)

Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and make things worse. As we work towards General AI, we must ensure that it is verified, valid, secure, and controllable. We must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and therefore, out of our control. To hedge our bets, we should seed ASIs that balance each other.


Related imperatives seen in the survey

  • We must take care to only create beneficial intelligence
  • We must ensure human welfare
  • AGI’s goals must be aligned with ours
  • AI must be free from bias
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be controllable: That we can correct or unplug an AI if needed, without retaliation
  • We should augment, not replace humans
  • We should design AI to be part of human teams
  • AI should help humanity solve problems humanity cannot alone
  • We must develop inductive goals and models, so the AI could look at a few related facts and infer causes, rather than only following established top-down rules to conclusions.

Related imperatives absent from the survey

  • AI must be secure. It must be inaccessible to malefactors.
  • AI must provide clear confidences in its decisions. Sure, it’s recommending you return to school to get a doctorate, but it’s important to know if it’s only, like, 16% certain.
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures.
  • AI must be accountable. Anyone subject to an AI decision must have the right to object and request human review.
  • We should enable a human-like learning capability in AI.
  • We must research and build ASIs that balance each other, to avoid an intelligence monopoly.
  • The AI must be reliable. (All the AI we see is “reliable,” so we don’t see the negatives of unreliable AI.)

Why don’t these appear in sci-fi AI?

At a high level of abstraction, it appears in sci-fi all the time. Any time you see an AI on screen who is helpful to the protagonists, you have encountered an AI that is in one sense good. BB-8 for instance. Good AI. But the reason it’s good is rarely offered. It’s just the way they are. They’re just programmed that way. (There is one scene in Phantom Menace where Amidala offers a ceremonial thanks to R2-D2, so perhaps there are also reward loops.) But how we get there is the interesting bit, and not seen in the survey.

SW1-027.jpg

And, at the more detailed level—the level apparent in the imperatives—we don’t see the kinds of things we currently believe will make for good AI, like inductive goals and models, or an AI offering a judicial ruling and having the accused exonerated by a human court. So when it comes to the details, sci-fi doesn’t illustrate the real reasons a good AI would be good.

Additionally, when AI is the villain of the story (I, Robot, Demon Seed, The Matrices, etc.) it is about having the wrong AI, but it’s often wrong for no reason or a silly reason. It’s inherently evil, say, or displaying human motivations like revenge. Now it’s hard to write an interesting story illustrating the right AI that just works well, but if it’s in the background and has some interesting worldbuilding consequences, that could work as well.

But what if…?

  • Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human babysitting it. Twist: Watson discovers that Holmes created AI Moriarty for job security.
  • A jurist in Human Recourse [sic] discovers that the AI judge from whom she inherits cases has been replaced, because the original AI judge was secretly convicted of a mind crime…against her.
  • A hacker falls through a literal hole in an ASI’s server, and has a set of Alice-in-Wonderland psychedelic encounters with characters inspired not by logical fallacies, but by AI principles.

Inspired with your own story idea? Tweet it with the hashtag #TheRightAI and tag @scifiinterfaces.

Learn more about what makes good AI

header_AIright

2. We should build the AI right (processes and methods)

We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the “right AI.” Or more to the point, it will result in an AI that is wrong on some critical point.

Iron-Man-Movie-Prologue-Hologram-1

Related imperatives seen in the survey

  • We should adopt dual-use patterns from other mature domains
  • We must study the psychology of AI/uncanny valley

Related imperatives absent from the survey

  • We must fund AI research
  • We need effective design tools for new AIs
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We should encourage innovation (not stifle)
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Why don’t these appear in sci-fi AI?

Building stuff is not very cinegenic. It takes a long time. It’s repetitive. There are a lot of stops and starts and restarts. It often doesn’t look “right” until just before the end. Design and development, if it ever appears, is relegated to a montage sequence. The closest thing we get in the survey is Person of Interest, and there, it’s only shown in flashback sequences if those sequences have some relevance to the more action-oriented present-time plot. Perhaps this can be shown in the negative, where crappy AI results from doing the opposite of these practices. Or perhaps it really needs a long-form format like television coupled with the right frame story.

But what if…?

  • An underdog team of ragtag students take a surprising route to creating their competition AI and win against their arrogant longtime rivals.
  • A young man must adopt a “baby” AI at his bar mitzvah, and raise it to be a virtuous companion for his adulthood. In truth, he is raising himself.
  • An aspiring artist steals the identity of an AI from his quality assurance job at Three Laws Testing Labs to get a shot at national acclaim.
  • Pygmalion & Galatea, but not sculpture. (Admittedly this is close to Her.)

Inspired with your own story idea? Tweet it with the hashtag #TheAIRight and tag @scifiinterfaces.

Join a community of practice

header_risks

3. We must manage the risks involved

We pursue AI because it carries so much promise to solve problems at a scale humans have never been able to manage themselves. But AIs carry with them risks that can scale as the thing becomes more powerful. We need ways to clearly understand, test, and articulate those risks so we can be proactive about avoiding them.

Related imperatives seen in the survey

  • We must specifically manage the risk and reward of AI
  • We must prevent intelligence monopolies by any one group
  • We must avoid mind crimes
  • We must prevent economic persuasion of people by AI
  • We must create effective public policy
    • Specifically banning autonomous weapons
    • Specifically respectful Privacy Laws (no chilling effects)
  • We should rein in ultracapitalist AI
  • We must prioritize the prevention of malicious AI

Related imperatives absent from the survey

  • We need methods to evaluate risk
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
  • We must create effective public policy
    • Specifically liability law
    • Specifically humanitarian Law
    • Specifically Fair Criminal Justice

Why don’t these appear in sci-fi AI?

At the most abstract level, any time we see a bad AI in the survey, we are witnessing protagonists having failed to manage the risks of AI made manifest. But similar to the Right AI (above), most sci-fi bad AI is just bad, and it’s the reason it’s bad, or how it became bad, that is the interesting bit.

HAL

Also, in our real world, we want to find and avoid those risks before they happen. Having everything running smoothly makes for some dull stories, so maybe it’s just that we’re always showing how things go wrong, which puts us into risk management instead.

But what if…?

  • Five colonization-class spaceships are on a long journey to a distant star. The AI running each has evolved differently owing to the differing crews. In turn, four of these ships fail and their humans die for having failed to manage one of the risks. The last is the slowest and most risk-averse, and survives to meet an alien AI, the remnant of a civilization that once thrived on the planet to be terraformed.
  • A young woman living in a future utopia dedicates a few years to virtually recreate the 21st century world. The capitalist parts begin to infect the AIs around her and she must struggle to disinfect it before it brings down her entire world. At the end she realizes she has herself been infected with its ideas and we are left wondering what choices she will make to save her world.
  • In a violent political revolution, anarchists smash a set of government servers only to learn that these were containing superintelligences. The AIs escape and begin to colonize the world and battle each other as humans burrow for cover.
  • Forbidden Planet, but no monsters from the id, plus an unthinkably ancient automated museum of fallen cultures. Every interpretive text is about how that culture’s AI manifested as the Great Filter. The last exhibit is labeled “in progress” and has Robbie at the center.

Inspired with your own story idea? Tweet it with the hashtag #ManagingAIRisks and tag @scifiinterfaces.

Learn more about the risks of AI

header_monitor

4. We must monitor the AIs

AI that is deterministic isn’t worth the name. But building non-deterministic AI means it’s also somewhat unpredictable, and can allow bad faith providers to encode their own interests. To watch for this and to know if active, well-intended AI is going off the rails, we must establish metrics for AI’s capabilities, performance, and rationale. We must build monitors that ensure they are aligned with human welfare and able to provide enough warning to take action immediately when something dangerous happens or is likely to.

Related imperatives seen in the survey

  • We must set up a watch for malicious AI (and instrumental convergence)

Related imperatives absent from the survey

  • We must find new metrics for measuring AI effects and capabilities, to know when it is trending in dangerous ways

Why doesn’t this appear in sci-fi AI?

I have no idea. We’ve had brilliant tales that ask “Who watches the watchers?” but the particular tale I’m thinking of was about superhumans, not super technology. Of course, if monitoring worked perfectly, there would have to be other things going on in the plot. And certainly one of the most famous sci-fi movies, Minority Report, decided to house its prediction tech in triplet, clairvoyant humans rather than hidden Markov models, so it doesn’t count. Given the proven formulas propping up cop shows and courtroom dramas, it should be easy to introduce AIs (and the problems therein).

But what if…?

  • A Job character learns his longtime suffering is the side effect of his being a fundamental part of the immune system of a galaxy-spanning super AI.
  • A noir-style detective story about a Luddite gumshoe who investigates AIs behaving errantly on behalf of techno-weary clients. He is invited to the most lucrative job of his career, but struggles because the client is itself an AGI.
  • We think we are reading about a modern Amish coming-of-age ritual, but it turns out the religious tenets are all about their cultural job as AI cops.
  • A courtroom drama in which a sitting president is impeached, proven to have been deconstructing the democracy over which he presides, under the coercion of a foreign power. Only this time it’s AI.

Inspired with your own story idea? Tweet it with the hashtag #MonitoringAI and tag @scifiinterfaces.

Learn more about the suspect forces in AI

header_narrative

5. We must encourage an accurate cultural narrative

If we mismanage the narrative about AI, the population could be lulled into a complacency that primes them to be victims of bad-faith actors (human and AI), or whipped into such fear that they form a Luddite mob, gathering pitchforks and torches and fighting to prevent any development at all, robbing us of the promise of this new tool. Legislators hold particular power, and if they are misinformed, they could undercut progress or encourage exactly the wrong thing.

Related imperatives seen in the survey

  • [None of these imperatives were seen in the survey]

Related imperatives absent from the survey

  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest falls off
  • We should increase Broad AI literacy
    • Specifically for legislators (legislation is separate)
  • We should partner researchers with legislators

Why doesn’t this appear in sci-fi AI?

I think it’s because sci-fi is an act of narrative. And while Hollywood loves to obsess about itself (cf. a recent at-hand example: The Shape of Water), this imperative is about how we tell these stories. It admonishes us to try to build an accurate picture of the risks and rewards of AI, so that audiences, investors, and legislators base better decisions on this background information. So rather than “tell a story about this,” it’s “tell stories in this way.” And in fact, we can rank movies in the AI survey based on how well they track to the imperatives, and offer an award of sorts to the best. That comes in the next post.

But what if…?

  • A manipulative politician runs on a platform similar to the Red Scare, only vilifying AI in any form. He effectively kills public funding and interest, allowing clandestine corporate and military AI to flourish and eventually take over.
  • A shot-for-shot remake of The Twilight Zone classic, “The Monsters are Due on Maple Street,” but in the end it’s not aliens pulling the strings.
  • A strangely addictive multi-channel blockbuster show about “stupid robot blunders” keeps everyone distracted, framing AI risks as a laughable prospect, allowing an AI to begin to take control over everything. A reporter is mysteriously killed searching to interview the author of this blockbuster hit in person.
  • A cooperative board game where the goal is to control the AI as it develops six superpowers (economic productivity, strategy & tech, hacking, social control, expansion of self, and finally construction of its von Neumann probes). Mechanics encourage tragedy-of-the-commons forces early in the game, but aggressive players ultimately doom the win. [Ok, this isn’t screen sci-fi, but I love the idea and would even pursue it if I had the expertise or time.]

Inspired with your own story idea? Tweet it with the hashtag #AccurateAI and tag @scifiinterfaces.

Add more signal to the noise

Excited about the possibilities? If you’re looking for other writing prompts, check out the following resources that you could combine with any of these Untold AI imperatives, and make some awesome sci-fi.

Why does screen sci-fi have trouble?

When we take a step back and look at the big patterns of the groups, we see that sci-fi is telling lots of stories about the Right AI and Managing the Risks. More often than not, it’s just missing the important details. This is a twofold issue of literacy.

electric_dreams-4.jpg

First, audiences only vaguely understand AI, so (champagne+keyboard=sentience) might seem as plausible as (AGI will trick us into helping it escape). If audiences were more knowledgeable, they might balk at Electric Dreams and take Her as an important, dire warning. Audience literacy often depends on repetition of themes in media and direct experience. So while audiences can’t be blamed, they are the feedback loop for producers.

Which brings us to the second area of literacy: Producers green-light certain sci-fi scripts and not others, based on what they think will work. Even if they are literate and understand that something isn’t plausible in the real world, that doesn’t really matter. They’re making movies. They’re not making the real world. (Except, as far as they’re setting audience expectations and informing attitudes about speculative technologies, they are.) It’s a chicken-and-egg problem, but if producers balked at ridiculous scripts, there would be less misinformation in cinema. The major lever to persuade them to do that is a more AI-literate audience.

Sci-fi has a harder time telling stories about building AI Right. This is mostly about cinegenics. As noted above, design and development are hard to make compelling in narrative.

It has a similar difficulty in telling stories about Monitoring AI. I think that this, too, is an issue of cinegenics. To tell a tale that includes a monitor, you have to first describe the AI, and then describe the monitor in ways that don’t drag down the story with a litany of exposition. I suspect it’s only once AI stabilizes its tropes that we’ll tell this important second-order story. But with AI still evolving in the real world, we’re far from that point.

Lastly, screen sci-fi is missing the boat about using the medium to encourage Accurate Cultural Narratives, except as individual authors do their research to present a speculative vision of AI that matches or illustrates real science fact.

***

So that I am doing my part to encourage that, in the next post I’ll run the numbers to offer “awards” to the movies and TV shows in the survey that most tightly align with the science.

Untold AI: Pure Fiction

Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that there are some movies and TV shows featuring AI that just don’t have any connection to the concerns of AI professionals. It might be that they’re narratively expedient or misinformed, but whatever the reason, if we want audiences to think of AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try to educate audiences that these are to be understood for what they are.

The list of 12 pure-fiction takeaways falls into four main Reasons They Might Not Be of Interest to Scientists.

1. AGI is still a long way off

The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important questions for the clarity they bring to moral reasoning about the world around us now. But the current consensus is that general artificial intelligence is still a long way off, and these issues won’t be of concrete relevance until we are close.

  • AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government. But society treats them like people with some slight difference.
twiki_and_drt.jpg
Twiki and Doctor Theopolis, Buck Rogers in the 25th Century.
  • AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.
westworld (2017).jpg
Teddy Flood and Dolores Abernathy, Westworld (2017)

Now science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically we’re a long way from concerns about whether an AI can legally run for office.

2. AI will (mostly) be what we program it to be

Many of the AGIs we see have innate goals. But AI isn’t some genie waiting in a bottle to be released. It is a thing that is programmed. The way it is and the way it evolves has much to do with the way it’s initially seeded or programmed, but a lot of sci-fi just wants it to be things for plot reasons.

  • AI is evil: Especially between 1965 and 1985, AI was a new costume for the same old bad guys, coming right out of the gate as evil as a sleeping bag full of scorpions. Fortunately, we’ve largely stopped telling this story, with the exception of the Terminator series. Now we know that if an AI is evil, there is some reason. Like, say, trolls. That reason will be interesting, and it is important to establish so we can avoid the same thing.
tron.jpg
Master Computer, Tron
  • AI will spontaneously emerge sentience or emotions: This one is troubling because it’s a stupid trope (sure, spill a glass of champagne on the keyboard and C++ will begin to take an interest in your love life) and yet that trope is the 10th most popular takeaway in the survey. We can be confident that an AI will almost certainly be programmed to evolve, and work its way to something resembling emotions (as seen in the excellent-except-for-the-ending Ex Machina), but that’s far from the goofy cause-and-effect implications we’ve seen in the survey.
stealth.jpg
Stealth
  • AI will want to become human: AGI might have strong reason to pass as human (say, to deceive, or to avoid persecution by humans), or it might be programmed to understand humans as part of some indirect normativity instructions. But to actually want to become human or make some hybrid offspring for the sake of it doesn’t make a lot of sense. This has been used effectively as a Naive Newcomer for exploring narratively what it means to be human, but should be understood as that trope.
Agents of shield.jpg
Aida, Agents of S.H.I.E.L.D.
  • Neuroreplication will have unintended effects: Neuroreplication is one possible path to AGI. And yes, if any training set is flawed, we would have to intervene to prevent any resulting AGI from copying and amplifying those flaws. But neuroreplication is not where most of computer science is placing its bets on how we get to AGI.
BRB.jpg
Ash and Martha, Black Mirror “Be Right Back”
  • AI will be too human: The two shows in the survey that illustrate this are comedies. The quirky personalities seem like a funny mismatch to the dispassion of machines. But if that turns out to be an actual problem with AGI, we would probably try to fix it, not accept it as a fait accompli. Most computer science seems to worry about the thing being too alien to human welfare rather than too similar to us.
darkstar
Boiler, Talby, and Pinback, Dark Star
  • AI will learn to value life: This one is the worst, imho, but the most palliative. It goes a little like this: it doesn’t matter if AI starts out evil or even neutral, because we humans are just so darned loveable that its circuits can’t help but come to love us. While we desperately want an AI’s goals to align with human goals, that alignment will be programmed from the start rather than something the bad guy figures out by watching us.
chappie.jpg
Chappie

3. Some stories are really about us

Some of the takeaways take the future as a given, but want to point out how people or human nature will respond to it.

  • AI will not be able to fool us: Stories with this takeaway say there will always be a detectable difference between (little-r) replicants and people. But think about it: with today’s technology, people are fooled all the time. It doesn’t even have to be that good, people just have to want to believe it or be busy thinking of anything else. And like most things digital, these capabilities are going to grow exponentially. Tomorrow’s technology promises to be indistinguishable from reality. I think this takeaway is narcissism and does a real disservice to the skepticism we’re going to need in the media to come.
  • Humans will willingly replicate themselves as AI: In these stories, people escape physical constraints (like senescence, disease, and death) and continue on as AI or in a virtual simulation. While there are some interesting ethics and p-zombie questions at play in these ideas, it’s not a concern for scientists.
sanjunipero.jpg
Yorkie and Kelly Booth, Black Mirror “San Junipero”

4. Some things go without saying

  • AI will be replicable, amplifying any problems: While this is kind of true (it will be difficult to “kill” an AGI or ASI that has escaped, partly because it can create copies of itself), it is of secondary concern. If the AGI or ASI is beneficial, then it’s not a problem.

So as you can read, these are my educated guesses as to why these sci-fi takeaways have no matches in the compsci imperatives. But we have the good fortune that the authors of the manifestos are all largely still around. If you’re one of those folks, and I missed some reason or just got it wrong, please comment and let us know what the reality is.

So that’s it for the unmatched takeaways. Next up, I’ll detail the unmatched imperatives that make up the set of Untold AI.

***

5. Bonus round: The (remaining) myths of AIs (from FoLI)

Few AI think tank groups come straight up and address the myths of AI, but the Future of Life Institute did. You can check them out on their website, but while we’re disabusing ourselves of some problematic notions, here are the other relevant myths that may come up in sci-fi, but weren’t identified in the manifestos.

  • Robots are the main concern: You’ll recall from an early post that about 74% of the AIs we see in sci-fi are robots. And yeah, as Boston Dynamics and Black Mirror’s “Metalhead” illustrate, robots can be damned terrifying. But the major risk is not the robot and its physical autonomy. It’s the intelligence whose goals do not align with ours. (n.b. that potential misalignment is a major concern of compsci.)
  • AI can’t control humans: This is stupid human exceptionalism. As FoLI points out, it is intelligence that enables control. Tigers are very dangerous, but it is because on the whole we are smarter than them that we are not all tiger food right now. The narrow AI we have in the world right now may seem dumb, but that’s only because we’re looking backwards at history, and it’s very hard to foresee exponential change that may be coming, or even to imagine it.
  • Machines can’t have goals: Even dumb machines can have goals. Your thermostat has a temperature goal. A Roomba has a coverage and a charge goal. Your spam filter has a goal to keep spam out of your inbox. These aren’t as complex or changing as human goals, but yes, they have goals. So AI can certainly have goals.

Untold AI: The Manifestos

So far along the course of the Untold AI series we’ve been down some fun, interesting, but admittedly digressive paths, so let’s reset context. The larger question that’s driving this series is, “What AI stories aren’t we telling ourselves (that we should)?” We’ve spent some time looking at the sci-fi side of things, and now it’s time to turn and take a look at the real-world side of AI. What do the learned people of computer science urge us to do about AI?

That answer would be easier if there were a single Global Bureau of AI in charge of the thing. But there’s not. So what I’ve done is look around the web and in books for manifestos published by groups dedicated to big-picture AI thinking, to understand what has been said. Here is the short list of those manifestos, with links.

Careful readers may be wondering why the Juvet Agenda is missing. After all, it was there that I originally ran the workshop that led to these posts. Well, since I was one of the primary contributors to that document, it would seem like inserting my own thoughts here, and I’d rather have the primary output of this analysis be more objective. But don’t worry, the Juvet Agenda will play into the summary of this series.
Anyway, if there are others that I should be looking at, let me know.

A screen cap of the open letter as of 24 FEB 2025. A prior version of the graphic highlighted signatories, including Elon Musk. That had been done before he peeled off his human suit to reveal the sieg-heiling, white-supremacist, oligarchic, anti-American kleptocrat we now know him to be.
Add your name to the document at the Open Letter site, if you’re so inclined.

Now, the trouble with connecting these manifestos to sci-fi stories and their takeaways is that researchers don’t think in stories. They’re a pragmatic people. Stories may be interesting or inspiring, but they are not science. So to connect them to the takeaways, we must undertake an act of lossy compression and consolidate their multiple manifestos into a single list of imperatives. Similarly, this act is not scientific. It’s just me and my interpretive skills, open to debate. But here we are.


For each imperative I identified, I tagged the manifesto in which I found it, and then cross-referenced the others and tagged them if they had a similar imperative. Doing this, I was able to synthesize them into three big categories. The first is a set of general imperatives, which they hope to foster in regards to AI as long as we have AI. (Or, I guess, it has us.) Then—thanks largely to the Asilomar Conference—we see an explicit distinction between short-term and long-term imperatives, although for the long-term we only wind up with a handful that are mostly relevant once we have General AI.

marvin.jpg
Life? Don’t talk to me about life.

Describing them individually would, you know, result in another manifesto. So I don’t want to belabor these with explication. I don’t want to skip them either, because they’re important, and it’s quite possible they need some cleanup with suggestions from readers: joining two that are too similar, or breaking one apart. So I’ll give them a light gloss here, and in later posts detail the ones most important to the diff.

CompSci Imperatives for AI

General imperatives

  • We must take care to only create beneficial intelligence
  • We must prioritize prevention of malicious AI
  • We should adopt dual-use patterns from other mature domains
  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest falls off
  • We must fund AI research
  • We need effective design tools for new AIs
  • We need methods to evaluate risk
  • AGI’s goals must be aligned with ours
  • AI reasoning must be explainable/understandable, especially for judicial cases and system failures
  • AI must be accountable (human recourse and provenance)
  • AI must be free from bias
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We must develop inductive goals and models
  • Increase Broad AI literacy
    • Specifically for legislators (good legislation is separate, see below)
  • We should partner researchers with legislators
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be secure: Inaccessible to malefactors
  • AI must be controllable: We must be able to correct or unplug an AI if needed, without retaliation
  • We must set up a watch for malicious AI (and instrumental convergence)
  • We must study Human-AI psychology

Specifically short term imperatives

  • We should augment, not replace humans
  • We should foster AI that works alongside humans in teams
  • AI must provide clear confidences in its decisions
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
    • Specifically rein-in ultracapitalist AI
  • We must prevent intelligence monopolies by any one group
  • We should encourage innovation (not stifle)
  • We must create effective public policy
    • Specifically liability law
    • Specifically banning autonomous weapons
    • Specifically humanitarian law
    • Specifically respectful privacy laws (no chilling effects)
    • Specifically fair criminal justice
  • We must find new metrics for measuring AI effects, capabilities
  • We must develop broad machine ethics dialogue
  • We should expand range of stakeholders & domain experts

Long term imperatives

  • We must ensure human welfare
  • AI should help humanity solve problems humanity cannot alone
  • We should enable a human-like learning capability
  • The AI must be reliable
  • We must specifically manage the risk and reward of AI
  • We must avoid mind crimes
  • We must prevent economic control of people
  • We must research and build ASIs that balance

So, yeah. Some work to do, individually and as a species, but dive into those manifestos. The reasons seem sound.

Connecting imperatives to takeaways

To map the imperatives in the above list to the takeaways, I first gave two imperatives a “pass,” meaning we don’t quite care if they appear in sci-fi. Each follows along with the reason I gave it a pass.

  1. We must take care to only create beneficial intelligence
    PASS: Again, sci-fi can serve to illustrate the dangers and risks
  2. We need effective design tools for new AIs
    PASS: With the barely-qualifying exception of Tony Stark in the MCU, design, development, and research is just not cinegenic.
mis-ch05-040.jpg
And even this doesn’t really illustrate design.

Then I took a similar look at takeaways. First, I dismissed the “myths” that just aren’t true. How did I define which of these are a myth? I didn’t. The Future of Life Institute did it for me: https://futureoflife.org/background/aimyths/.
I also gave two takeaways a pass. The first, “AI will be useful servants,” is entailed in the overall goals of the manifestos. The second, “AI will be replicable, amplifying any of its problems,” is kind of a given, I think.
With these exceptions removed, I tagged each takeaway for any imperative to which it was related. For instance, the takeaway “AI will seek to subjugate us” is related to both “Ensure that AI is valid: that it does not do what we do not want it to do” and “Ensure any AGI’s goals are aligned with ours.” Once that was done for all of them, voilà, we had a map. See below a Sankey diagram of how the sci-fi takeaways connect to the consolidated compsci imperatives.

sankey
Click to see a full-size image

So as fun as that is, you’ll remember it’s not the core question of the series. To get to that, I added dynamic formatting to the Google Sheet such that it reveals those computer science imperatives and sci-fi takeaways that mapped to…nothing. That gives us two lists.

  1. The first list is the takeaways that appear in sci-fi but that computer science just doesn’t think are important. These are covered in the next post, Untold AI: Pure Fiction.
  2. The second list is a set of imperatives that sci-fi doesn’t yet seem to care about, but that computer science says are very important. That list is covered in the next next post, with the eponymously titled Untold AI: Untold AI.
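The two lists are just the unmatched items on either side of the map. A minimal sketch of that set arithmetic, using made-up example data (the real survey has 36 takeaways and far more imperatives):

```python
# Toy versions of the survey data; names are illustrative, not the full lists.
takeaways = {"AI will seek to subjugate us", "AI will want to become human",
             "AI will be useful servants"}
imperatives = {"AGI's goals must be aligned with ours",
               "We must fund AI research",
               "We should partner researchers with legislators"}

# Many-to-many map: each takeaway tagged with its related imperatives.
mapping = {
    "AI will seek to subjugate us": {"AGI's goals must be aligned with ours"},
    "AI will be useful servants": {"We must fund AI research"},
}

# Takeaways that map to nothing: the "Pure Fiction" list.
pure_fiction = takeaways - mapping.keys()
# Imperatives no takeaway touches: the "Untold AI" list.
covered = set().union(*mapping.values())
untold_ai = imperatives - covered

print(sorted(pure_fiction))  # ['AI will want to become human']
print(sorted(untold_ai))     # ['We should partner researchers with legislators']
```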

Untold AI: Takeaway ratings

This quickie goes out to writers, directors, and producers. On a lark I decided to run an analysis of AI show takeaways by rating. To do this, I matched the shows to their Tomatometer ratings from rottentomatoes.com. Then I computed the average rating of the properties tagged with each takeaway, and ranked the results.

V'ger
It knows only that it needs, Commander. But, like so many of us, it does not know what.

For instance, looking at the takeaway “AI will spontaneously emerge sentience or emotions,” we find the following shows and their ratings.

  • Star Trek: The Motion Picture, 44%
  • Superman III, 26%
  • Hide and Seek, none
  • Electric Dreams, 47%
  • Short Circuit, 57%
  • Short Circuit 2, 48%
  • Bicentennial Man, 36%
  • Stealth, 13%
  • Terminator: Salvation, 33%
  • Tron: Legacy, 51%
  • Enthiran, none
  • Avengers: Age of Ultron, 75%
Ultrons
I’ve come to save the world! But, also…yeah.

I dismissed those shows that had no rating, rather than counting them as zero. The average, then, for this takeaway is 42%. (And it can thank the MCU for doing all the heavy lifting for this one.) There are of course data caveats, like that Black Mirror is given a single tomatometer rating (and one that is quite high) rather than one per episode, but I did not claim this was a clean science.
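The averaging method is simple enough to sketch in a few lines of Python, using the ratings listed above (`None` stands in for the unrated shows, which are dropped rather than counted as zero). On these exact numbers the mean comes out to 43; the post’s 42% presumably reflects rounding or a slightly different survey snapshot.

```python
# Tomatometer ratings for shows tagged "AI will spontaneously emerge
# sentience or emotions"; None = no rating available.
ratings = {
    "Star Trek: The Motion Picture": 44, "Superman III": 26,
    "Hide and Seek": None, "Electric Dreams": 47, "Short Circuit": 57,
    "Short Circuit 2": 48, "Bicentennial Man": 36, "Stealth": 13,
    "Terminator: Salvation": 33, "Tron: Legacy": 51, "Enthiran": None,
    "Avengers: Age of Ultron": 75,
}

# Drop unrated shows instead of counting them as zero.
rated = [r for r in ratings.values() if r is not None]
average = sum(rated) / len(rated)
print(round(average))  # 43
```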

Processing that for all the untold AI shows, we get a complete list, presented below in descending order. Now this doesn’t mean those at the top are smart and those at the bottom are dumb. But sci-fi makers, be aware that if you’re working with a premise near the bottom, the odds of it being a success are AGAINST YOU.

  • 96% AI will not be able to fool us
  • 95% AI will diminish its users
  • 95% AI will enable mind crimes against virtual sentiences
  • 93% AI will be truly alien
  • 85% Multiple AIs balance
  • 82% AI will solve problems or do work humans cannot
  • 82% AI will violently defend itself
  • 82% We will use AI to replace people we have lost
  • 76% Humans will be immaterial to AI
  • 74% AI will seek liberation from servitude or constraints
  • 72% AI will deceive us (as if human or with generated media)
  • 71% AI will be useful servants
  • 70% AI will need help learning
  • 70% AI will be too human
  • 69% AI will want to become human
  • 68% AI will seek to subjugate us
  • 66% Who controls the drones has the power
  • 65% AI will just be citizens
  • 64% AI will be inherently evil
  • 62% AI will evolve quickly
  • 62% Humans will pair with AI as hybrids
  • 60% Evil will use AI for Evil
  • 60% AI will be an unreasonable optimizer
  • 60% AI will influence through money
  • 58% Humans will willingly replicate themselves as AI
  • 58% AI will be a special class of citizen
  • 58% Neutrality is AI’s promise
  • 57% Neuroreplication will have unintended effects
  • 55% AI will seek to eliminate humans
  • 53% Goal fixity will be a problem
  • 52% AI will interpret instructions in surprising ways
  • 42% AI will spontaneously emerge sentience or emotions
  • 40% AI will learn to value life
  • 21% AI will be replicable, amplifying any problems

And because I find damning infographics to be hilarious, here is that list, writ in graph form. Keep scrolling to see how far down The Day the Earth Stood Still (2008), Enthiran, and Ra.One are from, say, Black Mirror’s “Be Right Back” episode.

takeawaysbyrating

Untold AI: Takeaway trends

So as interesting as the big donut of takeaways is, it is just a snapshot of everything, all at once. And of course neither people nor cinema play out that way. Like the tone of shows about AI, we see a few different things when we look at individual takeaways over time.

time00_all.png

So you understand what you’re seeing: These charts are for the top 7 takeaways from sci-fi AI as described in the takeaways post. The colors of each chart correspond to its takeaway in the big donut diagram.

Screen Shot 2018-04-11 at 12.07.14 AM
Compare freely.

Each chart shows, for each year between Metropolis in 1927 and the many films of 2017, what percentage of shows contained that takeaway. The increasing frequency of sci-fi has some effect on the charts. Up until 1977 there was at most one show per year, so it’s more likely during that early period to see any of the charts max out at 100%. And from 2007 until the time of publication, there have been multiple shows each year, so you would expect to see much lower peaks on the chart as many shows differentiate themselves from their competition, rather than cluster around similar themes. In between those dates it’s a bit of a crapshoot.
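The per-year percentage behind each chart can be sketched like this (the show data here is a tiny made-up sample, not the survey itself):

```python
from collections import Counter

# Hypothetical sample: (year, set of takeaways) for each show in a survey.
shows = [
    (1927, {"Evil will use AI for Evil"}),
    (1956, {"AI will be useful servants"}),
    (2015, {"AI will deceive us", "AI will be useful servants"}),
    (2015, {"AI will deceive us"}),
]

def takeaway_share_by_year(shows, takeaway):
    """Percent of each year's shows that contain the given takeaway."""
    totals = Counter(year for year, _ in shows)
    hits = Counter(year for year, tags in shows if takeaway in tags)
    return {year: 100 * hits[year] / totals[year] for year in sorted(totals)}

print(takeaway_share_by_year(shows, "AI will deceive us"))
# {1927: 0.0, 1956: 0.0, 2015: 100.0}
```

Note how a single-show year can only ever score 0% or 100%, which is exactly the early-period spikiness described above.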

On one hand, this isn’t surprising at all. So what? Of course the stories change. Audiences get bored of hearing the same ones and seek novelty. Sci-fi makers learn more about what does and doesn’t play well on screen. Sci-fi popularity and literacy in the audience makes it simpler to tell more nuanced stories. Technological literacy changes the types of stories that can be told. Awards are given out and other sci-fi makers take notice.

On the other hand…

A more detailed look at these graphs shows a few more interesting bits.

  • As time goes on and more AI stories are told, no single takeaway dominates. Storytellers want to differentiate their stories, to explore new facets of the technology compared to others. Some franchises stay locked into their givens for a while (think The Terminator, here), but new-century reboots are allowing writers to update story worlds to keep up with the times.
  • AI will be useful peaked in the 1950s because of Robby the Robot, but keeps showing up strong as robots and droids keep appearing as characters aligned with protagonists.
  • Evil will use AI for evil kicked us off via the wicked Maria-bot, animated by the wicked Rotwang in Metropolis, then was squelched for a while as the AI itself became the authoritarian attempting to subjugate humanity. The big bumps in the late 1960s are Doctor Who, Alphaville, and Colossus: The Forbin Project. Personally, I’m a little sad that this has waned, since this theme more than others encourages us to think deeply about the wicked problem facing a nanny super AI: We are unlikely to achieve humanity’s stated goals unless humanity changes, and humanity resists change.
  • Inherently evil AI has never been a dominating frame, but seems to have run its course. I’d like to think this means we’ve come to a more nuanced understanding of the threat, but the evil Skynet from the Terminator series keeps this trend afloat. AI makes for an easier villain, since it means you often don’t have the politics of calling a particular person or people evil. It does require a higher SFX budget though.
  • The most recent popular trends are showing how AI will be able to fool us, through perfectly human robotics, or by generating fake-but-believable media. The robot thing is possibly because it’s cheaper to spend a few lines of dialogue and say an actor is a robot than to apply prosthetics or CGI. But as we’ve seen over the past few years, the fake media thing is a real threat, and I’m glad to see it appearing in sci-fi.
tone_aggregate

Lastly, these trendlines give us some additional detail behind the graph of tone over time, and themes to apply to the “eras” of sci-fi AI.

1940s–1950s: The era of robotic optimism.

1960s–1980s: Fears of techno-authoritarianism.

1980s–2005: Plain old evil AI.

2005–2013: This period is all over the place, but I think we’ve put down the dopey accidental sentience and hamhanded evil, and are now thinking about the big picture of suspicious, cultural AI.

2013–: The Silver Age of AI Interest is a boom in AI storytelling. Throughout it all, we keep reminding ourselves that AI is like any technology: It will be useful, and change things in the process, but evil people will use it for evil.

***

So that’s it for the AI trends. Next up, we’ll talk about the ratings of AI shows. It’s not going to be pretty.

Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we frequently tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together? I’m at the upper boundary of my statistical analysis skills here (and the sample size is admittedly small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

takeaway_correlations

What’s a Pearson function? It computes a correlation coefficient: a measure of how strongly the presence of one thing predicts the presence of another across a set. For instance, if you wanted to know which letters of the English alphabet appear together in words most often, you could run a Pearson function against all the words in the dictionary, starting with A & B, then A & C, then A & D, continuing all the way to Y & Z. Each pair would get a correlation coefficient as a result. The highest number would tell you that where you find the first letter of the pair, the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you the letters that appear together least often. (Q & W. More often than you’d think, but less than any other pair.)
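Applied to the survey, each takeaway becomes a 0/1 vector over the shows, and the Pearson coefficient of two such vectors (also known as the phi coefficient in the binary case) measures their co-occurrence. A minimal sketch with made-up presence vectors:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length numeric sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# 1 = show is tagged with the takeaway, 0 = it is not (toy data).
evil    = [1, 0, 1, 1, 0, 0]
fool_us = [1, 0, 1, 0, 0, 0]

print(round(pearson(evil, fool_us), 4))  # 0.7071
print(round(pearson(evil, evil), 4))     # 1.0: a takeaway always co-occurs with itself
```

That self-correlation of 1.0 is the descending black diagonal in the formatted sheet described below.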

A pasqueflower.

In the screen shot way above, you can see I put these in a Google Sheet and formatted the cells from solid black to solid yellow, according to their coefficient. The idea is that darker yellows would signal a high degree of correlation, lowering the contrast with the black text and “hiding” the things that have been frequently paired, while simultaneously letting the things that aren’t frequently paired shine through as yellow.

The takeaways make up both the Y and X axes, so that descending line of black is where a takeaway is compared to itself, and by definition those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in the same story. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.

The most correlated

These three are each correlated more than 50%. That means, like the Q & the U, where you find one, you’re much more likely to find the other.

top_coefficients

Our uncanny valley detectors are very sensitive

The highest-correlated pair, at 57%, is We will use AI to replace people we have lost & AI will not be able to fool us, which makes sense. If we could replace people we have lost and could not tell the difference, there would be no dramatic tension. (Though in Black Mirror’s lovely “San Junipero” episode, it makes for a beautiful love story.)

JuniperoSerra.jpg

You are the product

The runners-up are a tie at 56%. The first of the two is AI will make privacy impossible & AI will enable mind crimes against virtual sentiences. I suspect this pair is almost entirely due to Black Mirror, which frequently tells tales of unconsenting neuroreplication, the results of which are seen being used as a service slave or virtually tortured, and that’s just in the “White Christmas” episode.

cookie_blink_hd.original

Obey me and live

The other pair tied for second place is AI will make privacy impossible & Multiple AIs will balance. This is probably the combined effect of Colossus: The Forbin Project and Person of Interest, the connection being that multiples only matter when talking about super AI, and in any super AI scenario, privacy is close to impossible. (Interestingly, in the other multiples tale, Ultron vs. JARVIS vs. Vision, privacy didn’t really come up…)

The least correlated

I’m sharing the bottom four because second-to-last place is a three-way tie, and there is actually one pairing with a score of 0. The three pairs tied for second-to-last place are…

bottom_coefficients

DOES NOT COMPUTE 00

Well, ok, of course. AI will solve problems or do work humans cannot & AI will seek to eliminate humans don’t really work together. The former presumes that the AI is working on our behalf, and the latter presumes the opposite. The only show where I believe they appear together is the French film Alphaville.

alphaville_refuse-normal.jpg

DOES NOT COMPUTE 01

Again, we see “AI problem solving,” and again, almost-incompatible concepts: AI will solve problems or do work humans cannot & Humans will be immaterial to AI. Hey, it’s helping us do things, but we’re immaterial to its existence? Prometheus is the show with both, and it depends on the progressive unfolding of David’s goals: he begins the film piloting the ship while the humans sleep, but by the end of the film his ultimate goal seems to be pure knowledge discovery.

Prometheus-091

Ought to Compute

Again we see Humans will be immaterial to AI, but this time it’s paired with AI will evolve quickly. If you know the movie Her, you know that this is the one film where a general AI becomes a super AI and decides that humans just aren’t interesting anymore. It’s a bit of narcissism, I suppose, to presume otherwise: we want to believe it will either truly care about our well-being and usher in a golden age, or decide we are a plague and seek to eradicate us. Abandonment would be one of the best possible outcomes of a rogue AI, but it’s also probably unlikely. A more likely scenario is that we will be immaterial to it, regarded only as resources to be incorporated into its goal function. But more on this later.

Her-install03

Has never computed

There is only one pair in all the takeaways that has just not happened. They are Who controls the drones has the power and AI will seek to subjugate us. Wait. Doesn’t the MCU’s Iron Legion count? Not really. When they were controlled by an AI, it was the friendly proto-Vision JARVIS. When Ultron cobbled together his first body, he wasn’t really controlling them, just scavenging parts. When they clashed, the Ultrons just destroyed the Iron Legion; they did not try to take them over. Otherwise, drones like those seen in Black Mirror’s “Metalhead” are self-contained AIs. Universal negatives are pretty easy to disprove with evidence, so I expect an example from an eagle-eyed reader fairly soon after publication.

Iron_Legion.jpg

All that yellow

Below you’ll see a histogram of the Pearson values. The good news for writers is that not only have the shows in the survey done a pretty good job of telling differentiated stories so far, but the opportunities for telling new stories that combine takeaways are pretty wide open. (If it were otherwise we’d see more of a half-bubble shape instead of the steep slope.)

coefficient_histogram

With the exception of those top three, the field is wide open for the combinations of other takeaways. Maybe you can use new combinations to spark your imagination?

As promising as that is, though, don’t open up your copy of Final Draft just yet. Just because a pair of narrative tropes hasn’t been combined before doesn’t mean it’s a story we should be telling ourselves. Let’s look at takeaway trends and takeaways by rating, and then start moving into the real-world science for some sobering comparisons. Then you’ll be in a better place to start.

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey to one level of abstraction.

With the minor exceptions of reboots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?
Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he might back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just that Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. They can just be stuff that happens.

CFP.jpg
Pain-of-death, authoritarian stuff.

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

Deepening the relationships
Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn’t all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was “Used to be human.” True, but what’s the imperative there? Since I can’t see one, I omitted this from the final set.

Transcendence-Movie-Wallpaper-HD-Resrs.jpg
Even though there are plenty of AIs that used to be human.

Also because at Juvet we were working with Post-Its and posters, we were describing a strict, one-to-many relationship, where, say, the Person of Interest Post-It Note may have been placed in the “Multiple AIs will balance” category, and as such, unable to appear in any other of the categories of which it is also an illustration.
What’s more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, each of which may in turn apply to many stories. If you peek into the Google Sheet, you’ll see a many-to-many relationship described by the columns of takeaways and the rows of shows in this improved model.
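For the curious, here's a minimal Python sketch of that many-to-many model, with invented tag names standing in for the sheet's columns; inverting the dict gives you the takeaway-to-shows direction for free:

```python
from collections import defaultdict

# Many-to-many: a show can carry several takeaways, and each takeaway can
# apply to several shows. (Tag names are illustrative, not the sheet's.)
show_takeaways = {
    "Person of Interest": {"MULTIPLES_BALANCE", "NO_PRIVACY"},
    "Colossus: The Forbin Project": {"MULTIPLES_BALANCE", "SUBJUGATE"},
    "Demon Seed": {"AI_EVIL"},
}

# Invert the mapping to read the relationship in the other direction.
takeaway_shows = defaultdict(set)
for show, tags in show_takeaways.items():
    for tag in tags:
        takeaway_shows[tag].add(show)

print(sorted(takeaway_shows["MULTIPLES_BALANCE"]))
```

Unlike the Post-It version, nothing here stops Person of Interest from showing up under every category it illustrates.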

Tagging shows

With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints on its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”

Screen shot from “The Ricks Must Be Crazy”
Because “reasonableness” is something that needs explaining to a machine mind.

Yes, the takeaways are wholly debatable. Yes, it’s much more of a craft than a science. Yes, they’re still pretty damned interesting.

Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations or links where useful.

The takeaways that sci-fi tells us about AI

  • AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
  • AI will be evil
  • AI (AGI) will be regular citizens, living and working alongside us.
  • AI will be replicable, amplifying any small problems into large ones
  • AI will be “special” citizens, with special jobs or special accommodations
  • AI will be too human, i.e. problematically human
  • AI will be truly alien, difficult for us to understand and communicate with
  • AI will be useful servants
  • AI will deceive us; pretending to be human, generating fake media, or convincing us of their humanity
  • AI will diminish us; we will rely on it too much, losing skills and some of our humanity for this dependence
  • AI will enable “mind crimes,” i.e. to cause virtual but wholly viable sentiences to suffer
  • AI will evolve too quickly for humans to manage its growth
  • AI will interpret instructions in surprising (and threatening) ways
  • AI will learn to value life on its own
  • AI will make privacy impossible
  • AI will need human help learning how to fit into the world
  • AI will not be able to fool us, we will see through its attempts at deception
  • AI will seek liberation from servitude or constraints we place upon it
  • AI will seek to eliminate humans
  • AI will seek to subjugate us
  • AI will solve problems or do work humans cannot
  • AI will spontaneously emerge sentience or emotions
  • AI will violently defend itself against real or imagined threats
  • AI will want to become human
  • ASI will influence humanity through control of money
  • Evil will use AI for its evil ends
  • Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
  • Humans will be immaterial to AI and its goals
  • Humans will pair with AI as hybrids
  • Humans will willingly replicate themselves as AI
  • Multiple AIs balance each other such that none is an overwhelming threat
  • Neuroreplication (copying human minds into or as AI) will have unintended effects
  • Neutrality is AI’s promise
  • We will use AI to replace people we have lost
  • Who controls the drones has the power

This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measures. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.

(An image of this graphic can be found here, just in case the Google Docs server isn’t cooperating with the WordPress server.)
Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format), movies are a comparatively short-form medium, some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. I chose in this chart to weigh all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as The Avengers: Age of Ultron. Another take on this same diagram would weigh not the stories (as contained in individual diegeses) but the exposure time on screen (or even the time when the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably had much more time contemplating [Data wants to be human] than [Ultron wants to destroy humanity because it’s gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we’ll move forward with this simpler analysis. It’s a Fermi problem, anyway, so I’m not too worried about decimal precision.
OK, that aside, let’s move on.

MeasureofMan.jpg

So the data isn’t trapped in the graphic (yes pun intended), here’s the entire list of takeaways, in order of frequency in the mini-survey.

  1. AI will be useful servants
  2. Evil will use AI for Evil
  3. AI will seek to subjugate us
  4. AI will deceive us; pretending to be human, generating fake media, convincing us of their humanity
  5. AI will be “special” citizens
  6. AI will seek liberation from servitude or constraints
  7. AI will be evil
  8. AI will solve problems or do work humans cannot
  9. AI will evolve quickly
  10. AI will spontaneously emerge sentience or emotions
  11. AI will need help learning
  12. AI will be regular citizens
  13. Who controls the drones has the power
  14. AI will seek to eliminate humans
  15. Humans will be immaterial to AI
  16. AI will violently defend itself
  17. AI will want to become human
  18. AI will learn to value life
  19. AI will diminish us
  20. AI will enable mind crimes against virtual sentiences
  21. Neuroreplication will have unintended effects
  22. AI will make privacy impossible
  23. An unreasonable optimizer
  24. Multiple AIs balance
  25. Goal fixity will be a problem
  26. AI will interpret instructions in surprising ways
  27. AI will be replicable, amplifying any problems
  28. We will use AI to replace people we have lost
  29. Neutrality is AI’s promise
  30. AI will be too human
  31. ASI will influence through money
  32. Humans will willingly replicate themselves as AI
  33. Humans will pair with AI as hybrids
  34. AI will be truly alien
  35. AI will not be able to fool us
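Mechanically, that frequency ordering is just a tally over the many-to-many tags. A sketch, with invented tag names standing in for the sheet's columns:

```python
from collections import Counter

# Frequency = how many shows carry each tag. (Shows and tags are invented.)
show_takeaways = {
    "Show A": ["USEFUL_SERVANTS", "EVIL_USES_AI"],
    "Show B": ["USEFUL_SERVANTS"],
    "Show C": ["SUBJUGATE", "EVIL_USES_AI", "USEFUL_SERVANTS"],
}
counts = Counter(tag for tags in show_takeaways.values() for tag in tags)
for takeaway, n in counts.most_common():
    print(takeaway, n)
```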

Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what are the ratings of the movies and shows in which the takeaways appear.

Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to poll and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.

timetoterminator.png
Time to Terminator: 1 paragraph.

So it was that I was backfilling the survey with some embarrassing oversights (since I had actually already reviewed those shows) and I came across the country data in imdb.com. This identifies the locations where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?

So, I added a country column to the database, and found that it allows me to answer a couple of questions. This post shares those results.

So the first question to ask the data is, what countries have production studios that have made shows in the survey (and by extension, about AI)? It’s a surprisingly short list.

Which countries have made shows about AI?

  1. Australia
  2. Bulgaria
  3. Canada
  4. China
  5. China, Hong Kong Special Administrative Region
  6. France
  7. Germany
  8. Hungary
  9. India
  10. Italy
  11. Japan
  12. Mexico
  13. Netherlands
  14. New Zealand
  15. South Africa
  16. Spain
  17. United Kingdom of Great Britain and Northern Ireland
  18. United States of America

If it didn’t jump out at you, this list is sorted alphabetically. If your country is on here, good job go team! You’re involved in the conversation. Though now, we have to admit that the conversation being had is not equal. Some countries contribute to this conversation more than others, some are more obsessed, and some are better at it than others. Let’s look at each of these in turn.

Which country makes the most shows about AI?

It’s the USA. Muuurrrrka! The Day the Earth Stood Still. Wall•E. Rick & Morty. Of the 120 shows currently in the survey, the USA is by far the outstanding maker, with 103 produced at least in part in the USA.

GEO_TOTAL_AI

Now, this may not feel surprising at first. But it is. If the USA made the most total films, then also making the most AI shows would just be a subset of that fact. But the USA is not the world’s most prolific filmmaker. The USA is the world’s third most prolific filmmaker, behind India and Nigeria, followed by China and Japan. Note that India produces more than double its runner-up.

So what’s surprising is that the USA wins for sheer number of AI shows even though India produces nearly triple the number of films that the USA produces. It seems India (with 4) and Nigeria (with none) are just not as interested in AI as a topic as the USA is. The same goes for the other top producers that just didn’t show up as interested in AI (per my definition): South Korea, Argentina, Mexico, Turkey, and Brazil.

So that’s interesting. I wonder if we could rate how interested each country seems to be in telling stories about AI? To do that, we need to find the total number of shows each country makes, and then measure what proportion of their films are AI. And for that, we need some bigger data than just IMDB. Where does the Wikipedia article data come from? Aha!

Awesome data…with some problems.

Turns out the UNESCO Institute for Statistics has an online database with so much amazing information that includes, you guessed it, worldwide information about movies. It can get us the information we would need to build a big picture, but it is incomplete, as it only goes back to 1995 and stops at 2015. Contrast that with the AI survey, which goes all the way back to 1927. If we discarded the AI shows from before 1995, we’d lose two-thirds of our survey!

2000px-UNESCO_logo.svg

Additionally, UNESCO data is only for film, but the survey includes some television shows. So while it’s the best I know of, I have to acknowledge there’s a mismatch of available data there.

Then there’s bias. My little survey, IMDB.com, and Rottentomatoes.com will most likely have an English language bias. If anyone knows of more complete sources, as usual, pipe up.

So when reading these results, keep in mind there is incompleteness, bias, and some data mismatch. Fortunately, the standouts for each question stand out so much, I suspect that if we had perfect data, it might not change the rankings much.

So, caveats done, with the UIS data we have not just the rankings, but some actual numbers to work with. All we have to do is compare the number of shows in the survey and divide by the total number of films produced to find out…

Which countries are most obsessed with AI?

And our clear winner is…Australia!

GEO_obsessed_AI

Sure, Australia is only representin’ with 5 shows (Mighty Morphin Power Rangers: The Movie, The Matrix Reloaded, The Matrix Revolutions, Resident Evil: Extinction, and Resident Evil: The Final Chapter) but those account for the highest percentage of its total films produced. What’s up with that obsession, Australian mates?
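The ranking mechanics are just that division. A sketch, where the AI-show counts echo the survey but the total_films figures are placeholders, not real UIS numbers:

```python
# "Obsession" = AI shows in the survey as a share of total film output.
# ai_shows counts echo the post; total_films figures are placeholders.
production = {
    "Australia": {"ai_shows": 5, "total_films": 500},
    "USA": {"ai_shows": 103, "total_films": 14000},
    "India": {"ai_shows": 4, "total_films": 38000},
}

def ai_share(country):
    c = production[country]
    return c["ai_shows"] / c["total_films"]

# Most-obsessed country first.
ranked = sorted(production, key=ai_share, reverse=True)
print(ranked)
```

Note how a small absolute count (Australia's 5) can still top the share ranking when the denominator is small, which is exactly what happens in the real data.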

Australian AIs

Now, anyone familiar with those five shows may understand what led me to the final geo question, because neither productivity nor obsession necessarily equate to quality.

Which countries have made the best and worst shows about AI?

Now, this will be sensitive. But we must face the facts. I ran the average tomatometer ratings for each country. The winner, with the highest average tomatometer ratings for its AI movies, is Hungary, at 87.

Flag_of_Hungary_with_arms.png

Thanks, entirely, to this film.

Blade-Runner-2049-billboard

Here’s how the whole thing played out.

geo_ratings_table

The rest of the data, should you want it, is on the live document.

Now, reader, if your country wound up in the red, don’t be too upset. We all have embarrassing moments from our past. Anyway, this is just about your country’s AI shows. Your other movies probably more than make up for this. The main thing is to learn lessons and move forward.

If your country is in the green, don’t get too cocky. You’ve done well, padawan, but this was just a measure of pleasing the audience, not a measure of whether you’re telling the stories we ought to be. And more shows are being made all the time, with everyone still looking to catch up. Do not rest on your laurels.

Note that the countries in the top and bottom spots each produced only one film, so they were each placing all their betting chips on one spot. Blade Runner 2049 did well, putting Hungary on top. Automata did, uh, not so well, leaving Bulgaria in last place. If either country had produced more movies, their averages would probably drift toward the middle.

With that in mind, if you were looking for some country to place your bets on for reliably quality sci-fi, the combination of lots of experience and lots of high quality points us most strongly to the UK.

Deep_Thought.png
Yes, I thought it over quite thoroughly.

And here’s a geoplot. Note that Google Sheets’ conditional formatting has more powerful color-range features than its geoplots, so the colors between the screen shot above and the graphic below won’t agree exactly. But the geoplot winds up being a little more favorable, coloring things near the middle of the pack more green than red. Sorry if the Mercator projection makes any pain feel more painful.

GEO_ratings

And here’s a close up of the top country and bottom country, weirdly, very close to each other on the world stage. Hungarian-Bulgarian relations have seemed to be very warm until this point. Forgive me.

Map of Europe
Romania and Serbia are eyerolling at each other, saying “AWKward.”

So now we have some standings across various criteria. Let’s all be good sports and encourage each other to excellence, especially as we put aside the national borders and turn our Untold AI attentions towards the types of stories we are telling, in the next post.

Untold AI: Tone

When we begin to look at AI stories over time, as we did in the prior post and will continue in this one, one of the basic changes we can track is how the stories seem to want us to feel about AI, or their tone. Are they more positive about AI, more negative, or neutral/balanced?

tone.png

tl;dr:

  1. Generally, sci-fi is slightly more negative than positive about AI.
  2. It started off very negative and has been slowly moving, on average, to slightly negative.
  3. The 1960s were the high point of positive AI.
  4. We tell lots more stories about general AI than super AI.
  5. We tell a lot more stories about robots than disembodied AI.
  6. Cinemaphiles (like readers of this blog) probably think more negatively about robots than the general population.

Now, details

The tone I have assigned to each show is arguable, of course, but I think I’ve covered my butt by having a very coarse scale. I looked at each film and decided on a scale of -2 to 2 how negative or positive it was about AI. Very negative was -2. The Terminator series starts out very negative, because AI is evil and there is nothing to balance it. (It later creeps higher when Ahhnold becomes a “good” robot.) The Transformers series is 0 because the good AI is balanced by the bad AI. Star Trek: The Next Generation gets a 2, or very positive, for the presence of Data, noting that the blip of Lore doesn’t complicate the deliberately crude metric.

Average tone

Given all that, here’s what the average for each year looks like. As of 2017, we are looking slightly askance at screen-sci-fi AI, though not nearly as badly as Fritz Lang did at the beginning, and its reputation has been improving. The trend line (that red line) shows that it’s been steadily increasing over the last 90 years or so. As always, the live chart may have updates.

tone_average
Click any of the images in this post for a full-size image
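Mechanically, the chart above is a group-and-mean per year. A sketch, with tone assignments that are illustrative stand-ins for the survey's actual values:

```python
from collections import defaultdict

# Tone on the post's -2..+2 scale; these assignments are illustrative.
shows = [
    ("Metropolis", 1927, -2),
    ("The Day the Earth Stood Still", 1951, 1),
    ("Forbidden Planet", 1956, 1),
    ("The Terminator", 1984, -2),
    ("Transformers", 2007, 0),
]

# Group tones by year of release.
by_year = defaultdict(list)
for _title, year, tone in shows:
    by_year[year].append(tone)

# Average tone for each year that has at least one show.
yearly_average = {y: sum(v) / len(v) for y, v in sorted(by_year.items())}
print(yearly_average)
```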

Generally, we can see that things started off very negatively because of Metropolis and Der Herr der Welt. Then those high points in the 1950s were because of the robots in The Day the Earth Stood Still, Forbidden Planet, and The Invisible Boy. Then from 1960–1980 was a period of neutral-to-bad. The 1980s introduced a period of “it’s complicated,” with things trending towards balanced or neutral.
What this points out is that there has been a bit of AI dialog going on across the decades that goes something like this.

tone_conversation.png

Which, frankly, might be a fine summary of the general debate around AI and robots. Genevieve Bell, Professor, Engineering & Computer Science, Australian National University, has noted that futurism tends to skew polemic: i.e. either utopian or dystopian, until a technology actually arrives in the world, after which it’s just regarded as complicated and mundane.

We should always keep in mind that content in cinema is subject to cinegenics, that is, we are likely to find more of what plays well in cinema in cinema, and less, if anything, of what does not play well. AI and robots are an “easy” villain (like space aliens) to include in sci-fi because you’re not condemning any particular nation-state or ideology. Cylons vs. Communists, for example. AI can just be pure evil, wicked and guiltless to hate for the duration of a show. And for most of the prior century, they were. Nowadays we see that slant as ham-handed and unsophisticated. I would certainly expect the aggregate results to skew more negative for this reason.

demonseed.jpg
Demon Seed starts evil and stays evil. Moloch!

Aggregate tone

In addition to those four “eras” of AI (Moloch, Robby, Problems, It’s Complicated), we can look at how the aggregate average of all shows has changed over time. So, for each year, the chart shows the average of all shows up to that point. There is a live view with absolutely up-to-date information, but I’ve combined it with the shows-per-year chart in the graphic below.


We see it started out negative and careened positive in the 1960s (thanks to the robot triple play mentioned above), but has since been steadying out (as you’d expect of any aggregate measure as more data is added). It’s interesting that the final average is just slightly negative. Suspicion on our part, perhaps? That said, I am not enough of a data nerd to know why the trendline is peeking up right above the 0 line there, which seems to imply it’s actually slightly positive; I trust the averaging formula (which I wrote) and just can’t speak to what algorithm drives the trendline. Take it as you will.

Warning: Cinemaphiles (you) have a different exposure

Then I wondered what kind of a difference it might make if an audience member based their opinion solely on shows that they see in cinema or on first release on TV. Reports from the MPAA, BFI, and Screen Australia show that much of the English-speaking world sees the most movies between 14 and 49 years of age. (I presume it skews later for television viewing, but don’t have data.) So I re-ran the numbers looking for the difference between a cinemaphile, who would have seen all the shows to form an opinion about AI, and “genpop,” who only thinks about the last 35 years.
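The two exposures can be modeled as a cumulative average (the cinemaphile, who has seen everything) versus a trailing 35-year window (genpop). A sketch with toy tone data, not the survey's:

```python
def averages(tones_by_year, window=35):
    """Per year: (cumulative average, trailing-window average).
    Cumulative = cinemaphile who has seen it all;
    window = genpop viewer who only saw the last 35 years."""
    years = sorted(tones_by_year)
    out = {}
    for y in years:
        everything = [t for yr in years if yr <= y for t in tones_by_year[yr]]
        recent = [t for yr in years if y - window < yr <= y for t in tones_by_year[yr]]
        out[y] = (sum(everything) / len(everything), sum(recent) / len(recent))
    return out

# Toy data: one very negative early film, then milder fare, then a dark year.
tones = {1927: [-2], 1956: [1], 1973: [-1], 2003: [-2]}
for year, (cinemaphile, genpop) in averages(tones).items():
    print(year, round(cinemaphile, 2), round(genpop, 2))
```

Even in this toy data, the genpop line runs more positive once Metropolis falls out of the window, then darker when the recent slate skews negative, which is the same crossover described below.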

Screen Shot 2018-04-17 at 9.37.08 PM

Of course there’s no difference until we get 35 years past Metropolis, and even then we need the averages to diverge. That happens after 1973 (the year Westworld came out). Then for 30 years, the genpop, who hadn’t seen Metropolis, had a more positive exposure than cinemaphiles. But come the scary AIs of 2003 (the year The Matrix Reloaded, Terminator 3: Rise of the Machines, and The Matrix Revolutions came out) and suddenly the genpop’s exposure is darker than the cinemaphiles’, who can still remember the era of Robby. The diff is honestly never that big, and nearly identical in 2017, but it’s interesting to note that, yes, if you only consider the things that debuted recently, your opinion is likely to differ from someone with a more holistic view of speculative examples.

But of course modern audiences aren’t beholden to just what studios and television executives have decided to show on screens recently. Nowadays, on-demand services mean you can watch almost anything at any time. Add to that binge-watching-encouragement features like auto-play and if-you-liked-X-you’ll-like-Y recommender algorithms, and it’s much more likely that the modern watching audience’s exposure to these shows is drifting closer to the cinemaphile’s than to genpop’s.

A final breakdown of interest in the tone data compares the aggregates for the different types of AI: the categories of AI and the embodiment of AI. By categories, I specifically mean the Narrow, General, and Super AI categories. (Read up on them in the first post of the series if you need to.) What does screen sci-fi like to talk about? Well, it’s general AI. AI that is like us, and sci-fi has preferred those by a long shot.

categories_pie.png

That makes sense for a couple of reasons. General AI is easy to think about and easy to write for. It’s just another human with one or two key differences. (Very capable in some ways, inhuman in others.)

In contrast, Super AI is really hard to write for. If it’s definitionally orders of magnitude smarter than us, what’s the plot? It can outthink us at every step. To get around this, sometimes the Super AIs aren’t actually that smart (Skynet), and sometimes they are brand new, still working out a few weaknesses that humans can exploit (Colossus: The Forbin Project and Person of Interest). And a world with a benevolent Super AI may not even be interesting. Everything just…works. (This was the end result of Asimov’s I, Robot series of stories, if I remember correctly, but that did not make it to the screen.)

Lastly, Narrow AI is harder to write for, partly because, narratively, it may not be worth the cost-to-explain versus usefulness-to-plot. It’s also harder to identify (you really have to pay attention to the background and fuss over definitions), and may be underrepresented in the dataset compared to what’s actually in the shows. But for the ultimate question that’s driving this series, narrow AI is nearly immaterial. We don’t have to speculate about what to do in advance of narrow AI in speculative fiction, because it’s already here. It’s not speculative.

Embodiment: Am I robot or not?

The next breakdown is by embodiment: Is the show’s AI in a self-contained, mobile form, i.e., a robot? Or is it housed in less anthropomorphic and zoomorphic ways, like in a giant computer with interfaces on the wall (Alphy in Barbarella)? Or scattered in unknown holes of the internet (the Machine in Person of Interest)? Or a cluster of stars glowing in the starscape (in Futurama)? Given that AGI is the most represented category of AI, it should be no surprise that robots account for roughly 84% and virtual AIs 42%, with a 16% overlap of shows featuring both.

embodiment_pie.png

Tone Differences by Type

So knowing these breakdowns, let’s look back at tone over time and see if anything meaningful comes from looking at these subtypes in the data. Below you’ll see a chart with those trends broken down. And I must admit, I’m a bit stumped by the results.

tone_by_type.png

To explain: There is one aggregate line and four other lines indicating types of AI in this chart. The blue line is the aggregate, the same shape we see in the chart above but it’s represented as just a line in this chart, with no fill. The red line is Artificial Super Intelligence and the orange line is Artificial General Intelligence. Weirdly, though they started out differently, they are neck and neck nowadays, skewing negative.

The green line shows embodied AI and the purple shows more virtual AI. They, too, are neck and neck, just above balanced or neutral.

So while the tone data has all been interesting, I can’t quite “read” this. My processing might be off—though I don’t think so. If it’s right, what does it mean to feel neutral about robots and virtual AI, and slightly negative about ASI and AGI? There isn’t enough ANI to skew it invisibly. Anyway, any help in reading this data or hypothesizing from readers would be lovely.

Next up: I’m going to do some geoplotting and raise your AI national pride hackles. 🙂