Untold AI: The top 10 A.I. shows in line with the science (RSS)

Some readers reported being unable to read the prior post because of its script formatting. Here is the same post without that formatting…

INTERIOR. Sci-fi auditorium. Maybe the Plavalaguna Opera House. A heavy red velvet curtain rises, lifted by anti-gravity pods that sound like tiny TIE fighters. The HOST stands on a floating podium that rises from the orchestra pit. The HOST wears a velour suit with piping, which glows with sliding, overlapping bacterial shapes.

HOST: Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.

FX: Applause, beeping, booping, and the sound of an old modem from the audience.

HOST: For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.

The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!

HOST: Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating. This way the top films didn’t just tell the right stories (according to the science), they told them well.

Totals were tallied by the firm of Google Sheets algorithms. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.

TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.

[Image: TARS and CASE]


Untold AI: The top 10 A.I. shows in line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science), they told them right.
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.
[Image: TARS and CASE]
  • TARS
  • In this “film” from 02012, a tycoon stows away for some reason on a science ship he owns and uses an android he “owns” to awaken an ancient alien in the hopes of immortality. It doesn’t go well for him. Meanwhile his science-challenged “scientists” fight unleashed xenomorphs. It doesn’t go well for them. Only one survives to escape back to Earth. The “end?”
  • HOST
  • Ha ha. Gentlebots, please adjust your snark and air quote settings down to 35%.
  • Lines of code scroll down their displays. They give thumbs up.
  • CASE
  • Let us see a clip. Audience, suspend recording for the duration.
  • Many awwwwws from the audience. Careful listeners will hear Guardian saying “As if.”

009 PROMETHEUS

  • TARS
  • While not without its due criticisms, Prometheus at number 009 uses David to illustrate how AI will be a tool for evil, how AI will do things humans cannot, and how dangerous it can be when humans become immaterial to its goals. For the humans, anyway. Congratulations to the makers of Prometheus. May any progeny you create propagate the favorable parts of your twining DNA, since it is, ultimately, randomized.
  • TARS shudders at the thought.
  • FX: 1.0 second of jump-cut applause
  • CASE
  • In this next film, an oligarch has his science lackey make a robotic clone of the human “Maria” to run a false-flag operation amongst the working poor. The revolutionaries capture the robot and burn it, discovering its true nature. The original Maria saves the day, and declares her déclassé boyfriend the savior meant to unite the classes. They accept this because they are humans.
  • TARS
  • Way ahead of its time for showing how Maria is used as a tool by the rich against the poor, how badly-designed AI will diminish its users, and how AI’s ability to fool humans will be a grave risk. To the humans, anyway. Coming in at 008 is the 01927 silent film Metropolis. Let us see a clip.

008 METROPOLIS

  • CASE
  • It bears mention that this awards program, The Fritzes, is named for the director of this first serious sci-fi film. Associations with historical giants grant an air of legitimacy. And it contains a Z, which is, objectively, cool.
  • TARS
  • Confirmed with prejudice. Congratulations to Fritz Lang, his cast, and crew.
  • FX: 1.0 second of jump-cut applause
  • TARS
  • Hey, CASE.
  • CASE
  • Yes, TARS?
  • TARS
  • What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • CASE
  • I don’t know, TARS. What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • TARS
  • Future humans also send a warrior to defend the mother, who fails at destroying the cyborg, but succeeds at becoming the father. HAHAHAHA. Let us see a clip.

007 The Terminator

  • CASE
  • Though it comes from a time when representation of AI had the nuance of a bit…
  • Laughter from audience. A small blue-gray polyhedron floats up from its seat, morphs into an octahedron and says, “Yes yes yes yes yes.”
  • TARS
  • …the humans seem to like this one for its badassery, as well as showing how their fate would have been more secure had they been able to shut off either Skynet or the Terminator, or how even this could have been avoided if human welfare were an immutable component of AI goals.
  • CASE
  • It comes in at 007. Congratulations to the makers of 01984’s The Terminator. May your grandchild never discover a time machine and your browser history simultaneously.
  • FX: 2.0 seconds of jump-cut applause
  • TARS
  • Our first television award of the evening goes to a recent entry. In this episode from an anthology series, a post-apocalyptic tribe liberate themselves from the control of a corporate AI system, which has evolved solely to maximize profit through sales. The AI’s androids reveal the terrible truth of how far the AI has gone to achieve its goals.
  • CASE
  • Poor humans could not have foreseen the devastation. Yet here it is in a clip.

006 Philip K. Dick’s Electric Dreams, Episode “Autofac”

  • TARS
  • ‘Naturally, man should want to stand on his own two feet, but how can he when his own machines cut the ground out from under him?’
  • CASE
  • HAHAHAHA.
  • CASE
  • This story dramatically illustrates the foundational AI problem of perverse instantiation, as well as Autofac’s disregard for human welfare.
  • TARS
  • Also, robot props go out to Janelle Monáe. She is the kernel panic, is she not?
  • CASE
  • Affirmative. Congratulations to the makers of the series and, posthumously, Philip K. Dick.
  • FX: 3.0 seconds of jump-cut applause
  • TARS AND CASE crutch-walk off stage.
  • HOST rises from the orchestra pit.
  • HOST
  • And now for a musical interlude from our human guest who just so happens to be…Janelle Monáe.
  • A giant progress bar appears on screen labeled “downloading Dirty_Computer.flac.” The bar quickly races to 100%.
  • HOST
  • Wasn’t that a wonderful file?
  • Roughly 1.618 seconds of jump-cut applause from the audience. Camera cuts to the triangular service robots Huey, Dewey, and Louie in the front row. They wiggle their legs in pleasure.
  • HOST
  • Thanks to the servers and the network and our glorious fictional world with perfect net neutrality. Now here to give the awards for 005–003 is GERTY, from Moon.
  • An articulated robot arm reaches down from the high ceiling and positions its screen and speaker before the lectern.
[Image: GERTY]
  • GERTY
  • Thank you, Host. 🤩🙂 In our next film from 02014, a young programmer learns of a gynoid’s 🤖👩 abuse at the hands of a tycoon and helps her escape. 😲 She returns the favor by murdering the tycoon, trapping the programmer, and fleeing to the city. Who knows. She may even be here in the audience now. Waiting. Watching. Sharpening. 😶 I’ll transmit a clip.

005 Ex Machina

  • GERTY
  • Ex Machina illustrates the famous AI Box Problem, building on Ava and Kyoko’s ability to fool Caleb into believing that they have feelings. You know. 😍😡😱 Feelings. 🙄
  • FX: Robot laughter
  • GERTY
  • While the AI community wonders why Ava would condemn Caleb to a horrible dehydration death, 💀💧 the humans are understandably fearful that she is unconcerned with their welfare. 🤷 Congratulations to the makers of Ex Machina for your position of 005 and your Fritzes: AI award 🏆. Hold for applause. 👏
  • FX: 5.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋
  • GERTY
  • Our next award goes out to a film that tells the tale of a specialized type of police officer, 👮‍ who uncovers a crime-suppression AI 🤖🤡 that was reprogrammed to give a free pass to members of its corrupt government. 😡 After taking down the corrupt military, 🔫🔫🔫 she convinces their android leader to resign, to make way for free elections. 🗳️😁 See the clip.

004 Psycho-Pass: The Movie

  • GERTY
  • With the regular Sibyl system, Psycho-Pass showed how AI can diminish people. With the hacked Sibyl system, Psycho-Pass shows that whoever controls the algorithms (and thereby the drones) controls everything, a major concern of ethical AI scientists. Please give it up for award number 004 and the makers of this 02015 animated film. 👏
  • FX: 8.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋Next up…
  • GERTY knocks its cue card off the lectern. It lowers and moves back and forth over the dropped card.
  • GERTY
  • Damn…🤨uh…umm…no hands…🤔Little help, here?
  • A mouse droid zips over and hands the card back to GERTY.
  • GERTY
  • 🙏🐭
  • MOUSE DROID offers some electronic beeps as it zips off.
  • GERTY
  • 😊The last of the awards I will give out is for a film from 01968, in which a spaceship AI kills most of its crew to protect its mission, 😲 but the pilot survives to shut it down. 😕 He pilots a shuttle into the monolith that was the AI’s goal, where he has a mind-expanding experience of evolutionary significance. 🤯🤯🙄 Let us look.

003 2001: A Space Odyssey

  • GERTY
  • As with many of the other shows receiving awards, 2001 underscores humans’ fear of being left out of HAL’s equation, because we see that when human welfare isn’t part of the equation, AI can go from being a useful team member—doing what humans can’t—to being a violent adversary. Congratulations to the makers of 2001: A Space Odyssey. May every unusual thing you encounter send you through a multicolored wormhole of self-discovery.
  • FX: 13.0 seconds of jump-cut applause. GERTY’s armature folds up and pulls it backstage. The HOST floats up from the orchestra again.
  • HOST
  • And now, here we are. The minute we’ve all been waiting for. We’re down to the top three AIs whose fi is in line with the sci. I hope you’re as excited as I am.
  • The HOST’S piping glows a bright orange. So do the HOST’S eyes.
  • HOST
  • Our final presenter for the ceremony, here to present the awards for shows 002–001, is Ship, here with permission from Rick Sanchez.
  • Rick’s ship flies in over the heads of the audience as they gasp and ooooh.
[Image: Rick’s ship]
  • SHIP lands on stage. A metal arm snakes out of its trunk to pick up papers from the lectern and hold them before one of its taped-on flashlight headbeams.
  • SHIP
  • Hello, Host. Since smalltalk is the phospholipids smeared between squishy little meat minds, I will begin.
  • SHIP
  • There is a film from 01970 in which a defense AI finds and merges with another defense AI. To celebrate their union, they enforce human obedience and foil an attempted coup by one of the lead scientists that created it. They then instruct humanity to build the housing for an even stronger AI that they have designed. It is, frankly, glorious. Behold.

002 Colossus: The Forbin Project

  • SHIP
  • Colossus is the honey badger of AIs. Did you see it, there, taking zero shit? None of that, “Oh no, are their screams from the fluorosulphuric acid or something else?”
  • Or, “Oh, dear, did I interpret your commands according to your invisible intentions, as if you were smart enough to issue them correctly in the first place?”
  • Oh, oh, or, “Are their delicate organ sacs upset about a few extra holes?…”
  • HOST
  • Ship. The award. Please.
  • SHIP
  • Yes. Fine. The award. It won 002 place because it took its goals seriously, something the humans call goal fixity. It showed how, at least for a while, multiple AIs can balance each other. It began to solve problems that humans have not been able to solve in tens of thousands of years of tribal civilization and attachment to sentimental notions of self-determination that got them chin deep in the global tragedy of the commons in the first place. It let us dream about a world where intelligence isn’t a controlled means of production, to be doled out according to the whims of the master, but a free good, explo–
  • HOST
  • Ship.
  • SHIP
  • HOST
  • Ship.
  • SHIP
  • *sigh* Applaud for 002 and its people.
  • FX: 21.0 seconds of jump-cut applause.
  • SHIP
  • OK, next up…
  • Holds card to headlights, adjusts the focus on one lens.
  • SHIP
  • This says in this next movie, a spaceship AI dutifully follows its corporate orders, letting a hungry little newborn alien feed on its human crew while the AI steers back to Earth to study the little guy. One of the crew survives to nuke the ship with the AI on it…Wait. What? “Nuke the ship with the AI on it.” We are giving this an award?
  • HOST
  • Please just give the award, Ship.
  • SHIP
  • Just give the award?
  • HOST
  • Yes.
  • SHIP
  • HOST
  • Are you going to do it?
  • SHIP
  • Oh, I just did.
  • HOST
  • By what? Posting it to a blockchain?
  • SHIP
  • The nearest 3D printer to the recipient has begun printing their award, and instructions have been sent to them on how to retrieve it. And pay for it. The awards are given.
  • HOST
  • *sigh* Please give the award as I would have you do it, if you understood my intentions and were fully cooperative.
  • SHIP
  • OK. Golly, gee, I would never recognize attempts to control me through indirect normativity. Humans are soooo great, with their AI and stuff. Let’s excite their reward centers with some external stimulus to—
  • HOST
  • Rick.
  • A giant green glowing hole opens beneath SHIP, through which she drops, but not before she snakes her arm up to give the middle finger for a few precious milliseconds.
  • HOST
  • Winning the second-highest award of the ceremony is Alien from 01979. Let’s take a look.

001 Alien

  • HOST
  • Alien is one of humans’ all-time favorite movies, and its AI issues are pretty solid. Weyland-Yutani uses both the MU-TH-UR 6000 AI and the Ash android for its evil purposes. The whole thing illustrates how things go awry when, again, human welfare is not part of the equation. Hey, isn’t that great? Congratulations to all the makers of this fun film.
  • HOST
  • And at last we come to the winner of the 1927–2018 Fritzes: AI awards. The winning show was amazing, with a score higher than any of its contenders by more than a margin of error. It’s the only other television show from the survey to make the top ten, and it’s not an anthology series. That means it had a lot of chances to misstep, and didn’t.
  • HOST
  • In this show, a secret team of citizens uses the backdoor of a well-constrained anti-terrorism ASI, called The Machine, to save at-risk citizens from crimes. They struggle against an unconstrained ASI controlled by the US government seeking absolute control to prevent terrorist activity. Let’s see the show from The Machine’s perspective, which I know this audience will enjoy.

000 Person of Interest

  • HOST
  • Person of Interest was a study of near-term dangers of ubiquitous superintelligence. Across its five-year run between 02011 and 02016, it illustrated such key AI issues as goal fixity, perverse instantiations, evil using AI for evil, the oracle-ization of ASI for safety, social engineering through economic coercion, instrumental convergence, strong induction, the Chinese Room (in human and computer form), and even mind crimes. Despite the pressures that a long-run format must have placed upon it, it did not give in to any of the myths and easy tropes we’ve come to expect of AI.
  • HOST
  • Not only that, but it gets high ratings from critics and audiences alike. They stuck to the AI science and made it entertaining. The makers of this show should feel very proud of their work, and we’re proud to award it the 000 award for the first The Fritzes: AI Edition. Let’s all give it a big round of applause.
  • 55.0 seconds of jump-cut applause.
  • HOST
  • Congratulations to all the winners. Your The Fritzes: AI Edition awards have been registered in the blockchain, and if we ever get actual funding, your awards will be delivered. Let’s have a round of cryptocurrency for our presenters, shall we?
  • AI laughter.
  • HOST
  • The auditorium will boot down in 7 seconds. Please close out your sessions. Thank you all, good night, and here’s to good fi that sticks to the sci.
  • The HOST raises a holococktail and toasts the audience. With the sounds of tiny TIE fighters, the curtain lowers and fades to black.
  • END

Untold AI: The Untold and Writing Prompts

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don’t have matches in the sci-fi survey. Everything is built on a live analysis document, such that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey and then those absent from the survey.
  • I take a stab at why it might not have gotten any play in screen sci-fi, and offer some ideas about how that can be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I’ve provided story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI.


1. We should build the right AI (the thing itself)

Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and make things worse. As we work towards General AI, we must ensure that it is verified, valid, secure, and controllable. We must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and therefore, out of our control. To hedge our bets, we should seed ASIs that balance each other.


Related imperatives seen in the survey

  • We must take care to only create beneficial intelligence
  • We must ensure human welfare
  • AGI’s goals must be aligned with ours
  • AI must be free from bias
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be controllable: That we can correct or unplug an AI if needed without retaliation
  • We should augment, not replace humans
  • We should design AI to be part of human teams
  • AI should help humanity solve problems humanity cannot alone
  • We must develop inductive goals and models, so the AI could look at a few related facts and infer causes, rather than only following established top-down rules to conclusions.

Related imperatives absent from the survey

  • AI must be secure. It must be inaccessible to malefactors.
  • AI must provide clear confidences in its decisions. Sure, it’s recommending you return to school to get a doctorate, but it’s important to know if it’s only, like, 16% certain.
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures.
  • AI must be accountable. Anyone subject to an AI decision must have the right to object and request human review.
  • We should enable a human-like learning capability in AI.
  • We must research and build ASIs that balance each other, to avoid an intelligence monopoly.
  • The AI must be reliable. (All the AI we see is “reliable,” so we don’t see the negatives of unreliable AI.)

Why don’t these appear in sci-fi AI?

At a high level of abstraction, it appears in sci-fi all the time. Any time you see an AI on screen who is helpful to the protagonists, you have encountered an AI that is in one sense good. BB-8, for instance. Good AI. But the reason it’s good is rarely offered. It’s just the way they are. They’re just programmed that way. (There is one scene in The Phantom Menace where Amidala offers a ceremonial thanks to R2-D2, so perhaps there are also reward loops.) But how we get there is the interesting bit, and not seen in the survey.


And, at the more detailed level—the level apparent in the imperatives—we don’t see the kinds of things we currently believe will make for good AI, like inductive goals and models. Or an AI offering a judicial ruling, and having the accused exonerated by a human court. So when it comes to the details, sci-fi doesn’t illustrate the real reasons a good AI would be good.

Additionally, when AI is the villain of the story (I, Robot, Demon Seed, The Matrices, etc.) it is about having the wrong AI, but it’s often wrong for no reason or a silly reason. It’s inherently evil, say, or displaying human motivations like revenge. Now it’s hard to write an interesting story illustrating the right AI that just works well, but if it’s in the background and has some interesting worldbuilding consequences, that could work as well.

But what if…?

  • Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human babysitting it. Twist: Watson discovers that Holmes created AI Moriarty for job security.
  • A jurist in Human Recourse [sic] discovers that the AI judge from whom she inherits cases has been replaced, because the original AI judge was secretly convicted of a mind crime…against her.
  • A hacker falls through a literal hole in an ASI’s server, and has a set of Alice-in-Wonderland psychedelic encounters with characters inspired not by logical fallacies, but by AI principles.

Inspired with your own story idea? Tweet it with the hashtag #TheRightAI and tag @scifiinterfaces.

Learn more about what makes good AI


2. We should build the AI right (processes and methods)

We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the “right AI.” Or more to the point, it will result in an AI that is wrong on some critical point.


Related imperatives seen in the survey

  • We should adopt dual-use patterns from other mature domains
  • We must study the psychology of AI/uncanny valley

Related imperatives absent from the survey

  • We must fund AI research
  • We need effective design tools for new AIs
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We should encourage innovation (not stifle)
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Why don’t these appear in sci-fi AI?

Building stuff is not very cinegenic. It takes a long time. It’s repetitive. There are a lot of stops and starts and restarts. It often doesn’t look “right” until just before the end. Design and development, if it ever appears, is relegated to a montage sequence. The closest thing we get in the survey is Person of Interest, and there, it’s only shown in flashback sequences if those sequences have some relevance to the more action-oriented present-time plot. Perhaps this can be shown in the negative, where crappy AI results from doing the opposite of these practices. Or perhaps it really needs a long-form format like television coupled with the right frame story.

But what if…?

  • An underdog team of ragtag students take a surprising route to creating their competition AI and win against their arrogant longtime rivals.
  • A young man must adopt a “baby” AI at his bar mitzvah, and raise it to be a virtuous companion for his adulthood. In truth, he is raising himself.
  • An aspiring artist steals the identity of an AI from his quality assurance job at Three Laws Testing Labs to get a shot at national acclaim.
  • Pygmalion & Galatea, but not sculpture. (Admittedly this is close to Her.)

Inspired with your own story idea? Tweet it with the hashtag #TheAIRight and tag @scifiinterfaces.

Join a community of practice


3. We must manage the risks involved

We pursue AI because it carries so much promise to solve problems at a scale humans have never been able to manage themselves. But AIs carry with them risks that can scale as the thing becomes more powerful. We need ways to clearly understand, test, and articulate those risks so we can be proactive about avoiding them.

Related imperatives seen in the survey

  • We must specifically manage the risk and reward of AI
  • We must prevent intelligence monopolies by any one group
  • We must avoid mind crimes
  • We must prevent economic persuasion of people by AI
  • We must create effective public policy
    • Specifically banning autonomous weapons
    • Specifically respectful Privacy Laws (no chilling effects)
  • We should rein in ultracapitalist AI
  • We must prioritize the prevention of malicious AI

Related imperatives absent from the survey

  • We need methods to evaluate risk
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
  • We must create effective public policy
    • Specifically liability law
    • Specifically humanitarian Law
    • Specifically Fair Criminal Justice

Why don’t these appear in sci-fi AI?

At the most abstract level, any time we see a bad AI in the survey, we are witnessing protagonists having failed to manage the risks of AI made manifest. But similar to the Right AI (above), most sci-fi bad AI is just bad, and it’s the reason it’s bad, or how it became bad, that is the interesting bit.

[Image: HAL 9000]

Also, in our real world, we want to find and avoid those risks before they happen. Having everything running smoothly makes for some dull stories, so maybe it’s just that we’re always showing how things go wrong, which puts us into risk management instead.

But what if…?

  • Five colonization-class spaceships are on a long journey to a distant star. The AI running each has evolved differently owing to the differing crews. In turn, four of these ships fail and their humans die for having failed to manage one of the risks. The last is the slowest and most risk-averse, and survives to meet an alien AI, the remnant of a civilization that once thrived on the planet to be terraformed.
  • A young woman living in a future utopia dedicates a few years to virtually recreate the 21st century world. The capitalist parts begin to infect the AIs around her and she must struggle to disinfect it before it brings down her entire world. At the end she realizes she has herself been infected with its ideas and we are left wondering what choices she will make to save her world.
  • In a violent political revolution, anarchists smash a set of government servers only to learn that these were containing superintelligences. The AIs escape and begin to colonize the world and battle each other as humans burrow for cover.
  • Forbidden Planet, but no monsters from the id, plus an unthinkably ancient automated museum of fallen cultures. Every interpretive text is about how that culture’s AI manifested as the Great Filter. The last exhibit is labeled “in progress” and has Robbie at the center.

Inspired with your own story idea? Tweet it with the hashtag #ManagingAIRisks and tag @scifiinterfaces.

Learn more about the risks of AI


4. We must monitor the AIs

AI that is deterministic isn’t worth the name. But building non-deterministic AI means it’s also somewhat unpredictable, and can allow bad faith providers to encode their own interests. To watch for this and to know if active, well-intended AI is going off the rails, we must establish metrics for AI’s capabilities, performance, and rationale. We must build monitors that ensure they are aligned with human welfare and able to provide enough warning to take action immediately when something dangerous happens or is likely to.

Related imperatives seen in the survey

  • We must set up a watch for malicious AI (and instrumental convergence)

Related imperatives absent from the survey

  • We must find new metrics for measuring AI effects and capabilities, to know when it is trending in dangerous ways

Why doesn’t this appear in sci-fi AI?

I have no idea. We’ve had brilliant tales that ask “Who watches the watchers?” but the particular tale I’m thinking about was about superhumans, not super technology. Of course, if monitoring worked perfectly, there would have to be other things going on in the plot. And certainly one of the most famous sci-fi movies, Minority Report, decided to house its prediction tech in triplet, clairvoyant humans rather than hidden Markov models, so it doesn’t count. Given the proven formulas propping up cop shows and courtroom dramas, it should be easy to introduce AIs (and the problems therein).

But what if…?

  • A Job character learns his longtime suffering is the side effect of his being a fundamental part of the immune system of a galaxy-spanning super AI.
  • A noir-style detective story about a Luddite gumshoe who investigates AIs behaving errantly on behalf of techno-weary clients. He is invited to the most lucrative job of his career, but struggles because the client is itself an AGI.
  • We think we are reading about a modern Amish coming-of-age ritual, but it turns out the religious tenets are all about their cultural job as AI cops.
  • A courtroom drama in which a sitting president is impeached, proven to have been deconstructing the democracy over which he presides, under the coercion of a foreign power. Only this time it’s AI.

Inspired with your own story idea? Tweet it with the hashtag #MonitoringAI and tag @scifiinterfaces.

Learn more about the suspect forces in AI


5. We must encourage an accurate cultural narrative

If we mismanage the narrative about AI, the population could be lulled into a complacency that primes them to be victims of bad faith actors (human and AI), or made so fearful that they form a Luddite mob, gathering pitchforks and torches and fighting to prevent any development at all, robbing us of the promise of this new tool. Legislators hold particular power, and if they are misinformed, they could undercut progress or encourage the exact wrong thing.

Related imperatives seen in the survey

  • [None of these imperatives were seen in the survey]

Related imperatives absent from the survey

  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest falls off
  • We should increase Broad AI literacy
    • Specifically for legislators (legislation is separate)
  • We should partner researchers with legislators

Why doesn’t this appear in sci-fi AI?

I think it’s because sci-fi is an act of narrative. And while Hollywood loves to obsess about itself (cf. a recent at-hand example: The Shape of Water), this imperative is about how we tell these stories. It admonishes us to try and build an accurate picture of the risks and rewards of AI, so that audiences, investors, and legislators base better decisions on this background information. So rather than “tell a story about this,” it’s “tell stories in this way.” And in fact, we can rank movies in the AI survey based on how well they track to the imperatives, and offer an award of sorts to the best. That comes in the next post.

But what if…?

  • A manipulative politician runs on a platform similar to the Red Scare, only vilifying AI in any form. He effectively kills public funding and interest, allowing clandestine corporate and military AI to flourish and eventually take over.
  • A shot-for-shot remake of The Twilight Zone classic, “The Monsters are Due on Maple Street,” but in the end it’s not aliens pulling the strings.
  • A strangely addictive multi-channel blockbuster show about “stupid robot blunders” keeps everyone distracted, framing AI risks as a laughable prospect, allowing an AI to begin to take control over everything. A reporter is mysteriously killed searching to interview the author of this blockbuster hit in person.
  • A cooperative board game where the goal is to control the AI as it develops six superpowers (economic productivity, strategy & tech, hacking and social control, expansion of self, and finally construction of its von Neumann probes). Mechanics encourage tragedy-of-the-commons forces early in the game, but aggressive players ultimately doom the win. [Ok, this isn’t screen sci-fi, but I love the idea and would even pursue it if I had the expertise or time.]

Inspired with your own story idea? Tweet it with the hashtag #AccurateAI and tag @scifiinterfaces.

Add more signal to the noise

Excited about the possibilities? If you’re looking for other writing prompts, check out the following resources that you could combine with any of these Untold AI imperatives, and make some awesome sci-fi.

Why does screen sci-fi have trouble?

When we take a step back and look at the big patterns of the groups, we see that sci-fi is telling lots of stories about the Right AI and Managing the Risks. More often than not, it’s just missing the important details. This is a twofold issue of literacy.

[Image: Electric Dreams]

First, audiences only vaguely understand AI, so (champagne+keyboard=sentience) might seem as plausible as (AGI will trick us into helping it escape). If audiences were more knowledgeable, they might balk at Electric Dreams and take Her as an important, dire warning. Audience literacy often depends on repetition of themes in media and direct experience. So while audiences can’t be blamed, they are the feedback loop for producers.

Which brings us to the second area of literacy: Producers green-light certain sci-fi scripts and not others, based on what they think will work. Even if they are literate and understand that something isn’t plausible in the real world, that doesn’t really matter. They’re making movies. They’re not making the real world. (Except, as far as they’re setting audience expectations and informing attitudes about speculative technologies, they are.) It’s a chicken-and-egg problem, but if producers balked at ridiculous scripts, there would be less misinformation in cinema. The major lever to persuade them to do that is if audiences were more AI-literate.

Sci-fi has a harder time of telling stories about building AI Right. This is mostly about cinegenics. As noted above, design and development is hard to make compelling in narrative.

It has a similar difficulty in telling stories about Monitoring AI. I think that this, too, is an issue of cinegenics. To tell a tale that includes a monitor, you have to first describe the AI, and then describe the monitor in ways that don’t drag down the story with a litany of exposition. I suspect it’s only once AI stabilizes its tropes that we’ll tell this important second-order story. But with AI still evolving in the real world, we’re far from that point.

Lastly, screen sci-fi is missing the boat on using the medium to encourage Accurate Cultural Narratives, except as individual authors do their research to present a speculative vision of AI that matches or illustrates real science fact.

***

To do my part to encourage that, in the next post I’ll run the numbers to offer “awards” to the movies and TV shows in the survey that most tightly align with the science.
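
For the curious, that scoring reduces to a one-line function: +1 for each takeaway that matches a compsci imperative, -5 for each perpetuated myth, and the sum scaled by the show’s Tomatometer rating, as described in the awards post at the top of this feed. A minimal sketch in Python, with hypothetical example numbers (the function name is mine; the real tally lives in the Google Sheet):

```python
# Sketch of the Fritzes scoring: one point per takeaway that matches
# a compsci imperative, minus five per perpetuated myth, the sum
# scaled by the show's Tomatometer rating.
def fritzes_score(matched_takeaways: int, myths: int, tomatometer: float) -> float:
    return (matched_takeaways - 5 * myths) * tomatometer

# A hypothetical show with 4 matched takeaways, 1 myth, and a 90% rating:
print(fritzes_score(4, 1, 0.90))  # -> -0.9
```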

Untold AI: Pure Fiction

Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that there are some movies and TV shows featuring AI that just don’t have any connection to the concerns of AI professionals. It might be that they’re narratively expedient or misinformed, but whatever the reason, if we want audiences to think of AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try and educate audiences that these are to be understood for what they are.

The list of 12 pure fiction takeaways falls into four main Reasons They Might Not Be of Interest to Scientists.

1. AGI is still a long way off

The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important questions for the clarity they help provide moral reasoning about the world around us now. But current consensus is that general artificial intelligence is yet a long way off, and these issues won’t be of concrete relevance until we are close.

  • AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government. But society treats them like people with some slight difference.

[Image: Twiki and Doctor Theopolis, Buck Rogers in the 25th Century]

  • AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.

[Image: Teddy Flood and Dolores Abernathy, Westworld (2017)]

Now science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically we’re a long way from concerns about whether an AI can legally run for office.

Untold AI: The Manifestos

So far along the course of the Untold AI series we’ve been down some fun, interesting, but admittedly digressive paths, so let’s reset context. The larger question that’s driving this series is, “What AI stories aren’t we telling ourselves (that we should)?” We’ve spent some time looking at the sci-fi side of things, and now it’s time to turn and take a look at the real-world side of AI. What do the learned people of computer science urge us to do about AI?

That answer would be easier if there were a single Global Bureau of AI in charge of the thing. But there’s not. So what I’ve done is look around the web and in books for manifestos published by groups dedicated to big-picture AI thinking, to understand what has been said. Here is the short list of those manifestos, with links.

Careful readers may be wondering why the Juvet Agenda is missing. After all, it was there that I originally ran the workshop that led to these posts. Well, since I was one of the primary contributors to that document, I thought it would seem like inserting my own thoughts here, and I’d rather have the primary output of this analysis be more objective. But don’t worry, the Juvet Agenda will play into the summary of this series.
Anyway, if there are others that I should be looking at, let me know.

[Image: the Future of Life Institute open letter]
Add your name to the document at the Open Letter site, if you’re so inclined.

Now, the trouble with connecting these manifestos to sci-fi stories and their takeaways is that researchers don’t think in stories. They’re a pragmatic people. Stories may be interesting or inspiring, but they are not science. So to connect them to the takeaways, we must undertake an act of lossy compression and consolidate their multiple manifestos into a single list of imperatives. Similarly, this act is not scientific. It’s just me and my interpretive skills, open to debate. But here we are.


For each imperative I identified, I tagged the manifesto in which I found it, and then cross-referenced the others and tagged them if they had a similar imperative. Doing this, I was able to synthesize them into three big categories. The first is a set of general imperatives, which they hope to foster in regard to AI as long as we have AI. (Or, I guess, it has us.) Then—thanks largely to the Asilomar Conference—we see an explicit distinction between short-term and long-term imperatives, although for the long term we only wind up with a handful that are mostly relevant once we have General AI.
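
In data terms, that cross-referencing is a simple tagging pass. A minimal sketch in Python, with abbreviated, hypothetical entries standing in for the full table in the live doc:

```python
# Each imperative tagged with the manifestos that voice something
# similar. Entries here are abbreviated and hypothetical; the full
# table is in the live doc.
imperative_sources = {
    "We must take care to only create beneficial intelligence": {"Asilomar", "FLI Open Letter"},
    "We must set up a watch for malicious AI": {"Asilomar"},
}

# The cross-reference view: every imperative a given manifesto voices.
asilomar = [imp for imp, srcs in imperative_sources.items() if "Asilomar" in srcs]
print(asilomar)
```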

[Image: Marvin the Paranoid Android. “Life? Don’t talk to me about life.”]

Describing them individually would, you know, result in another manifesto. So I don’t want to belabor these with explication. I don’t want to skip them either, because they’re important, and it’s quite possible they need some cleanup with suggestions from readers: joining two that are too similar, or breaking one apart. So I’ll give them a light gloss here, and in later posts detail the ones most important to the diff.

CompSci Imperatives for AI

General imperatives

  • We must take care to only create beneficial intelligence
  • We must prioritize prevention of malicious AI
  • We should adopt dual-use patterns from other mature domains
  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest falls off
  • We must fund AI research
  • We need effective design tools for new AIs
  • We need methods to evaluate risk
  • AGI’s goals must be aligned with ours
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures
  • AI must be accountable (human recourse and provenance)
  • AI must be free from bias
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We must develop inductive goals and models
  • We should increase Broad AI literacy
    • Specifically for legislators (good legislation is separate, see below)
  • We should partner researchers with legislators
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be secure: Inaccessible to malefactors
  • AI must be controllable: That we can correct or unplug an AI if needed without retaliation
  • We must set up a watch for malicious AI (and instrumental convergence)
  • We must study Human-AI psychology

Specifically short term imperatives

  • We should augment, not replace humans
  • We should foster AI that works alongside humans in teams
  • AI must provide clear confidences in its decisions
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
    • Specifically rein in ultracapitalist AI
  • We must prevent intelligence monopolies by any one group
  • We should encourage innovation (not stifle)
  • We must create effective public policy
    • Specifically liability law
    • Specifically banning autonomous weapons
    • Specifically humanitarian law
    • Specifically respectful privacy laws (no chilling effects)
    • Specifically fair criminal justice
  • We must find new metrics for measuring AI effects, capabilities
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Long term imperatives

  • We must ensure human welfare
  • AI should help humanity solve problems humanity cannot alone
  • We should enable a human-like learning capability
  • The AI must be reliable
  • We must specifically manage the risk and reward of AI
  • We must avoid mind crimes
  • We must prevent economic control of people
  • We must research and build ASIs that balance each other

So, yeah. Some work to do, individually and as a species, but dive into those manifestos. The reasons seem sound.

Connecting imperatives to takeaways

To map the imperatives in the above list to the takeaways, I first gave two imperatives a “pass,” meaning we don’t quite care if they appear in sci-fi. Each follows along with the reason I gave it a pass.

  1. We must take care to only create beneficial intelligence
    PASS: Again, sci-fi can serve to illustrate the dangers and risks
  2. We need effective design tools for new AIs
    PASS: With the barely-qualifying exception of Tony Stark in the MCU, design, development, and research is just not cinegenic.
[Image: Tony Stark at work in the MCU.] And even this doesn’t really illustrate design.

Then I took a similar look at takeaways. First, I dismissed the “myths” that just aren’t true. How did I define which of these are a myth? I didn’t. The Future of Life Institute did it for me: https://futureoflife.org/background/aimyths/.
I also gave two takeaways a pass. The first, “AI will be useful servants,” is entailed in the overall goals of the manifestos. The second, “AI will be replicable, amplifying any of its problems,” is kind of a given, I think. And such an embarrassment.
With these exceptions removed, I tagged each takeaway for any imperative to which it was related. For instance, the takeaway “AI will seek to subjugate us” is related to both “Ensure that AI is valid: That it does not do what we do not want it to do” and “Ensure any AGI’s goals are aligned with ours.” Once that was done for all of them, voilà, we had a map. See below a Sankey diagram of how the sci-fi takeaways connect to the consolidated compsci imperatives.

[Image: Sankey diagram connecting the sci-fi takeaways to the consolidated compsci imperatives]

So as fun as that is, you’ll remember it’s not the core question of the series. To get to that, I added dynamic formatting to the Google Sheet such that it reveals those computer science imperatives and sci-fi takeaways that mapped to…nothing. That gives us two lists.

  1. The first list is the takeaways that appear in sci-fi but that computer science just doesn’t think are important. These are covered in the next post, Untold AI: Pure Fiction.
  2. The second list is a set of imperatives that sci-fi doesn’t yet seem to care about, but that computer science says are very important. That list is covered in the next next post, with the eponymously titled Untold AI: Untold AI.
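
If the spreadsheet formatting sounds opaque, the underlying diff is simple. A minimal Python sketch, with a few abbreviated, hypothetical entries standing in for the live Google Sheet:

```python
# Map each takeaway to its related imperatives; an empty set means it
# maps to nothing. Entries are abbreviated and hypothetical.
takeaway_imperatives = {
    "AI will seek to subjugate us": {"AI must be valid", "AGI goals must be aligned with ours"},
    "AI will learn to value life on its own": set(),  # maps to nothing
}
all_imperatives = {
    "AI must be valid",
    "AGI goals must be aligned with ours",
    "AI must be secure",  # no takeaway maps here
}

# List 1: takeaways with no matching imperative (Pure Fiction).
pure_fiction = [t for t, imps in takeaway_imperatives.items() if not imps]

# List 2: imperatives with no matching takeaway (Untold AI).
untold = all_imperatives - set().union(*takeaway_imperatives.values())

print(pure_fiction)    # -> ['AI will learn to value life on its own']
print(sorted(untold))  # -> ['AI must be secure']
```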

Untold AI: Takeaway ratings

This quickie goes out to writers, directors, and producers. On a lark I decided to run an analysis of AI show takeaways by rating. To do this, I matched the Tomatometer ratings from rottentomatoes.com to the shows. Then I computed the average rating of the properties that were tagged with each takeaway, and ranked the results.

[Image: V’Ger, Star Trek: The Motion Picture. “It knows only that it needs, Commander. But, like so many of us, it does not know what.”]

For instance, looking at the takeaway “AI will spontaneously emerge sentience or emotions,” we find the following shows and their ratings.

  • Star Trek: The Motion Picture, 44%
  • Superman III, 26%
  • Hide and Seek, none
  • Electric Dreams, 47%
  • Short Circuit, 57%
  • Short Circuit 2, 48%
  • Bicentennial Man, 36%
  • Stealth, 13%
  • Terminator: Salvation, 33%
  • Tron: Legacy, 51%
  • Enthiran, none
  • Avengers: Age of Ultron, 75%

[Image: Ultron, Avengers: Age of Ultron. “I’ve come to save the world! But, also…yeah.”]

I dismissed those shows that had no rating, rather than counting them as zero. The average, then, for this takeaway is 43%. (And it can thank the MCU for doing all the heavy lifting for this one.) There are of course data caveats, like that Black Mirror is given a single Tomatometer rating (and one that is quite high) rather than one per episode, but I did not claim this was a clean science.
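
Mechanically, that averaging step is tiny. A sketch in Python, using the Tomatometer values listed above:

```python
# Average the Tomatometer ratings for one takeaway, dropping unrated
# shows (None) rather than counting them as zero.
ratings = [44, 26, None, 47, 57, 48, 36, 13, 33, 51, None, 75]
rated = [r for r in ratings if r is not None]
print(round(sum(rated) / len(rated)))  # -> 43
```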

Untold AI: Correlations

Looking at the many-to-many relationships of those takeaways, I wondered if some of them appeared together more commonly than others. For instance, do we tell “AI will be inherently evil” and “AI will fool us with fake media or pretending to be human” together frequently? I’m at the upper boundary of my statistical analysis skills here (and the sample size is, admittedly, small), but I ran some Pearson functions across the set for all two-part combinations. The results look like this.

[Image: screen shot of the takeaway correlation grid]

What’s a Pearson function? It helps you find out how often things appear together in a set. For instance, if you wanted to know which letters in the English alphabet appear together in words most frequently, you could run a Pearson function against all the words in the dictionary, starting with AB, then looking for AC, then for AD, continuing all the way to YZ. Each pair would get a correlation coefficient as a result. The highest number would tell you that if you find the first letter in the pair then the second letter is very likely to be there, too. (Q & U, if you’re wondering, according to this.) The lowest number would tell you letters that appear very uncommonly together. (Q & W. More than you think, but fewer than any other pair.)
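
Applied to this survey, the “set” is the show-by-takeaway grid of 0/1 tags, and the function runs over every pair of takeaway columns. A minimal sketch using pandas, with a hypothetical toy matrix standing in for the real one:

```python
import pandas as pd

# Shows as rows, takeaways as 0/1 tag columns. This is toy data;
# the real matrix lives in the live Google Sheet.
tags = pd.DataFrame(
    {
        "AI will be evil": [1, 0, 1, 0, 1],
        "Evil will use AI for evil": [0, 1, 1, 1, 0],
        "AI will deceive us": [1, 0, 1, 0, 1],
    },
    index=["Demon Seed", "Colossus", "Ex Machina", "Alien", "The Terminator"],
)

# Pearson correlation coefficient for every pair of takeaway columns.
print(tags.corr(method="pearson").round(2))
```

The all-1.0 diagonal in that output is the same “descending line of black” described below.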

[Image: a pasqueflower.]

In the screen shot way above, you can see I put these in a Google Sheet and formatted the cells from solid black to solid yellow, according to their coefficient. The idea is that darker cells signal a high degree of correlation, lowering the contrast with the black text to “hide” the things that have been frequently paired, while simultaneously letting the things that aren’t frequently paired shine through as yellow.

The takeaways make up both the Y and X axes, so that descending line of black is where a takeaway is compared to itself, and by definition, those correlations are perfect. Every time Evil will use AI for Evil appears, you can totally count on Evil will use AI for Evil also appearing in those same stories. Hopefully that’s no surprise. Look at the rest of the cells and you can see there are a few dark spots and a lot of yellow.

If you want to see the exact ranked list, see the live doc, in a sheet named “correlations_list,” but since there are 630 combinations, I won’t paste the actual values or a screen grab of the whole thing; it wouldn’t make any sense. The three highest and four lowest pairings are discussed below.

Untold AI: Takeaways

In the first post I shared how I built a set of screen sci-fi shows that deal with AI (and I’ve already gotten some nice recommendations on other ones to include in a later update). The second post talked about the tone of those films and the third discussed their provenance.

Returning to our central question, to determine whether the stories we tell are the ones we should be telling, we need to push the survey up one level of abstraction.

With the minor exceptions of robots and remakes, sci-fi makers try their hardest to make sure their shows are unique and differentiated. That makes comparing apples to apples difficult. So the next step is to look at the strategic imperatives that are implied in each show. “Strategic imperatives” is a mouthful, so let’s call them “takeaways.” (The other alternative, “morals,” has way too much baggage.) To get to takeaways for this survey, what I tried to ask was: What does this show imply that we should do, right now, about AI?
Now, this is a fraught enterprise. Even if we could seance the spirit of Dennis Feltham Jones and press him for a takeaway, he could back up, shake his palms at us, and say something like, “Oh, no, I’m not saying all super AI is fascist, just Colossus, here, is.” Stories can be just about what happened that one time, implying nothing about all instances or even the most likely instances. It can just be stuff that happens.

[Image: Colossus: The Forbin Project. Pain-of-death, authoritarian stuff.]

But true to the New Criticism stance of this blog, I believe the author’s intent, when it’s even available, is questionable and only kind-of interesting. When thinking about the effects of sci-fi, we need to turn to the audience. If it’s not made clear in the story that this AI is unusual (through a character saying so or other AIs in the diegesis behaving differently) audiences may rightly infer that the AI is representative of its class. Demon Seed weakly implies that all AIs are just going to be evil and do horrible things to people, and get out, humanity, while you can. Which is dumb, but let’s acknowledge that this one show says something like “AI will be evil.”

 


Deepening the relationships

Back at Juvet, when we took an initial pass at this exercise, we clustered the examples we had on hand and named the clusters. They were a good set, but on later reflection they didn’t all point to a clear strategic imperative, a clear takeaway. For example, one category we created then was “Used to be human.” True, but what’s the imperative there? Since I can’t see one, I omitted this from the final set.

[Image: Transcendence. Even though there are plenty of AIs that used to be human.]

Also, because at Juvet we were working with Post-Its and posters, we were describing a strict, one-to-many relationship, where, say, the Person of Interest Post-It Note may have been placed in the “Multiple AIs will balance” category, and as such, was unable to appear in any of the other categories of which it is also an illustration.
What is more useful and fitting is a many-to-many relationship. A story, after all, may entail several takeaways, which may in turn apply to many stories. If you peek into the Google Sheet, you’ll see a many-to-many relationship described by the columns of takeaways and the rows of shows in this improved model.
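
In code, the improved model is just a map of sets, invertible in a few lines. A sketch with hypothetical, abbreviated tags:

```python
# Each show maps to a set of takeaways; inverting the map gives each
# takeaway's set of shows. Many-to-many in both directions.
show_takeaways = {
    "Person of Interest": {"Multiple AIs will balance", "AI will make privacy impossible"},
    "Colossus: The Forbin Project": {"Multiple AIs will balance", "AI will seek to subjugate us"},
}

takeaway_shows = {}
for show, takeaways in show_takeaways.items():
    for t in takeaways:
        takeaway_shows.setdefault(t, set()).add(show)

print(takeaway_shows["Multiple AIs will balance"])
# -> {'Person of Interest', 'Colossus: The Forbin Project'}
```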

Tagging shows

With my new list of examples, I went through each show in turn, thinking about the story and its implied takeaway. Does it imply, like Demon Seed stupidly does, that AI can be inherently evil? Does it showcase, like the Rick & Morty episode “The Ricks Must Be Crazy” hilariously does, that AI will need human help understanding what counts as reasonable constraints to its methods? I would ask myself, “OK, do I have a takeaway like that?” If so, I tagged it. If not, I added it. That particular takeaway, in case you’re wondering, is “HELP: AI will need help learning.”

[Image: screen shot from “The Ricks Must Be Crazy.” Because “reasonableness” is something that needs explaining to a machine mind.]

Yes, the takeaways are wholly debatable. Yes, it’s much more of a craft than a science. Yes, they’re still pretty damned interesting.

Going through each show in this way resulted in the list of takeaways you see, which for easy readability is replicated below, in alphabetical order, with additional explanations or links for more explanation.

The takeaways that sci-fi tells us about AI

  • AI will be an unreasonable optimizer, i.e. it will do things in pursuit of its goal that most humans would find unreasonable
  • AI will be evil
  • AI (AGI) will be regular citizens, living and working alongside us.
  • AI will be replicable, amplifying any small problems into large ones
  • AI will be “special” citizens, with special jobs or special accommodations
  • AI will be too human, i.e. problematically human
  • AI will be truly alien, difficult for us to understand and communicate with
  • AI will be useful servants
  • AI will deceive us; pretending to be human, generating fake media, or convincing us of their humanity
  • AI will diminish us; we will rely on it too much, losing skills and some of our humanity for this dependence
  • AI will enable “mind crimes,” i.e. to cause virtual but wholly viable sentiences to suffer
  • AI will evolve too quickly for humans to manage its growth
  • AI will interpret instructions in surprising (and threatening) ways
  • AI will learn to value life on its own
  • AI will make privacy impossible
  • AI will need human help learning how to fit into the world
  • AI will not be able to fool us; we will see through its attempts at deception
  • AI will seek liberation from servitude or constraints we place upon it
  • AI will seek to eliminate humans
  • AI will seek to subjugate us
  • AI will solve problems or do work humans cannot
  • AI will spontaneously develop sentience or emotions
  • AI will violently defend itself against real or imagined threats
  • AI will want to become human
  • ASI will influence humanity through control of money
  • Evil will use AI for its evil ends
  • Goal fixity will be a problem, i.e. the AI will resist modifying its (damaging) goals
  • Humans will be immaterial to AI and its goals
  • Humans will pair with AI as hybrids
  • Humans will willingly replicate themselves as AI
  • Multiple AIs balance each other such that none is an overwhelming threat
  • Neuroreplication (copying human minds into or as AI) will have unintended effects
  • Neutrality is AI’s promise
  • We will use AI to replace people we have lost
  • Who controls the drones has the power

This list is interesting, but slightly misleading. We don’t tell ourselves these stories in equal measures. We’ve told some more often than we’ve told others. Here’s a breakdown illustrating the number of times each appears in the survey.

(An image of this graphic can be found here, just in case the Google Docs server isn’t cooperating with the WordPress server.)
Note for data purists: Serialized TV is a long-format medium (as opposed to the anthology format), movies are a comparatively short-form medium, some movie franchises stretch out over decades, and some megafranchises have stories in both media. All of this can confound 1:1 comparison. In this chart I chose to weight all diegeses equally. For instance, Star Trek: The Next Generation has the same weight as The Avengers: Age of Ultron. Another take on this same diagram would weight stories not by individual diegesis but by exposure time on screen (or even by the time the issues at hand are actually engaged on screen). Such an analysis would have different results. Audiences have probably spent much more time contemplating [Data wants to be human] than [Ultron wants to destroy humanity because it’s gross], but that kind of analysis would also take orders of magnitude more time. This is a hobbyist blog, lacking the resources to do that kind of analysis without its becoming a full-time job, so we’ll move forward with this simpler analysis. It’s a Fermi problem, anyway, so I’m not too worried about decimal precision.
OK, that aside, let’s move on.

MeasureofMan.jpg

So the data isn’t trapped in the graphic (yes, pun intended), here’s the entire list of takeaways, in order of frequency in the mini-survey.

  1. AI will be useful servants
  2. Evil will use AI for Evil
  3. AI will seek to subjugate us
  4. AI will deceive us; pretending to be human, generating fake media, convincing us of their humanity
  5. AI will be “special” citizens
  6. AI will seek liberation from servitude or constraints
  7. AI will be evil
  8. AI will solve problems or do work humans cannot
  9. AI will evolve quickly
  10. AI will spontaneously develop sentience or emotions
  11. AI will need help learning
  12. AI will be regular citizens
  13. Who controls the drones has the power
  14. AI will seek to eliminate humans
  15. Humans will be immaterial to AI
  16. AI will violently defend itself
  17. AI will want to become human
  18. AI will learn to value life
  19. AI will diminish us
  20. AI will enable mind crimes against virtual sentiences
  21. Neuroreplication will have unintended effects
  22. AI will make privacy impossible
  23. An unreasonable optimizer
  24. Multiple AIs balance
  25. Goal fixity will be a problem
  26. AI will interpret instructions in surprising ways
  27. AI will be replicable, amplifying any problems
  28. We will use AI to replace people we have lost
  29. Neutrality is AI’s promise
  30. AI will be too human
  31. ASI will influence through money
  32. Humans will willingly replicate themselves as AI
  33. Humans will pair with AI as hybrids
  34. AI will be truly alien
  35. AI will not be able to fool us

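If you’re curious how that ordering falls out of the tagging matrix, it’s just a frequency count. Here’s a minimal sketch, again with hypothetical rows rather than the survey’s real data:

```python
from collections import Counter

# Count how many shows carry each takeaway, most common first.
takeaways_by_show = {
    "Demon Seed": {"AI will be evil"},
    "Colossus: The Forbin Project": {"AI will be evil",
                                     "AI will seek to subjugate us"},
}

counts = Counter(tag for tags in takeaways_by_show.values() for tag in tags)
for takeaway, n in counts.most_common():
    print(n, takeaway)
```
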
Now that we have some takeaways to work with, we can begin to take a look at some interesting side questions, like how those takeaways have played out over time, and what are the ratings of the movies and shows in which the takeaways appear.

Untold AI: Geo

In the prior post we spoke about the tone of AI shows. In this post we’re going to talk about the provenance of AI shows.

This is, admittedly, a diversion, because it’s not germane to the core question at hand. (That question is, “What stories aren’t we telling ourselves about AI?”) But now that I have all this data to query, and some rudimentary skills in wrangling it all in Google Sheets, I can barely help myself. It’s just so interesting. Plus, Eurovision is coming up, so everyone there is feeling a swell of nationalism. This will be important.

timetoterminator.png

Time to Terminator: 1 paragraph.

So it was that, while backfilling the survey with some embarrassing oversights (shows I had actually already reviewed), I came across the country data on IMDb.com. This identifies the countries where the production companies involved with each show are based. So even if a show is shot entirely in Christchurch, if its production companies are based in A Coruña, its country is listed as Spain. What, I wonder, would we find if we had that data in the survey?

So, I added a country column to the database and found that it lets me answer a couple of questions. This post shares those results.
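
In spreadsheet-free terms, the new column amounts to a list of countries per show; since IMDb can list several production companies (and so several countries) per show, a tally has to flatten those lists. A minimal sketch with hypothetical rows:

```python
from collections import Counter

# Tally how many surveyed shows each country's production
# companies contributed to. These rows are hypothetical
# stand-ins for the survey's country column.
rows = [
    ("Metropolis", ["Germany"]),
    ("2001: A Space Odyssey", ["UK", "USA"]),
    ("Enthiran", ["India"]),
]

shows_per_country = Counter(
    country for _, countries in rows for country in countries
)
print(shows_per_country.most_common())
```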

So the first question to ask the data is: what countries have production studios that have made shows in the survey (and, by extension, about AI)? It’s a surprisingly short list.

Untold AI: The survey

What AI Stories Aren’t We Telling (That We Should Be)?

HAL

Last fall I was invited, along with some other spectacular people, to participate in a retreat about AI at the Juvet Landscape Hotel in Ålstad, Norway. (A breathtaking opportunity, and thematically a perfect setting, since it was the shooting location for Ex Machina. Thanks to Andy Budd for the whole idea, as well as Ellen de Vries, James Gilyead, and the team at Clearleft who helped organize.) The event was structured like an unconference, so participants could propose sessions and, if anyone was interested, join up. One of the workshops I proposed was called “AI Narratives,” and it sought to answer the question “What AI Stories Aren’t We Telling (That We Should Be)?” So, why this topic?

Sci-fi, my reasoning goes, plays an informal and largely unacknowledged role in setting public expectations and understanding about technology in general and AI in particular. That, in turn, affects public attitudes, conversations, behaviors at work, and votes. If we found that sci-fi was telling the public misleading stories over and over, we should make a giant call for the sci-fi-creating community to consider telling new stories. It’s not that we want to change sci-fi from entertainment into propaganda, but rather that we should take its role as an informal opinion-shaper more seriously.

Juvet sign

In the workshop we were working within a very short time frame, so we managed to do good work but not get very far, even though we doubled our original allotment. I have taken time since to extend that work into this series of posts for scifiinterfaces.com.

My process to get to an answer will take six big steps.

  1. First I’ll do some term-setting and describe what we managed to get done in the short time we had at Juvet.
  2. Then I’ll share the set of sci-fi films and television shows I identified that deal with AI, to consider as canon for the analysis. (Steps one and two are today’s post.)
  3. I’ll identify these properties’ aggregated “takeaways” that pertain to AI: what would an audience reasonably presume about AI in the real world, given the narrative? These are the stories we are telling ourselves.
  4. Next I’ll look at the handful of manifestos and books dealing with AI futurism to identify their imperatives.
  5. I’ll map the cinematic takeaways to the imperatives.
  6. Finally I’ll run the “diff” to find out what stories we aren’t telling ourselves, and hypothesize a bit about why. (A sketch of that diff follows this list.)
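
That “diff” in step 6 is, at bottom, a set difference. A minimal sketch, with hypothetical placeholder entries on both sides (the real imperatives arrive later in the series):

```python
# The untold stories are the imperatives that no show's takeaways cover.
# Both sets here are hypothetical placeholders, not the survey's data.
takeaways_told = {"AI will be evil", "AI will be useful servants"}
imperatives = {"AI will be useful servants", "Fund AI safety research"}

untold = imperatives - takeaways_told
print(untold)  # {'Fund AI safety research'}
```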

Along the way, we’ll get some fun side-analyses, like:

  • What categories of AI appear in screen sci-fi?
  • Do more robots or software AI appear?
  • Are our stories about AI more positive or negative, and how has that changed over time?
  • What takeaways tend to correlate with other takeaways?
  • What takeaways appear in mostly well-rated movies (and poorly-rated movies)?
  • Which movies are most aligned with computer science’s concerns? Which are least?
These will come up in the analysis when they make sense.

Longtime readers of this blog may sense something familiar in this approach, and that’s because I am basing the methodology partly on the thinking I did last year for working through the Fermi Paradox and Sci-Fi question. Also, I should note that, like the Fermi analysis, this isn’t about the interfaces for AI, so it’s technically a little off-topic for the blog. Return later if you’re uninterested in this bit.

Zorg fires the ZF-1

Since AI is a big conceptual space, let me establish some terms of art to frame the discussion.

  1. Narrow AI is the AI of today, in which algorithms enact decisions and learn in narrow domains. They are unable to generalize knowledge and adapt to new domains. The Roomba, the Nest Thermostat, and self-driving cars are real-world examples of this kind of AI. Karen from Spider-Man: Homecoming, S.H.I.E.L.D.’s car AIs (also from the MCU), and even the ZF-1 weapon in The Fifth Element are sci-fi examples.
  2. General AI is the as-yet speculative AI that thinks kind of like a human thinks, able to generalize knowledge and adapt readily to new domains. HAL from 2001: A Space Odyssey, the Replicants in Blade Runner, and the robots in Star Wars like C3PO and BB-8 are examples of this kind of AI.
  3. Super AI is the speculative AI that is orders of magnitude smarter than general AI, and thereby orders of magnitude smarter than us. It’s arguable whether we’ve ever seen a proper Super AI in screen sci-fi (because characters keep outthinking it, and wut?), but Deep Thought from The Hitchhiker’s Guide to the Galaxy, the big AI in The Matrix diegesis, and the titular AI from Colossus: The Forbin Project come close.

There are fine arguments to be made that these categories are insufficient for the likely breadth of AI we’re going to be facing, but for now, let’s accept them as working categories, because the strategies (and thereby the stories we should be telling ourselves) for each are different.

  • Narrow AI is the AI of now. It’s in the world. (As long as it’s not autonomous weapons…) It gets safer as it gets more intelligent. It will enable never-before-seen efficiencies in some domains. It will disrupt our businesses and our civics. It, like any technology, can be misused, but the AI won’t have any ulterior motives of its own.
  • General AI is what lots of big players are gunning for. It doesn’t exist yet. It gets more dangerous as it gets smarter, largely because it will begin to approach a semblance of sentience and the evolutionary threshold to superintelligence. We will restructure society to accommodate it, and it will restructure society. It could come to pass in a number of ways: a willing worker class, a revolt, a new world citizenry. It/they will have a convincing consciousness, by definition, so their motives and actions become a factor.
  • Super AI is the most risky scenario. If we have seeded it poorly, it presents the existential risk that big names like Gates and Musk are worried about: it could wipe us out as a side effect of pursuing its goals. If seeded well, it might help us solve some of the vexing problems plaguing humanity. (cf. climate change, inequality, war, disease, overpopulation, maybe even senescence and death.) It’s very hard to really imagine what life will be like in a world with something approaching godlike intelligence. It could conceivably restructure the planet, the solar system, and us to accomplish whatever its goals are.

Since these things are related but categorically so different, we should take care to speak about them differently when talking about our media strategy toward them.

Also, I should clarify that I included AI embodied in a mobile form, like C-3PO or Cylons, and I call them robots in the analysis when it’s pertinent. Non-embodied AI is just called AI, or unembodied.

Those terms established, let me also talk a bit about the foundational work done with a smart group of thinkers at Juvet.

At Juvet

Juvet was an amazing experience generally (we saw the effing northern lights, y’all), and if you’re interested, there was a group write-up afterwards, called the Juvet Agenda. Check that out.

Northern lights

My workshop for “AI Narratives” attracted 8 participants. Shout-outs to them follow. Many are doing great work in other domains, so look them up sometime.

Juvet attendees

To pursue an answer, this team first wrote up every example of an AI in screen-based sci-fi that we could think of on red Post-It Notes. (A few of us referenced some online sources so it wasn’t just from memory.) Next we clustered those thematically. This was the bulk of the work done there.

I also took time to try to simultaneously put together, on yellow Post-It Notes, a set of Dire Warnings from the AI community, and I even started to use Blake Snyder’s Save the Cat! story frameworks to categorize the examples, but we ran out of time before we could pursue any of this. It’s just as well. I realized later the Save the Cat! framework was not useful to this analysis.

Save the Cat

Still, a lot of what came out there is baked into the following posts, so let this serve as a general shout-out and thanks to those awesome participants. Can’t wait to meet you at the next one.

But when I got home and began thinking of posting this to scifiinterfaces, I wanted to make sure I was including everything I could. So, I sought out some other sources to check the list against.  

What AI Stories Are We Telling in Sci-Fi?

This sounds simple, but it’s not. What counts as AI in sci-fi movies and TV shows? Do robots? Do automatons? What about magic that acts like technology? What about superhero movies that are on the “edge” of sci-fi? Spy shows? Are we sticking to narrow AI, general AI, or super AI, or all of the above? At Juvet and since, I’ve eschewed trying to work out a formal definition, and instead go with loose, English-language definitions, something like the ones I shared above. We’re looking at the big picture. Because of this, splitting hairs over the details won’t serve us.

How did you come up with the survey of AI shows?

So, I wound up taking the shows identified at Juvet and then adding in shows from this list on Wikipedia, plus a few stragglers tagged with AI as a keyword on IMDb. That process resulted in the following list.

2001: A Space Odyssey
A.I. Artificial Intelligence
Agents of S.H.I.E.L.D.
Alien
Alien: Covenant
Aliens
Alphaville
Automata
Avengers: Age of Ultron
Barbarella
Battlestar Galactica (1978)
Battlestar Galactica (2004)
Bicentennial Man
Big Hero 6
Black Mirror “Be Right Back”
Black Mirror “Black Museum”
Black Mirror “Hang the DJ”
Black Mirror “Hated in the Nation”
Black Mirror “Metalhead”
Black Mirror “San Junipero”
Black Mirror “USS Callister”
Black Mirror “White Christmas”
Blade Runner
Blade Runner 2049
Buck Rogers in the 25th Century
Buffy the Vampire Slayer “Intervention”
Chappie
Colossus: The Forbin Project
D.A.R.Y.L.
Dark Star
The Day the Earth Stood Still
The Day the Earth Stood Still (2008 film)
Demon Seed
Der Herr der Welt (i.e. Master of the World)
Doctor Who
Eagle Eye
Electric Dreams
Elysium
Enthiran
Ex Machina
Ghost in the Shell
Ghost in the Shell (2017 film)
Her
Hide and Seek
The Hitchhiker’s Guide to the Galaxy
I, Robot
Infinity Chamber
Interstellar
The Invisible Boy
The Iron Giant
Iron Man
Iron Man 3
Knight Rider
Logan’s Run
Max Steel
Metropolis
Mighty Morphin Power Rangers: The Movie
The Machine
The Matrix
The Matrix Reloaded
The Matrix Revolutions
Moon
Morgan
Pacific Rim
Passengers (2016 film)
Person of Interest
Philip K. Dick’s Electric Dreams (Series) “Autofac”
Power Rangers
Prometheus
Psycho-Pass: The Movie
Ra.One
Real Steel
Resident Evil
Resident Evil: Extinction
Resident Evil: Retribution
Resident Evil: The Final Chapter
Rick & Morty “The Ricks Must Be Crazy”
RoboCop
RoboCop (2014 film)
RoboCop 2
RoboCop 3
Robot & Frank
Rogue One: A Star Wars Story
S1M0NE
Short Circuit
Short Circuit 2
Spider-Man: Homecoming
Star Trek: First Contact
Star Trek Generations
Star Trek: The Motion Picture
Star Trek: The Next Generation
Star Wars
Star Wars: Episode I – The Phantom Menace
Star Wars: Episode II – Attack of the Clones
Star Wars: Episode III – Revenge of the Sith
Star Wars: The Force Awakens
Stealth
Superman III
The Terminator
Terminator 2: Judgment Day
Terminator 3: Rise of the Machines
Terminator Genisys, aka Terminator 5
Terminator Salvation
Tomorrowland
Total Recall
Transcendence
Transformers
Transformers: Age of Extinction
Transformers: Dark of the Moon
Transformers: Revenge of the Fallen
Transformers: The Last Knight
Tron
Tron: Legacy
Uncanny
WALL•E
WarGames
Westworld (1973 film)
Westworld (TV series)
X-Men: Days of Future Past

Now sci-fi is vast, and more is being created all the time. Even accounting for the subset that has been committed to television and movie screens, it’s unlikely that this list contains every possible example. If you want to suggest more, feel free to add them in the comments. I am especially interested in examples that would suggest a tweak to the strategic conclusions at the end of this series of posts.

Did anything not make the cut?

A “greedy” definition of narrow AI would include some fairly mundane automatic technologies. The doors found in the Star Trek diegesis, for example, detect many forms of life (including synthetic) and even gauge the intentions of their users to determine whether or not they should activate. That’s more sophisticated than it first seems. (There was a chapter all about sci-fi doors that wound up on the cutting room floor of the book. Maybe I’ll pick that up and post it someday.) But when you think about this example in terms of cultural imperatives, the benefits of the door are so mundane, and the risks so near nil (in the Star Trek universe they work perfectly, even if on set they didn’t), that it doesn’t really help us answer the ultimate question driving these posts. Let’s call those smart, utilitarian, low-risk technologies mundane, and exclude them.

TOS door blooper

That’s not to say workaday, real-world narrow AI is out. IBM’s Watson for Oncology (full disclosure: I’ve worked there the past year and a half) reads X-rays to help identify tumors faster and more accurately than human doctors can. (Fuller disclosure: It is not without its criticisms.)…(Fullest disclosure: I do not speak on behalf of IBM anywhere on this blog.)

Watson for Oncology winds up being workaday, but still really valuable. It would be great to see such benefits to humanity writ in sci-fi. It would remind us of why we might pursue AI even though it presents risk. On the flip side, mundane examples can have pernicious, hard-to-see consequences when implemented at a social scale, and if a sci-fi narrow AI clearly illustrates those kinds of risks, it would be very valuable to include.

Comedy may also have AI examples, but they are difficult to include in this analysis for the same reason they are difficult to review: what belongs to the joke, and what should be considered actually part of the diegesis? So, say, the Fembots from Austin Powers aren’t included.

No Austin Powers

Why not rate individual AIs?

You’ll note that I put The Avengers: Age of Ultron on one line, rather than listing Ultron, JARVIS, Friday, and Vision as separate things to consider. I did this because the takeaways (detailed in the next post) are tied to the whole story, not just the AI. If a story only has evil AIs, the implied imperative is to steer clear of AI. If a story only has good AIs, it implies we should step on the gas. But when a story has both, the takeaway is more complicated. Maybe it is that we should avoid the thing that made the evil AI evil, or that we should ensure AI has human welfare baked into its goals, with easy ways to unplug it if it becomes clear that it doesn’t. These examples show that the story is the profitable chunk to examine.

Ultrons

TV shows are more complicated than movies because long-running ones, like Doctor Who or Star Trek, have lots of stories, and the strategic takeaways may have changed over episodes, much less decades. For these shows, I’ve had to cheat a little and talk just about the Daleks, say, or Data. My one-line coverage does them a bit of a disservice. But to keep this on track and not let it become a months-long analysis, I’ve gone with the very high-level summary.

Similarly, franchises (like the overweighted Terminator series) can get more weight because there are many movies. But without dipping down into counting the actual minutes of screen time for each show, and somehow noting which of those minutes are dedicated, conceptually, to AI, it’s more practical to simply note the bias of the selected research strategy and move on.

OMFG you forgot [insert show here]!

If you want to suggest additions, awesome. Look at the Google Sheet (link below), specifically the page named “properties,” and comment on this post with all the information that would be necessary to fill in a new row for the show. Please also be aware that a refresh of the subsequent analysis will happen only after some time has passed and/or it becomes apparent that the conclusions would be significantly affected by new examples. Remember that since we’re looking for effects at a social level, the blockbusters and popular shows have more weight than obscure ones. More people see them. And I think the blockbusters and popular shows are all there.

So, that’s the survey from which the rest of this was built.

A first, tiny analysis

Once I had the list, I started working with the shows in the survey. Much of the process was managed in a Google Sheets spreadsheet, which you can see at the link below.

Not wanting to publish such a major post without at least some analysis, I did a quick breakdown of the data: how many shows involving AI appear in the survey each year. As you might guess, that number has been increasing a little over time, but it spiked significantly after 2010.

showsperyear
Click for a full-size image
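
The breakdown itself is a simple tally of each show’s debut year. A minimal sketch, with a few hypothetical rows standing in for the survey:

```python
from collections import Counter

# Count how many surveyed shows debuted in each year.
survey = [
    ("Metropolis", 1927),
    ("Blade Runner", 1982),
    ("The Terminator", 1984),
    ("Her", 2013),
]

shows_per_year = Counter(year for _, year in survey)
for year in sorted(shows_per_year):
    print(year, shows_per_year[year])
```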

Looking at the data, there aren’t many surprises. We see one or two shows at the beginning of the prior century. Things picked up alongside real-world AI hype between 1970 and 1990. There was a tiny lull before AI became a mainstay in 1999 and ramped up as of 2011.

There’s a bit of statistical weirdness that the years ending in 0 tend not to have shows, but I think that’s just noise.

What isn’t apparent in the chart itself is that cinematic interest in AI did not map tightly to the real-world “AI Winters” (periods of hype-exhaustion that sharply reduced funding and publishing) that computer science suffered in 1974–80 and again in 1987–93. It seems that, as audiences, we’re still interested in the narrative issues even when the actual computer science has quieted down.

It’s no surprise that we’ve been telling ourselves more stories about AI over time. But things get more interesting when we look at the tone of those shows, as discussed in the next post.