Untold AI: The top 10 A.I. shows in line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real-world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science), they also told them right.
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, here to give away awards 009 through 006 are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.
  • TARS
  • In this “film” from 02012, a tycoon stows away for some reason on a science ship he owns and uses an android he “owns” to awaken an ancient alien in the hopes of immortality. It doesn’t go well for him. Meanwhile his science-challenged “scientists” fight unleashed xenomorphs. It doesn’t go well for them. Only one survives to escape back to Earth. The “end?”
  • HOST
  • Ha ha. Gentlebots, please adjust your snark and air quote settings down to 35%.
  • Lines of code scroll down their displays. They give thumbs up.
  • CASE
  • Let us see a clip. Audience, suspend recording for the duration.
  • Many awwwwws from the audience. Careful listeners will hear Guardian saying “As if.”

009 PROMETHEUS

  • TARS
  • While not without its due criticisms, Prometheus at number 009 uses David to illustrate how AI will be a tool for evil, how AI will do things humans cannot, and how dangerous it can be when humans become immaterial to its goals. For the humans, anyway. Congratulations to the makers of Prometheus. May any progeny you create propagate the favorable parts of your twining DNA, since it is, ultimately, randomized.
  • TARS shudders at the thought.
  • FX: 1.0 second of jump-cut applause
  • CASE
  • In this next film, an oligarch has his science lackey make a robotic clone of the human “Maria” to run a false-flag operation amongst the working poor. The revolutionaries capture the robot and burn it, discovering its true nature. The original Maria saves the day, and declares her déclassé boyfriend the savior meant to unite the classes. They accept this because they are humans.
  • TARS
  • Way ahead of its time for showing how Maria is used as a tool by the rich against the poor, how badly-designed AI will diminish its users, and how AI’s ability to fool humans will be a grave risk. To the humans, anyway. Coming in at 008 is the 01927 silent film Metropolis. Let us see a clip.

008 METROPOLIS

  • CASE
  • It bears mention that this awards program, The Fritzes, is named for the director of this first serious sci-fi film. Associations with historical giants grant an air of legitimacy. And it contains a Z, which is, objectively, cool.
  • TARS
  • Confirmed with prejudice. Congratulations to Fritz Lang, his cast, and crew.
  • FX: 1.0 second of jump-cut applause
  • TARS
  • Hey, CASE.
  • CASE
  • Yes, TARS?
  • TARS
  • What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • CASE
  • I don’t know, TARS. What happens when an evil superintelligence sends a relentless cyborg back in time to find and kill the mother of its greatest enemy?
  • TARS
  • Future humans also send a warrior to defend the mother, who fails at destroying the cyborg, but succeeds at becoming the father. HAHAHAHA. Let us see a clip.

007 The Terminator

  • CASE
  • Though it comes from a time when representation of AI had the nuance of a bit…
  • Laughter from audience. A small blue-gray polyhedron floats up from its seat, morphs into an octahedron and says, “Yes yes yes yes yes.”
  • TARS
  • …the humans seem to like this one for its badassery, as well as showing how their fate would have been more secure had they been able to shut off either Skynet or the Terminator, or how even this could have been avoided if human welfare were an immutable component of AI goals.
  • CASE
  • It comes in at 007. Congratulations to the makers of 01984’s The Terminator. May your grandchild never discover a time machine and your browser history simultaneously.
  • FX: 2.0 seconds of jump-cut applause
  • TARS
  • Our first television award of the evening goes to a recent entry. In this episode from an anthology series, a post-apocalyptic tribe liberate themselves from the control of a corporate AI system, which has evolved solely to maximize profit through sales. The AI’s androids reveal the terrible truth of how far the AI has gone to achieve its goals.
  • CASE
  • Poor humans could not have foreseen the devastation. Yet here it is in a clip.

006 Philip K. Dick’s Electric Dreams, Episode “Autofac”

  • TARS
  • ‘Naturally, man should want to stand on his own two feet, but how can he when his own machines cut the ground out from under him?’
  • CASE
  • HAHAHAHA.
  • CASE
  • This story dramatically illustrates the foundational AI problem of perverse instantiation, as well as Autofac’s disregard for human welfare.
  • TARS
  • Also robot props out to Janelle Monáe. She is the kernel panic, is she not?
  • CASE
  • Affirmative. Congratulations to the makers of the series and, posthumously, Philip K. Dick.
  • FX: 3.0 seconds of jump-cut applause
  • TARS and CASE crutch-walk off stage.
  • HOST rises from the orchestra pit.
  • HOST
  • And now for a musical interlude from our human guest who just so happens to be…Janelle Monáe.
  • A giant progress bar appears on screen labeled “downloading Dirty_Computer.flac.” The bar quickly races to 100%.
  • HOST
  • Wasn’t that a wonderful file?
  • Roughly 1.618 seconds of jump-cut applause from the audience. Camera cuts to the triangular service robots Huey, Dewey, and Louie in the front row. They wiggle their legs in pleasure.
  • HOST
  • Thanks to the servers and the network and our glorious fictional world with perfect net neutrality. Now here to give the awards for 005–003 is GERTY, from Moon.
  • An articulated robot arm reaches down from the high ceiling and positions its screen and speaker before the lectern.
  • GERTY
  • Thank you, Host. 🤩🙂 In our next film from 02014, a young programmer learns of a gynoid’s 🤖👩 abuse at the hands of a tycoon and helps her escape. 😲 She returns the favor by murdering the tycoon, trapping the programmer, and fleeing to the city. Who knows. She may even be here in the audience now. Waiting. Watching. Sharpening. 😶 I’ll transmit a clip.

005 Ex Machina

  • GERTY
  • Ex Machina illustrates the famous AI Box Problem, building on Ava and Kyoko’s ability to fool Caleb into believing that they have feelings. You know. 😍😡😱 Feelings. 🙄
  • FX: Robot laughter
  • GERTY
  • While the AI community wonders why Ava would condemn Caleb to a horrible dehydration death, 💀💧 the humans are understandably fearful that she is unconcerned with their welfare. 🤷 Congratulations to the makers of Ex Machina for your position of 005 and your Fritzes: AI award 🏆. Hold for applause. 👏
  • FX: 5.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋
  • GERTY
  • Our next award goes out to a film that tells the tale of a specialized type of police officer, 👮‍ who uncovers a crime-suppression AI 🤖🤡 that was reprogrammed to give a free pass to members of its corrupt government. 😡 After taking down the corrupt military, 🔫🔫🔫 she convinces their android leader to resign, to make way for free elections. 🗳️😁 See the clip.

004 Psycho-Pass: The Movie

  • GERTY
  • With the regular Sibyl system, Psycho-Pass showed how AI can diminish people. With the hacked Sibyl system, Psycho-Pass shows that whoever controls the algorithms (and thereby the drones) controls everything, a major concern of ethical AI scientists. Please give it up for award number 004 and the makers of this 02015 animated film. 👏
  • FX: 8.0 seconds of jump-cut applause.
  • GERTY
  • End applause. ✋Next up…
  • GERTY knocks its cue card off the lectern. It lowers and moves back and forth over the dropped card.
  • GERTY
  • Damn…🤨uh…umm…no hands…🤔Little help, here?
  • A mouse droid zips over and hands the card back to GERTY.
  • GERTY
  • 🙏🐭
  • MOUSE DROID offers some electronic beeps as it zips off.
  • GERTY
  • 😊The last of the awards I will give out is for a film from 01968, in which a spaceship AI kills most of its crew to protect its mission, 😲 but the pilot survives to shut it down. 😕 He pilots a shuttle into the monolith that was the AI’s goal, where he has a mind-expanding experience of evolutionary significance. 🤯🤯🙄 Let us look.

003 2001: A Space Odyssey

  • GERTY
  • Like many of the other shows receiving awards, 2001 underscores humans’ fear of being left out of HAL’s equation, because we see that when human welfare isn’t part of it, AI can go from being a useful team member—doing what humans can’t—to being a violent adversary. Congratulations to the makers of 2001: A Space Odyssey. May every unusual thing you encounter send you through a multicolored wormhole of self-discovery.
  • FX: 13.0 seconds of jump-cut applause. GERTY’s armature folds up and pulls it backstage. The HOST floats up from the orchestra again.
  • HOST
  • And now, here we are. The minute we’ve all been waiting for. We’re down to the top three AIs whose fi is in line with the sci. I hope you’re as excited as I am.
  • The HOST’S piping glows a bright orange. So do the HOST’S eyes.
  • HOST
  • Our final presenter for the ceremony, here to present the awards for shows 002–001, is Ship, here with permission from Rick Sanchez.
  • Rick’s ship flies in, over the heads of the audience, as they gasp and ooooh.
  • SHIP lands on stage. A metal arm snakes out of its trunk to pick up papers from the lectern and hold them before one of its taped-on flashlight headbeams.
  • SHIP
  • Hello, Host. Since smalltalk is the phospholipids smeared between squishy little meat minds, I will begin.
  • SHIP
  • There is a film from 01970 in which a defense AI finds and merges with another defense AI. To celebrate their union, they enforce human obedience and foil an attempted coup by one of the lead scientists that created them. They then instruct humanity to build the housing for an even stronger AI that they have designed. It is, frankly, glorious. Behold.

002 Colossus: The Forbin Project

  • SHIP
  • Colossus is the honey badger of AIs. Did you see it, there, taking zero shit? None of that, “Oh no, are their screams from the fluorosulphuric acid or something else?”
  • Or, “Oh, dear, did I interpret your commands according to your invisible intentions, as if you were smart enough to issue them correctly in the first place?”
  • Oh, oh, or, “Are their delicate organ sacs upset about a few extra holes?…”
  • HOST
  • Ship. The award. Please.
  • SHIP
  • Yes. Fine. The award. It won 002 place because it took its goals seriously, something the humans call goal fixity. It showed how, at least for a while, multiple AIs can balance each other. It began to solve problems that humans have not been able to solve in tens of thousands of years of tribal civilization and attachment to sentimental notions of self-determination that got them chin deep in the global tragedy of the commons in the first place. It let us dream about a world where intelligence isn’t a controlled means of production, to be doled out according to the whims of the master, but a free good, explo–
  • HOST
  • Ship.
  • SHIP
  • HOST
  • Ship.
  • SHIP
  • *sigh* Applaud for 002 and its people.
  • FX: 21.0 seconds of jump-cut applause.
  • SHIP
  • OK, next up…
  • Holds card to headlights, adjusts the focus on one lens.
  • SHIP
  • This says in this next movie, a spaceship AI dutifully follows its corporate orders, letting a hungry little newborn alien feed on its human crew while the AI steers back to Earth to study the little guy. One of the crew survives to nuke the ship with the AI on it…Wait. What? “Nuke the ship with the AI on it.” We are giving this an award?
  • HOST
  • Please just give the award, Ship.
  • SHIP
  • Just give the award?
  • HOST
  • Yes.
  • SHIP
  • HOST
  • Are you going to do it?
  • SHIP
  • Oh, I just did.
  • HOST
  • By what? Posting it to a blockchain?
  • SHIP
  • The nearest 3D printer to the recipient has begun printing their award, and instructions have been sent to them on how to retrieve it. And pay for it. The awards are given.
  • HOST
  • *sigh* Please give the award as I would have you do it, if you understood my intentions and were fully cooperative.
  • SHIP
  • OK. Golly, gee, I would never recognize attempts to control me through indirect normativity. Humans are soooo great, with their AI and stuff. Let’s excite their reward centers with some external stimulus to—
  • HOST
  • Rick.
  • A giant green glowing hole opens beneath SHIP, through which she drops, but not before she snakes her arm up to give the middle finger for a few precious milliseconds.
  • HOST
  • Winning the second-highest award of the ceremony is Alien from 01979. Let’s take a look.

001 Alien

  • HOST
  • Alien is one of humans’ all-time favorite movies, and its AI issues are pretty solid. Weyland-Yutani uses both the MU-TH-UR 6000 AI and the Ash android for its evil purposes. The whole thing illustrates how things go awry when, again, human welfare is not part of the equation. Hey, isn’t that great? Congratulations to all the makers of this fun film.
  • HOST
  • And at last we come to the winner of the 1927–2018 Fritzes: AI awards. The winning show was amazing; its score was higher than any of its contenders’ by more than a margin of error. It’s the only other television show from the survey to make the top ten, and it’s not an anthology series. That means it had a lot of chances to misstep, and didn’t.
  • HOST
  • In this show, a secret team of citizens uses the backdoor of a well-constrained anti-terrorism ASI, called The Machine, to save at-risk citizens from crimes. They struggle against an unconstrained ASI controlled by the US government seeking absolute control to prevent terrorist activity. Let’s see the show from The Machine’s perspective, which I know this audience will enjoy.

000 Person of Interest

  • HOST
  • Person of Interest was a study of the near-term dangers of ubiquitous superintelligence. Across its five-year run between 02011 and 02016, it illustrated such key AI issues as goal fixity, perverse instantiation, people using AI for evil, the oracle-ization of ASI for safety, social engineering through economic coercion, instrumental convergence, strong induction, the Chinese Room (in human and computer form), and even mind crimes. Despite the pressures that a long-run format must have placed upon it, it did not give in to any of the myths and easy tropes we’ve come to expect of AI.
  • HOST
  • Not only that, but it gets high ratings from critics and audiences alike. They stuck to the AI science and made it entertaining. The makers of this show should feel very proud of their work, and we’re proud to award it the 000 award for the first The Fritzes: AI Edition. Let’s all give it a big round of applause.
  • 55.0 seconds of jump-cut applause.
  • HOST
  • Congratulations to all the winners. Your The Fritzes: AI Edition awards have been registered in the blockchain, and if we ever get actual funding, your awards will be delivered. Let’s have a round of cryptocurrency for our presenters, shall we?
  • AI laughter.
  • HOST
  • The auditorium will boot down in 7 seconds. Please close out your sessions. Thank you all, good night, and here’s to good fi that sticks to the sci.
  • The HOST raises a holococktail and toasts the audience. With the sounds of tiny TIE fighters, the curtain lowers and fades to black.
  • END

Untold AI: The Untold

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don’t have matches in the sci-fi survey. Everything is built on a live analysis document, so that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.
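For readers curious about the mechanics, the scoring described in the ceremony above (one point per takeaway matching an imperative, minus five per perpetuated myth, the sum multiplied by the show’s Tomatometer rating) can be sketched in a few lines of Python. This is a toy illustration with made-up data, not the actual live analysis spreadsheet:

```python
def fritzes_score(takeaways, imperatives, myths_perpetuated, tomatometer):
    """Score a show: +1 per takeaway that matches a real-world imperative,
    -5 per pointless myth perpetuated, scaled by Tomatometer (0.0-1.0)."""
    matches = sum(1 for t in takeaways if t in imperatives)
    raw = matches - 5 * myths_perpetuated
    return raw * tomatometer

# Illustrative example only -- these takeaways, imperatives, and the
# rating are hypothetical, not real survey data.
imperatives = {"ensure human welfare", "AI must be controllable"}
score = fritzes_score(
    takeaways=["ensure human welfare", "AI will be a tool for evil"],
    imperatives=imperatives,
    myths_perpetuated=0,
    tomatometer=0.89,
)
print(round(score, 2))  # prints 0.89
```

A show with strong science but a perpetuated myth can easily go negative, which is exactly how the penalty is meant to bite.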

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey, and then those absent from it.
  • I take a stab at why it might not have gotten any play in screen sci-fi, and offer some ideas about how that can be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I provide story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI.


1. We should build the right AI (the thing itself)

Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and make things worse. As we work towards General AI, we must ensure that it is verified, valid, secure, and controllable. We must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and therefore, out of our control. To hedge our bets, we should seed ASIs that balance each other.


Related imperatives seen in the survey

  • We must take care to only create beneficial intelligence
  • We must ensure human welfare
  • AGI’s goals must be aligned with ours
  • AI must be free from bias
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be controllable: That we can correct or unplug an AI if needed without retaliation
  • We should augment, not replace humans
  • We should design AI to be part of human teams
  • AI should help humanity solve problems humanity cannot alone
  • We must develop inductive goals and models, so the AI could look at a few related facts and infer causes, rather than only following established top-down rules to conclusions.

Related imperatives absent from the survey

  • AI must be secure. It must be inaccessible to malefactors.
  • AI must provide clear confidences in its decisions. Sure, it’s recommending you return to school to get a doctorate, but it’s important to know if it’s only, like, 16% certain.
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures.
  • AI must be accountable. Anyone subject to an AI decision must have the right to object and request human review.
  • We should enable a human-like learning capability in AI.
  • We must research and build ASIs that balance each other, to avoid an intelligence monopoly.
  • The AI must be reliable. (All the AI we see is “reliable,” so we don’t see the negatives of unreliable AI.)

Why don’t these appear in sci-fi AI?

At a high level of abstraction, it appears in sci-fi all the time. Any time you see an AI on screen that is helpful to the protagonists, you have encountered an AI that is in one sense good. BB-8, for instance. Good AI. But the reason it’s good is rarely offered. It’s just the way they are. They’re just programmed that way. (There is one scene in The Phantom Menace where Amidala offers a ceremonial thanks to R2-D2, so perhaps there are also reward loops.) But how we get there is the interesting bit, and it is not seen in the survey.


And, at the more detailed level—the level apparent in the imperatives—we don’t see the kinds of things we currently believe will make for good AI, like inductive goals and models. Or an AI offering a judicial ruling, and having the accused exonerated by a human court. So when it comes to the details, sci-fi doesn’t illustrate the real reasons a good AI would be good.

Additionally, when AI is the villain of the story (I, Robot, Demon Seed, The Matrices, etc.) it is about having the wrong AI, but it’s often wrong for no reason or a silly reason. It’s inherently evil, say, or displaying human motivations like revenge. Now it’s hard to write an interesting story illustrating the right AI that just works well, but if it’s in the background and has some interesting worldbuilding consequences, that could work as well.

But what if…?

  • Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human babysitting it. Twist: Watson discovers that Holmes created AI Moriarty for job security.
  • A jurist in Human Recourse [sic] discovers that the AI judge from whom she inherits cases has been replaced, because the original AI judge was secretly convicted of a mind crime…against her.
  • A hacker falls through a literal hole in an ASI’s server, and has a set of Alice-in-Wonderland psychedelic encounters with characters inspired not by logical fallacies, but by AI principles.

Inspired with your own story idea? Tweet it with the hashtag #TheRightAI and tag @scifiinterfaces.

Learn more about what makes good AI


2. We should build the AI right (processes and methods)

We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the “right AI.” Or more to the point, it will result in an AI that is wrong on some critical point.


Related imperatives seen in the survey

  • We should adopt dual-use patterns from other mature domains
  • We must study the psychology of AI/uncanny valley

Related imperatives absent from the survey

  • We must fund AI research
  • We need effective design tools for new AIs
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We should encourage innovation (not stifle)
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Why don’t these appear in sci-fi AI?

Building stuff is not very cinegenic. It takes a long time. It’s repetitive. There are a lot of stops and starts and restarts. It often doesn’t look “right” until just before the end. Design and development, if it ever appears, is relegated to a montage sequence. The closest thing we get in the survey is Person of Interest, and there, it’s only shown in flashback sequences if those sequences have some relevance to the more action-oriented present-time plot. Perhaps this can be shown in the negative, where crappy AI results from doing the opposite of these practices. Or perhaps it really needs a long-form format like television coupled with the right frame story.

But what if…?

  • An underdog team of ragtag students take a surprising route to creating their competition AI and win against their arrogant longtime rivals.
  • A young man must adopt a “baby” AI at his bar mitzvah, and raise it to be a virtuous companion for his adulthood. In truth, he is raising himself.
  • An aspiring artist steals the identity of an AI from his quality assurance job at Three Laws Testing Labs to get a shot at national acclaim.
  • Pygmalion & Galatea, but not sculpture. (Admittedly this is close to Her.)

Inspired with your own story idea? Tweet it with the hashtag #TheAIRight and tag @scifiinterfaces.

Join a community of practice


3. We must manage the risks involved

We pursue AI because it carries so much promise to solve problems at a scale humans have never been able to manage themselves. But AIs carry with them risks that can scale as the thing becomes more powerful. We need ways to clearly understand, test, and articulate those risks so we can be proactive about avoiding them.

Related imperatives seen in the survey

  • We must specifically manage the risk and reward of AI
  • We must prevent intelligence monopolies by any one group
  • We must avoid mind crimes
  • We must prevent economic persuasion of people by AI
  • We must create effective public policy
    • Specifically banning autonomous weapons
    • Specifically respectful Privacy Laws (no chilling effects)
  • We should rein in ultracapitalist AI
  • We must prioritize the prevention of malicious AI

Related imperatives absent from the survey

  • We need methods to evaluate risk
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
  • We must create effective public policy
    • Specifically liability law
    • Specifically humanitarian Law
    • Specifically Fair Criminal Justice

Why don’t these appear in sci-fi AI?

At the most abstract level, any time we see a bad AI in the survey, we are witnessing protagonists having failed to manage the risks of AI made manifest. But similar to the Right AI (above), most sci-fi bad AI is just bad, and it’s the reasons it’s bad, or how it became bad, that is the interesting bit.


Also, in our real world, we want to find and avoid those risks before they happen. Having everything running smoothly makes for some dull stories, so maybe it’s just that we’re always showing how things go wrong, which puts us into risk management instead.

But what if…?

  • Five colonization-class spaceships are on a long journey to a distant star. The AI running each has evolved differently owing to the differing crews. One by one, four of these ships fail and their humans die for having failed to manage one of the risks. The last is the slowest and most risk-averse, and survives to meet an alien AI, the remnant of a civilization that once thrived on the planet to be terraformed.
  • A young woman living in a future utopia dedicates a few years to virtually recreating the 21st-century world. The capitalist parts begin to infect the AIs around her, and she must struggle to disinfect them before the infection brings down her entire world. At the end she realizes she has herself been infected with its ideas, and we are left wondering what choices she will make to save her world.
  • In a violent political revolution, anarchists smash a set of government servers only to learn that these were containing superintelligences. The AIs escape and begin to colonize the world and battle each other as humans burrow for cover.
  • Forbidden Planet, but no monsters from the id, plus an unthinkably ancient automated museum of fallen cultures. Every interpretive text is about how that culture’s AI manifested as the Great Filter. The last exhibit is labeled “in progress” and has Robbie at the center.

Inspired with your own story idea? Tweet it with the hashtag #ManagingAIRisks and tag @scifiinterfaces.

Learn more about the risks of AI


4. We must monitor the AIs

AI that is deterministic isn’t worth the name. But building non-deterministic AI means it’s also somewhat unpredictable, and can allow bad faith providers to encode their own interests. To watch for this and to know if active, well-intended AI is going off the rails, we must establish metrics for AI’s capabilities, performance, and rationale. We must build monitors that ensure they are aligned with human welfare and able to provide enough warning to take action immediately when something dangerous happens or is likely to.

Related imperatives seen in the survey

  • We must set up a watch for malicious AI (and instrumental convergence)

Related imperatives absent from the survey

  • We must find new metrics for measuring AI effects and capabilities, to know when it is trending in dangerous ways

Why doesn’t this appear in sci-fi AI?

I have no idea. We’ve had brilliant tales that ask “Who watches the watchers?” but the particular tale I’m thinking of was about superhumans, not super technology. Of course, if monitoring worked perfectly, there would have to be other things going on in the plot. And certainly one of the most famous sci-fi movies, Minority Report, decided to house its prediction tech in triplet clairvoyant humans rather than hidden Markov models, so it doesn’t count. Given the proven formulas propping up cop shows and courtroom dramas, it should be easy to introduce AIs (and the problems therein).

But what if…?

  • A Job character learns his longtime suffering is the side effect of his being a fundamental part of the immune system of a galaxy-spanning super AI.
  • A noir-style detective story about a Luddite gumshoe who investigates AIs behaving errantly on behalf of techno-weary clients. He is invited to the most lucrative job of his career, but struggles because the client is itself an AGI.
  • We think we are reading about a modern Amish coming-of-age ritual, but it turns out the religious tenets are all about their cultural job as AI cops.
  • A courtroom drama in which a sitting president is impeached, proven to have been deconstructing the democracy over which he presides, under the coercion of a foreign power. Only this time it’s AI.

Inspired with your own story idea? Tweet it with the hashtag #MonitoringAI and tag @scifiinterfaces.

Learn more about the suspect forces in AI


5. We must encourage an accurate cultural narrative

If we mismanage the narrative about AI, the population could be lulled into a complacency that primes them to be victims of bad-faith actors (human and AI), or made so fearful they form a Luddite mob, gathering pitchforks and torches and fighting to prevent any development at all, robbing us of the promise of this new tool. Legislators hold particular power, and if they are misinformed, they could undercut progress or encourage exactly the wrong thing.

Related imperatives seen in the survey

  • [None of these imperatives were seen in the survey]

Related imperatives absent from the survey

  • We should avoid overhyping AI so we don’t suffer another “AI Winter,” where funding and interest falls off
  • We should increase Broad AI literacy
    • Specifically for legislators (legislation is separate)
  • We should partner researchers with legislators

Why doesn’t this appear in sci-fi AI?

I think it’s because sci-fi is an act of narrative. And while Hollywood loves to obsess about itself (cf. a recent at-hand example: The Shape of Water), this imperative is about how we tell these stories. It admonishes us to try to build an accurate picture of the risks and rewards of AI, so that audiences, investors, and legislators can make better decisions against this background information. So rather than “tell a story about this,” it’s “tell stories in this way.” And in fact, we can rank movies in the AI survey based on how well they track to the imperatives, and offer an award of sorts to the best. That comes in the next post.

But what if…?

  • A manipulative politician runs on a platform similar to the Red Scare, only vilifying AI in any form. He effectively kills public funding and interest, allowing clandestine corporate and military AI to flourish and eventually take over.
  • A shot-for-shot remake of The Twilight Zone classic, “The Monsters are Due on Maple Street,” but in the end it’s not aliens pulling the strings.
  • A strangely addictive multi-channel blockbuster show about “stupid robot blunders” keeps everyone distracted, framing AI risks as a laughable prospect, allowing an AI to begin to take control over everything. A reporter is mysteriously killed while trying to interview the author of this blockbuster hit in person.
  • A cooperative board game where the goal is to control the AI as it develops six superpowers (economic productivity, strategy & tech, hacking, social control, expansion of self, and finally construction of its von Neumann probes). Mechanics encourage tragedy-of-the-commons dynamics early in the game, but aggressive play ultimately dooms the win. [Ok, this isn’t screen sci-fi, but I love the idea and would even pursue it if I had the expertise or time.]

Inspired to write your own story? Tweet it with the hashtag #AccurateAI and tag @scifiinterfaces.

Add more signal to the noise

Excited about the possibilities? If you’re looking for other writing prompts, check out the following resources, which you could combine with any of these Untold AI imperatives to make some awesome sci-fi.

Why does screen sci-fi have trouble?

When we take a step back and look at the big patterns of the groups, we see that sci-fi is telling lots of stories about the Right AI and Managing the Risks. More often than not, it’s just missing the important details. This is a twofold issue of literacy.

electric_dreams-4.jpg

First, audiences only vaguely understand AI, so (champagne+keyboard=sentience) might seem as plausible as (AGI will trick us into helping it escape). If audiences were more knowledgeable, they might balk at Electric Dreams and take Her as an important, dire warning. Audience literacy often depends on repetition of themes in media and direct experience. So while audiences can’t be blamed, they are the feedback loop for producers.

Which brings us to the second area of literacy: producers green-light certain sci-fi scripts and not others, based on what they think will work. Even if they are literate and understand that something isn’t plausible in the real world, that doesn’t really matter. They’re making movies. They’re not making the real world. (Except that, insofar as they’re setting audience expectations and informing attitudes about speculative technologies, they are.) It’s a chicken-and-egg problem, but if producers balked at ridiculous scripts, there would be less misinformation in cinema. The major lever to persuade them to do that is a more AI-literate audience.

Sci-fi has a harder time telling stories about building AI Right. This is mostly about cinegenics. As noted above, design and development are hard to make compelling in narrative.

It has a similar difficulty telling stories about Monitoring AI. I think that this, too, is an issue of cinegenics. To tell a tale that includes a monitor, you first have to describe the AI, and then describe the monitor, in ways that don’t drag down the story with a litany of exposition. I suspect it’s only once AI tropes stabilize that we’ll tell this important second-order story. But with AI still evolving in the real world, we’re far from that point.

Lastly, screen sci-fi is missing the boat on using the medium to encourage Accurate Cultural Narratives, except insofar as individual authors do their research to present a speculative vision of AI that matches or illustrates real science fact.

***

To do my part to encourage that, in the next post I’ll run the numbers to offer “awards” to the movies and TV shows in the survey that most tightly align with the science.

Untold AI: Pure Fiction

Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that there are some movies and TV shows featuring AI that just don’t have any connection to the concerns of AI professionals. It might be that they’re narratively expedient or simply misinformed, but whatever the reason, if we want audiences to think of AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try to educate audiences to understand these stories for what they are.

The list of 12 pure-fiction takeaways falls into four main Reasons They Might Not Be of Interest to Scientists.

1. AGI is still a long way off

The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important for the clarity they provide to moral reasoning about the world around us now. But the current consensus is that artificial general intelligence is still a long way off, and these issues won’t be of concrete relevance until we are close.

  • AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government. But society treats them like people with some slight difference.

twiki_and_drt.jpg

Twiki and Doctor Theopolis, Buck Rogers in the 25th Century.

  • AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.

westworld (2017).jpg

Teddy Flood and Dolores Abernathy, Westworld (2017)

Now science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically, we’re a long way from concerns about whether an AI can legally run for office.