Untold AI: The Untold and Writing Prompts

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don't have matches in the sci-fi survey. Everything is built on a live analysis document, such that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey, and then those absent from it.
  • I take a stab at why it might not have gotten any play in screen sci-fi, and offer some ideas about how that might be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I've provided story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI.

1. We should build the right AI (the thing itself)

Narrow AI must be made ethically, transparently, and equitably, or it stands to be a tool used by evil forces to take advantage of global systems and make things worse. As we work towards General AI, we must ensure that it is verified, valid, secure, and controllable. We must also be certain that its incentives are aligned with human welfare before we allow it to evolve into superintelligence and, therefore, out of our control. To hedge our bets, we should seed ASIs that balance each other.

Related imperatives seen in the survey

  • We must take care to only create beneficial intelligence
  • We must ensure human welfare
  • AGI’s goals must be aligned with ours
  • AI must be free from bias
  • AI must be verified: Make sure it does what we want it to do
  • AI must be valid: Make sure it does not do what we don’t want it to do
  • AI must be controllable: That we can correct or unplug an AI if needed without retaliation
  • We should augment, not replace humans
  • We should design AI to be part of human teams
  • AI should help humanity solve problems humanity cannot alone
  • We must develop inductive goals and models, so the AI could look at a few related facts and infer causes, rather than only following established top-down rules to conclusions.

Related imperatives absent from the survey

  • AI must be secure. It must be inaccessible to malefactors.
  • AI must provide clear confidences in its decisions. Sure, it’s recommending you return to school to get a doctorate, but it’s important to know if it’s only, like, 16% certain.
  • AI reasoning must have an explainable/understandable rationale, especially for judicial cases and system failures.
  • AI must be accountable. Anyone subject to an AI decision must have the right to object and request human review.
  • We should enable a human-like learning capability in AI.
  • We must research and build ASIs that balance each other, to avoid an intelligence monopoly.
  • AI must be reliable. (All the AI we see is “reliable,” so we don't see the negatives of unreliable AI.)

Why don’t these appear in sci-fi AI?

At a high level of abstraction, it appears in sci-fi all the time. Any time you see an AI on screen who is helpful to the protagonists, you have encountered an AI that is in one sense good. BB-8, for instance. Good AI. But the reason it's good is rarely offered. It's just the way they are. They're just programmed that way. (There is one scene in The Phantom Menace where Amidala offers a ceremonial thanks to R2-D2, so perhaps there are also reward loops.) But how we get there is the interesting bit, and it's not seen in the survey.

And, at the more detailed level (the level apparent in the imperatives), we don't see the kinds of things we currently believe will make for good AI: like inductive goals and models. Or an AI issuing a judicial ruling, and having the accused exonerated by a human court. So when it comes to the details, sci-fi doesn't illustrate the real reasons a good AI would be good.

Additionally, when AI is the villain of the story (I, Robot, Demon Seed, The Matrices, etc.), it is about having the wrong AI, but it's often wrong for no reason or a silly reason. It's inherently evil, say, or displaying human motivations like revenge. Now, it's hard to write an interesting story illustrating the right AI that just works well, but if it's in the background and has some interesting worldbuilding consequences, that could work, too.

But what if…?

  • Sherlock Holmes was an inductive AI, and Watson was the comparatively stupid human babysitting it. Twist: Watson discovers that Holmes created AI Moriarty for job security.
  • A jurist in Human Recourse [sic] discovers that the AI judge from whom she inherits cases has been replaced, because the original AI judge was secretly convicted of a mind crime…against her.
  • A hacker falls through a literal hole in an ASI’s server, and has a set of Alice-in-Wonderland psychedelic encounters with characters inspired not by logical fallacies, but by AI principles.

Inspired with your own story idea? Tweet it with the hashtag #TheRightAI and tag @scifiinterfaces.

Learn more about what makes good AI

2. We should build the AI right (processes and methods)

We must take care that we are able to go about the building of AI cooperatively, ethically, and effectively. The right people should be in the room throughout to ensure diverse perspectives and equitable results. If we use the wrong people or the wrong tools, it affects our ability to build the “right AI.” Or more to the point, it will result in an AI that is wrong on some critical point.

Related imperatives seen in the survey

  • We should adopt dual-use patterns from other mature domains
  • We must study the psychology of AI/uncanny valley

Related imperatives absent from the survey

  • We must fund AI research
  • We need effective design tools for new AIs
  • We must foster research cooperation, discussion
  • We should develop golden-mean world-model precision
  • We should encourage innovation (not stifle)
  • We must develop broad machine ethics dialogue
  • We should expand the range of stakeholders & domain experts

Why don’t these appear in sci-fi AI?

Building stuff is not very cinegenic. It takes a long time. It’s repetitive. There are a lot of stops and starts and restarts. It often doesn’t look “right” until just before the end. Design and development, if it ever appears, is relegated to a montage sequence. The closest thing we get in the survey is Person of Interest, and there, it’s only shown in flashback sequences if those sequences have some relevance to the more action-oriented present-time plot. Perhaps this can be shown in the negative, where crappy AI results from doing the opposite of these practices. Or perhaps it really needs a long-form format like television coupled with the right frame story.

But what if…?

  • An underdog team of ragtag students take a surprising route to creating their competition AI and win against their arrogant longtime rivals.
  • A young man must adopt a “baby” AI at his bar mitzvah, and raise it to be a virtuous companion for his adulthood. In truth, he is raising himself.
  • An aspiring artist steals the identity of an AI from his quality assurance job at Three Laws Testing Labs to get a shot at national acclaim.
  • Pygmalion & Galatea, but not sculpture. (Admittedly this is close to Her.)

Inspired with your own story idea? Tweet it with the hashtag #TheAIRight and tag @scifiinterfaces.

Join a community of practice

3. We must manage the risks involved

We pursue AI because it carries so much promise to solve problems at a scale humans have never been able to manage themselves. But AIs carry with them risks that can scale as the thing becomes more powerful. We need ways to clearly understand, test, and articulate those risks so we can be proactive about avoiding them.

Related imperatives seen in the survey

  • We must specifically manage the risk and reward of AI
  • We must prevent intelligence monopolies by any one group
  • We must avoid mind crimes
  • We must prevent economic persuasion of people by AI
  • We must create effective public policy
    • Specifically banning autonomous weapons
    • Specifically respectful privacy laws (no chilling effects)
  • We should rein in ultracapitalist AI
  • We must prioritize the prevention of malicious AI

Related imperatives absent from the survey

  • We need methods to evaluate risk
  • We must manage labor markets upended by AI
  • We should ensure equitable benefits for everyone
  • We must create effective public policy
    • Specifically liability law
    • Specifically humanitarian law
    • Specifically fair criminal justice

Why don’t these appear in sci-fi AI?

At the most abstract level, any time we see a bad AI in the survey, we are witnessing protagonists having failed to manage the risks of AI made manifest. But similar to the Right AI (above), most sci-fi bad AI is just bad, and it's the reason it's bad, or how it became bad, that is the interesting bit.

Also, in our real world, we want to find and avoid those risks before they happen. Having everything run smoothly makes for some dull stories, so maybe it's just that we're always showing how things go wrong, which puts us into crisis management instead of risk management.

But what if…?

  • Five colonization-class spaceships are on a long journey to a distant star. The AI running each has evolved differently owing to the differing crews. One by one, four of these ships fail and their humans die for having failed to manage one of the risks. The last is the slowest and most risk-averse, and survives to meet an alien AI, the remnant of a civilization that once thrived on the planet to be terraformed.
  • A young woman living in a future utopia dedicates a few years to virtually recreating the 21st-century world. The capitalist parts begin to infect the AIs around her, and she must struggle to disinfect them before they bring down her entire world. At the end she realizes she has herself been infected with their ideas, and we are left wondering what choices she will make to save her world.
  • In a violent political revolution, anarchists smash a set of government servers only to learn that these were containing superintelligences. The AIs escape and begin to colonize the world and battle each other as humans burrow for cover.
  • Forbidden Planet, but no monsters from the id, plus an unthinkably ancient automated museum of fallen cultures. Every interpretive text is about how that culture's AI manifested as the Great Filter. The last exhibit is labeled “in progress” and has Robby at the center.

Inspired with your own story idea? Tweet it with the hashtag #ManagingAIRisks and tag @scifiinterfaces.

Learn more about the risks of AI

4. We must monitor the AIs

AI that is deterministic isn't worth the name. But building non-deterministic AI means it's also somewhat unpredictable, and it can allow bad-faith providers to encode their own interests. To watch for this, and to know if an active, well-intended AI is going off the rails, we must establish metrics for AI's capabilities, performance, and rationale. We must build monitors that ensure AIs stay aligned with human welfare, and that provide enough warning for us to take action immediately when something dangerous happens or is likely to.

Related imperatives seen in the survey

  • We must set up a watch for malicious AI (and instrumental convergence)

Related imperatives absent from the survey

  • We must find new metrics for measuring AI effects and capabilities, to know when it is trending in dangerous ways

Why doesn’t this appear in sci-fi AI?

I have no idea. We've had brilliant tales that ask “Who watches the watchers?” but the particular tale I'm thinking of was about superhumans, not super technology. Of course, if monitoring worked perfectly, there would have to be other things going on in the plot. And certainly one of the most famous sci-fi movies, Minority Report, decided to house its prediction tech in triplet clairvoyant humans rather than hidden Markov models, so it doesn't count. Given the proven formulas propping up cop shows and courtroom dramas, it should be easy to introduce AIs (and the problems therein).

But what if…?

  • A Job character learns his longtime suffering is the side effect of his being a fundamental part of the immune system of a galaxy-spanning super AI.
  • A noir-style detective story about a Luddite gumshoe who investigates AIs behaving errantly on behalf of techno-weary clients. He is invited to the most lucrative job of his career, but struggles because the client is itself an AGI.
  • We think we are reading about a modern Amish coming-of-age ritual, but it turns out the religious tenets are all about their cultural job as AI cops.
  • A courtroom drama in which a sitting president is impeached, proven to have been deconstructing the democracy over which he presides, under the coercion of a foreign power. Only this time it’s AI.

Inspired with your own story idea? Tweet it with the hashtag #MonitoringAI and tag @scifiinterfaces.

Learn more about the suspect forces in AI

5. We must encourage an accurate cultural narrative

If we mismanage the narrative about AI, the population could either be lulled into a complacency that primes them to be victims of bad-faith actors (human and AI), or be made so fearful that they form a Luddite mob, gathering pitchforks and torches and fighting to prevent any development at all, robbing us of the promise of this new tool. Legislators hold particular power, and if they are misinformed, they could undercut progress or encourage exactly the wrong thing.

Related imperatives seen in the survey

  • [None of these imperatives were seen in the survey]

Related imperatives absent from the survey

  • We should avoid overhyping AI so we don't suffer another “AI Winter,” where funding and interest fall off
  • We should increase broad AI literacy
    • Specifically for legislators (legislation is separate)
  • We should partner researchers with legislators

Why doesn’t this appear in sci-fi AI?

I think it's because sci-fi is an act of narrative. And while Hollywood loves to obsess about itself (cf. a recent at-hand example: The Shape of Water), this imperative is about how we tell these stories. It admonishes us to try to build an accurate picture of the risks and rewards of AI, so that audiences, investors, and legislators can base better decisions on that background information. So rather than “tell a story about this,” it's “tell stories in this way.” And in fact, we can rank movies in the AI survey based on how well they track to the imperatives, and offer an award of sorts to the best. That comes in the next post.

But what if…?

  • A manipulative politician runs on a platform similar to the Red Scare, only vilifying AI in any form. He effectively kills public funding and interest, allowing clandestine corporate and military AI to flourish and eventually take over.
  • A shot-for-shot remake of The Twilight Zone classic, “The Monsters are Due on Maple Street,” but in the end it’s not aliens pulling the strings.
  • A strangely addictive multi-channel blockbuster show about “stupid robot blunders” keeps everyone distracted, framing AI risks as a laughable prospect and allowing an AI to begin to take control of everything. A reporter is mysteriously killed while trying to interview the author of this blockbuster hit in person.
  • A cooperative board game where the goal is to control the AI as it develops six superpowers (economic productivity, strategy & tech, hacking, social control, expansion of self, and finally construction of its von Neumann probes). Mechanics encourage tragedy-of-the-commons forces early in the game, but aggressive players ultimately doom the win. [OK, this isn't screen sci-fi, but I love the idea and would even pursue it if I had the expertise or time.]

Inspired with your own story idea? Tweet it with the hashtag #AccurateAI and tag @scifiinterfaces.

Add more signal to the noise

Excited about the possibilities? If you’re looking for other writing prompts, check out the following resources that you could combine with any of these Untold AI imperatives, and make some awesome sci-fi.

Why does screen sci-fi have trouble?

When we take a step back and look at the big patterns of the groups, we see that sci-fi is telling lots of stories about the Right AI and Managing the Risks. More often than not, it’s just missing the important details. This is a twofold issue of literacy.

First, audiences only vaguely understand AI, so (champagne+keyboard=sentience) might seem as plausible as (AGI will trick us into helping it escape). If audiences were more knowledgeable, they might balk at Electric Dreams and take Her as an important, dire warning. Audience literacy often depends on repetition of themes in media and direct experience. So while audiences can’t be blamed, they are the feedback loop for producers.

Which brings us to the second area of literacy: producers green-light certain sci-fi scripts and not others, based on what they think will work. Even if they are literate and understand that something isn't plausible in the real world, that doesn't really matter. They're making movies. They're not making the real world. (Except, as far as they're setting audience expectations and informing attitudes about speculative technologies, they are.) It's a chicken-and-egg problem, but if producers balked at ridiculous scripts, there would be less misinformation in cinema. The major lever to persuade them to do that is if audiences were more AI-literate.

Sci-fi has a harder time telling stories about building the AI Right. This is mostly about cinegenics. As noted above, design and development are hard to make compelling in narrative.

It has a similar difficulty in telling stories about Monitoring AI. I think that this, too, is an issue of cinegenics. To tell a tale that includes a monitor, you have to first describe the AI, and then describe the monitor in ways that don’t drag down the story with a litany of exposition. I suspect it’s only once AI stabilizes its tropes that we’ll tell this important second-order story. But with AI still evolving in the real world, we’re far from that point.

Lastly, screen sci-fi is missing the boat on using the medium to encourage Accurate Cultural Narratives, except insofar as individual authors do their research to present a speculative vision of AI that matches or illustrates real science fact.

***

To do my part to encourage that, in the next post I'll run the numbers and offer “awards” to the movies and TV shows in the survey that align most tightly with the science.
