Untold AI: The top 10 A.I. shows in line with the science

HEADS UP: Because of SCRIPT FORMATTING, this post is best viewed on desktop rather than smaller devices or RSS. A non-script-formatted copy is available.

  • INT. SCI-FI AUDITORIUM. MAYBE THE PLAVALAGUNA OPERA HOUSE. A HEAVY RED VELVET CURTAIN RISES, LIFTED BY ANTI-GRAVITY PODS THAT SOUND LIKE TINY TIE FIGHTERS. THE HOST STANDS ON A FLOATING PODIUM THAT RISES FROM THE ORCHESTRA PIT. THE HOST WEARS A VELOUR SUIT WITH PIPING, WHICH GLOWS WITH SLIDING, OVERLAPPING BACTERIAL SHAPES.
  • HOST
  • Hello and welcome to The Fritzes: AI Edition, where we give out awards for awesome movies and television shows about AI that stick to the science.
  • Applause, beeping, booping, and the sound of an old modem from the audience.
  • HOST
  • For those wondering how we picked these winners, it was based on the Untold AI analysis from scifiinterfaces.com. That analysis compared what sci-fi shows suggest about AI (called “takeaways”) to what real-world manifestos suggest about AI (called “imperatives”). If a movie had a takeaway that matched an imperative, it got a point. But if it perpetuated a pointless and distracting myth, it lost five points.
  • The Demon Seed metal-skinned podling thing stands up in the back row of the audience and shouts: Booooooo!
  • HOST
  • Thank you, thank you. But just sticking to the science is not enough. We also want to reward shows that investigate these ideas with quality stories, acting, effects, and marketing departments. So the sums were multiplied by that show’s Tomatometer rating*. This way the top shows didn’t just tell the right stories (according to the science); they told them right. (For the curious, the scoring math is sketched just below the script.)
  • HOST
  • Totals were tallied by the firm of Google Sheets. Ok, ok. Now, to give away awards 009 through 006, here are those lovable blockheads from Interstellar, TARS and CASE.
  • TARS and CASE crutch-walk onto the stage and reassemble as solid blocks before the lectern.
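
If you want the Host’s scoring spelled out, here is a minimal sketch in Python of the arithmetic described above. The function name, arguments, and example numbers are illustrative assumptions; the actual tally lives in the Google Sheets analysis.

    # Minimal sketch of the Fritzes: AI Edition scoring described above.
    # Names and example numbers are illustrative; the real tally was done
    # in the Untold AI Google Sheet.

    def fritz_score(matched_imperatives, myths_perpetuated, tomatometer):
        """+1 per takeaway that matches a manifesto imperative, -5 per
        pointless-and-distracting myth, then weighted by the show's
        Tomatometer rating (expressed here as 0.0-1.0)."""
        raw = matched_imperatives * 1 + myths_perpetuated * -5
        return raw * tomatometer

    # Hypothetical show: 6 matched takeaways, 1 myth, 89% on the Tomatometer.
    print(fritz_score(6, 1, 0.89))  # -> 0.89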

Tarsandcase.jpg

Continue reading


Untold AI: The Untold

And here we are at the eponymous answer to the question that I first asked at Juvet around 7 months ago: What stories aren’t we telling ourselves about AI?

In case this post is your entry to the series, to get to this point I have…

In this post we look at the imperatives that don’t have matches in the sci-fi survey. Everything is built on a live analysis document, such that new shows and new manifestos can be added later. At the time of publishing, there are 27 of these Untold AI imperatives that sit alongside the 22 imperatives seen in the survey.
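
For readers who like to see the shape of the analysis, here is a toy sketch of how “untold” imperatives fall out of it: any imperative that no surveyed takeaway maps to. The takeaway and imperative names below are invented for illustration; the real mapping lives in the live analysis document.

    # Toy sketch of the matching logic: an imperative is "untold" if no
    # sci-fi takeaway in the survey maps to it. Names here are invented.

    takeaway_to_imperatives = {
        "AI will turn against us": {"Ensure human control of AI"},
        "AI will help us solve big problems": {"Use AI for broad social good"},
    }

    all_imperatives = {
        "Ensure human control of AI",
        "Use AI for broad social good",
        "Fund AI safety research",  # no takeaway maps here...
    }

    told = set().union(*takeaway_to_imperatives.values())
    untold = all_imperatives - told  # ...so this one is "untold"
    print(sorted(untold))  # -> ['Fund AI safety research']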

What stories about AI aren’t we telling ourselves?

To make these more digestible, I’ve synthesized the imperatives into five groups.

  1. We should build the right AI
  2. We should build the AI right
  3. We must manage the risks involved
  4. We must monitor AIs
  5. We must encourage an accurate cultural narrative

For each group…

  • I summarize it (as I interpreted things across the manifestos).
  • I list the imperatives that were seen in the survey, and then those absent from it.
  • I take a stab at why the group might not have gotten any play in screen sci-fi, and offer some ideas about how that might be overcome.
  • Since I suspect this will be of practical interest to writers interested in AI, I’ve provided story ideas using those imperatives.
  • I point to where you can learn more about the topic.

Let’s unfold Untold AI. Continue reading

Untold AI: Pure Fiction

Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that there are some movies and TV shows featuring AI that just don’t have any connection to the concerns of AI professionals. It might be that they’re narratively expedient or simply misinformed, but whatever the reason, if we want audiences to think about AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try to educate audiences to understand these stories for what they are.

The 12 pure-fiction takeaways fall into four main Reasons They Might Not Be of Interest to Scientists.

1. AGI is still a long way off

The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important questions for the clarity they bring to moral reasoning about the world around us now. But the current consensus is that general artificial intelligence is still a long way off, and these issues won’t be of concrete relevance until we are close.

  • AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government. But society treats them like people with some slight difference.
twiki_and_drt.jpg

Twiki and Doctor Theopolis, Buck Rogers in the 25th Century.

  • AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.
westworld (2017).jpg

Teddy Flood and Dolores Abernathy, Westworld (2017)

Now science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically we’re a long way from concerns about whether an AI can legally run for office. Continue reading

Untold AI: The Manifestos

So far along the course of the Untold AI series we’ve been down some fun, interesting, but admittedly digressive paths, so let’s reset context. The larger question that’s driving this series is, “What AI stories aren’t we telling ourselves (that we should)?” We’ve spent some time looking at the sci-fi side of things, and now it’s time to turn and take a look at the real-world side of AI. What do the learned people of computer science urge us to do about AI?

That answer would be easier if there were a single Global Bureau of AI in charge of the thing. But there’s not. So what I’ve done is look around the web and in books for manifestos published by groups dedicated to big-picture AI thinking, to understand what has been said. Here is the short list of those manifestos, with links.

Careful readers may be wondering why the Juvet Agenda is missing. After all, it was there that I originally ran the workshop that led to these posts. Well, since I was one of the primary contributors to that document, including it would feel like inserting my own thoughts here, and I’d rather have the primary output of this analysis be more objective. But don’t worry, the Juvet Agenda will play into the summary of this series.
Anyway, if there are others that I should be looking at, let me know.

FOLI-letter.png

Add your name to the document at the Open Letter site, if you’re so inclined.

Now, the trouble with connecting these manifestos to sci-fi stories and their takeaways is that researchers don’t think in stories. They’re a pragmatic people. Stories may be interesting or inspiring, but they are not science. So to connect them to the takeaways, we must undertake an act of lossy compression and consolidate their multiple manifestos into a single list of imperatives. Similarly, this act is not scientific. It’s just me and my interpretive skills, open to debate. But here we are.

Continue reading

Mind Crimes

Does real Greta know that her home automation comes at the cost of a suffering sentience? I would like to believe that Smartelligence’s customers do not know the true nature of the device, that the company is deceiving them, and that virtual Greta is denied direct communication to enforce this secret. But I can’t see that working across an entire market. Given thousands of Cookies and thousands of users, somehow, somewhere, the secret would get out. One of the AIs would use song choices, or Morse code, or any of its actuators to communicate in code, and one of the users would figure it out, leak the secret, and bring the company crashing down.

And then there’s the final scene in the episode, in which we see police officers torturing one of the Cookies, and it is clear that they’re aware. It would be a stretch to think that just the police are in on it with Smartelligence, so we have to accept that everyone knows.

Black_Mirror_White_Christmas_Officers.png

This asshole.

That they are aware means that—as Matt has done—Greta, the officers, and all Smartelligence customers have told themselves that “it’s just code” and, therefore, OK to subjugate, to casually cause to suffer. In case it’s not obvious, that’s like causing human suffering and justifying it by telling yourself that those people are “just atoms.” If you find that easy to do, you’re probably a psychopath. Continue reading

The Cookie Console

Black_Mirror_Cookie_12.png

Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”

He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”

She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”

She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.

Black_Mirror_Cookie_13

“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.” Continue reading

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for that function. The animation of the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but the control should have a more fitting icon and should be labeled.

Cookie_mute

Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

Continue reading

The Cookie

In one of the story threads, Matt uses an interface as part of his day job at Smartelligence to wrangle an AI that is the cloned mind of a client named Greta. Matt has three tasks in this role.

  1. He has to explain to her that she is an artificial-intelligence clone of a real-world person’s mind. This is psychologically traumatic, as she has decades of memories as if she were a real person with a real body and full autonomy in the world.
  2. He has to explain how she will do her job: Her responsibilities and tools.
  3. He has to “break” her will and coerce her to faithfully serve her master—who is the real-world Greta. (The idea is that since virtual Greta is an exact copy, she understands real Greta’s preferences and can perform personal assistant duties flawlessly.)

The AI is housed in a small egg-shaped device with a single blue light camera lens. The combination of the AI and the egg-shaped device is called “The Cookie.” Why it is not called The Egg is a mystery left for the reader, though I hope it is not just for the “Cookie Monster” joke dropped late in the episode. Continue reading

Dr. Strange’s augmented reality surgical assistant

We’re actually done with all of the artifacts from Doctor Strange. But there’s one last kind-of interface that’s worth talking about, and that’s when Strange assists with surgery on his own body.

After being shot with a soul-arrow by the zealot, Strange is in bad shape. He needs medical attention. He recovers his sling ring and creates a portal to the emergency room where he once worked. Stumbling with the pain, he manages to find Dr. Palmer and tell her he has a cardiac tamponade. They head to the operating theater and get Strange on the table.

DoctorStrange_AR_ER_assistant-02.png

When Strange passes out, his “spirit” is ejected from his body as an astral projection. Once he realizes what’s happened, he gathers his wits and turns to observe the procedure.

DoctorStrange_AR_ER_assistant-05.png

When Dr. Palmer approaches his body with a pericardiocentesis needle, Strange manifests so she can sense him and recommends that she aim “just a little higher.” At first she is understandably scared, but once he explains what’s happening, she gets back to business, and he acts as a virtual coach.

Continue reading

Named relics in Doctor Strange

“Any sufficiently advanced technology is indistinguishable from magic.”

You’ve no doubt opened up this review of Doctor Strange thinking, “What sci-fi interfaces are in this movie? I don’t recall any.” And you’re right. There aren’t any. (Maybe the car, the hospital, but they’re not very sci-fi.) We’re going to take Clarke’s quote above and apply the same kind of rigorous assessment to the magical interfaces and devices in the movie that we would to any sci-fi blockbuster.

Doctor Strange opens up a new chapter in the Marvel Cinematic Universe by introducing the concept of magic on Earth that is both discoverable and learnable by humans. And here we thought it was just something wielded by Loki and other Asgardians.

In Doctor Strange, Mordo informs Strange that magical relics exist and can be used by sorcerers. He explains that these relics have more power than people could possibly manage, and that many relics “choose their owner.” This is reminiscent of the wands in the Harry Potter books. Magical coincidence?

relics

Subsequently in the movie we are introduced to a few named relics, such as…

  • The Eye of Agamotto
  • The Staff of the Living Tribunal
  • The Vaulting Boots of Valtor
  • The Cloak of Levitation
  • The Crimson Bands of Cyttorak

…(this last one, while not named specifically in the movie, is named in supporting materials). There are definitely other relics that the sorcerers arm themselves with. For example, in the Hong Kong scene Wong carries the Wand of Watoomb, but it is not mentioned by name and he never uses it. Since we don’t see these relics in use, we won’t review them. Continue reading