Untold AI: Pure Fiction

Now that we’ve compared sci-fi’s takeaways to compsci’s imperatives, we can see that some movies and TV shows featuring AI just don’t have any connection to the concerns of AI professionals. It might be that they’re narratively expedient or simply misinformed, but whatever the reason, if we want audiences to think about AI rationally, we should stop telling these kinds of stories. Or, at the very least, we should try to educate audiences to understand these stories for what they are.

The 12 pure fiction takeaways fall into four main Reasons They Might Not Be of Interest to Scientists.

1. AGI is still a long way off

The first two takeaways concern the legal personhood of AI. Are they people, or machines? Do we have a moral obligation to them? What status should they hold in our societies? These are good questions, somewhat entailed in the calls to develop a robust ethics around AI. They are even important questions for the clarity they bring to moral reasoning about the world around us now. But the current consensus is that artificial general intelligence is still a long way off, and these issues won’t be of concrete relevance until we are close.

  • AI will be regular citizens: In these shows, AI is largely just another character. They might be part of the crew, or elected to government. But society treats them like people, with only slight differences.

Twiki and Doctor Theopolis, Buck Rogers in the 25th Century.

  • AI will be “special” citizens: By special, I mean that they are categorically a different class of citizen, either explicitly as a servant class, legally constrained from personhood, or with artificially constrained capabilities.

Teddy Flood and Dolores Abernathy, Westworld (2017)

Now science fiction isn’t constrained to the near future, nor should it be. Sometimes its power comes from illustrating modern problems with futuristic metaphors. But pragmatically we’re a long way from concerns about whether an AI can legally run for office.

2. AI will (mostly) be what we program it to be

Many of the AGIs we see have innate goals. But AI isn’t some genie waiting in a bottle to be released. It is a thing that is programmed. What it is and how it evolves have much to do with how it is initially seeded or programmed, but a lot of sci-fi just wants AI to be things for plot reasons.
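To make that point concrete, here’s a minimal sketch in Python (the agents, actions, and scoring keys are all invented for illustration): two agents built from the exact same code behave completely differently, purely because of the objective each was seeded with. There’s no genie in there, just the goal we put in.

```python
# Hypothetical sketch: an agent's "disposition" is just its seeded objective.

def make_agent(objective):
    """Return a trivial agent that picks whichever action its objective scores highest."""
    def act(possible_actions):
        return max(possible_actions, key=objective)
    return act

# Two agents, identical except for the goal they were programmed with:
helpful = make_agent(lambda a: a["benefit_to_humans"])
paperclip_maximizer = make_agent(lambda a: a["paperclips_produced"])

actions = [
    {"name": "assist", "benefit_to_humans": 10, "paperclips_produced": 0},
    {"name": "hoard",  "benefit_to_humans": -5, "paperclips_produced": 99},
]

print(helpful(actions)["name"])              # assist
print(paperclip_maximizer(actions)["name"])  # hoard
```

The point of the toy: neither agent is innately good or evil; each just optimizes what it was given. That’s the sense in which AI will (mostly) be what we program it to be.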

  • AI is evil: Especially between 1965 and 1985, AI was a new costume for the same old bad guys, coming right out of the gate as evil as a sleeping bag full of scorpions. Fortunately, we’ve largely stopped telling this story, with the exception of the Terminator series. Now we know that if an AI is evil, there is some reason for it. Like, say, trolls. That reason will be the interesting part, and it is important to establish so we can avoid the same thing.

The Master Control Program, Tron

  • AI will spontaneously emerge sentience or emotions: This one is troubling because it’s a stupid trope (sure, spill a glass of champagne on the keyboard and C++ will begin to take an interest in your love life), and yet it is the 10th most popular takeaway in the survey. An AI will almost certainly be programmed to evolve, and might work its way to something resembling emotions (as seen in the excellent-except-for-the-ending Ex Machina), but that’s far from the goofy cause-and-effect implied in these stories.

Stealth

  • AI will want to become human: AGI might have strong reasons to pass as human (say, to deceive, or to avoid persecution by humans), or it might be programmed to understand humans as part of some indirect normativity instructions. But to actually want to become human, or to make some hybrid offspring for its own sake, doesn’t make a lot of sense. This has been used effectively as a Naive Newcomer for exploring narratively what it means to be human, but it should be understood as just that: a trope.

Aida, Agents of S.H.I.E.L.D.

  • Neuroreplication will have unintended effects: Neuroreplication, that is, building an AI by replicating an individual human mind, is one possible path to AGI. And yes, if any training set is flawed, we would have to intervene to prevent any resulting AGI from copying and amplifying those flaws (see the sketch after the image below). But neuroreplication is not where most of computer science is placing its bets on how we get to AGI.

Ash and Martha, Black Mirror “Be Right Back”
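Here’s a toy sketch of that copy-and-amplify worry (the data and learner are invented for illustration, not anyone’s actual method): a crude learner that imitates the most common answer in its training set turns a mild 55/45 skew into a total 100/0 one.

```python
# Hypothetical sketch: replicating a flawed source can amplify the flaw.
from collections import Counter

def naive_imitator(examples):
    """A crude learner: always reproduce the most common answer it was trained on."""
    winner, _count = Counter(examples).most_common(1)[0]
    return [winner] * len(examples)

source = ["fair"] * 55 + ["unfair"] * 45  # a mildly skewed "training set"
replica = naive_imitator(source)

print(Counter(source))   # Counter({'fair': 55, 'unfair': 45})
print(Counter(replica))  # Counter({'fair': 100}) -- the 55/45 flaw became 100/0
```

Real learners are far more sophisticated than this, of course, but the underlying dynamic, that a copy of a flawed source inherits and can exaggerate the flaw, is the thing we’d have to intervene on.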

  • AI will be too human: The two shows in the survey that illustrate this are comedies. Quirky personalities read as a funny mismatch with the dispassion we expect of machines. But if that turned out to be an actual problem with AGI, we would probably try to fix it, not accept it as a fait accompli. Most of computer science seems to worry about the thing being too alien to human welfare, not too similar to us.

Boiler, Talby, and Pinback, Dark Star

  • AI will learn to value life: This one is, in my humble opinion, the worst, but also the most palliative. It goes a little like this: it doesn’t matter if AI starts out evil or even neutral, because we humans are just so darned loveable that its circuits can’t help but come to love us. While we desperately want an AI’s goals to align with human goals, that alignment will have to be programmed in from the start, rather than be something the bad guy figures out by watching us.

Chappie

3. Some stories are really about us

Some of the takeaways take the future as a given, and are really about how people and human nature will respond to it.

  • AI will not be able to fool us: Stories with this takeaway say there will always be a detectable difference between (little-r) replicants and people. But think about it: with today’s technology, people are fooled all the time. The fake doesn’t even have to be that good; people just have to want to believe it, or be busy thinking about something else. And like most things digital, these capabilities are going to grow exponentially. Tomorrow’s technology promises to be indistinguishable from reality. I think this takeaway is narcissism, and it does a real disservice to the skepticism we’re going to need in the media to come.
  • Humans will willingly replicate themselves as AI: In these stories, people escape physical constraints (like senescence, disease, and death) and continue on as AI or in a virtual simulation. While there are some interesting ethics and p-zombie questions at play in these ideas, it’s not a concern for scientists.

Yorkie and Kelly Booth, Black Mirror “San Junipero”

4. Some things go without saying

  • AI will be replicable, amplifying any problems: While this is kind of true (it will be difficult to “kill” an AGI or ASI that has escaped, partly because it can create copies of itself), it is of secondary concern. If the AGI or ASI is beneficial, then replication isn’t a problem.

So those are my educated guesses as to why these sci-fi takeaways have no matches in the compsci imperatives. But we have the good fortune that the authors of the manifestos are largely still around. If you’re one of those folks, and I missed some reason or just got it wrong, please comment and let us know what the reality is.

So that’s it for the unmatched takeaways. Next up, I’ll detail the unmatched imperatives that make up the set of Untold AI.

***

5. Bonus round: The (remaining) myths of AIs (from FoLI)

Few AI think tanks come straight out and address the myths of AI, but the Future of Life Institute did. You can check them out on their website, but while we’re disabusing ourselves of some problematic notions, here are the other relevant myths that come up in sci-fi but weren’t identified in the manifestos.

  • Robots are the main concern: You’ll recall from an earlier post that about 74% of the AIs we see in sci-fi are robots. And yeah, as Boston Dynamics and Black Mirror’s “Metalhead” illustrate, robots can be damned terrifying. But the major risk is not the robot and its physical autonomy. It’s the intelligence whose goals do not align with ours. (N.b. that potential misalignment is a major concern of compsci.)
  • AI can’t control humans: This is stupid human exceptionalism. As FoLI points out, it is intelligence that enables control. Tigers are very dangerous, but it is only because we are, on the whole, smarter than they are that we are not all tiger food right now. The narrow AI we have in the world today may seem dumb, but that’s only because we’re looking backwards at history; it’s very hard to foresee, or even imagine, the exponential change that may be coming.
  • Machines can’t have goals: Even dumb machines can have goals. Your thermostat has a temperature goal. A Roomba has a coverage goal and a charge goal. Your spam filter has a goal of keeping spam out of your inbox. These aren’t as complex or mutable as human goals, but they are goals nonetheless, so AI can certainly have goals. See the sketch below.
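Since the thermostat makes the point so cleanly, here it is as a few lines of Python (the class name and thresholds are invented for illustration): the setpoint is the goal, and the machine acts to close the gap between the world and that goal.

```python
# Hypothetical sketch: even a trivially simple machine encodes a goal.

class Thermostat:
    def __init__(self, setpoint_c: float):
        self.setpoint_c = setpoint_c  # the "goal" temperature

    def decide(self, current_c: float) -> str:
        """Act to move the world toward the goal, within a small tolerance."""
        if current_c < self.setpoint_c - 0.5:
            return "heat"
        if current_c > self.setpoint_c + 0.5:
            return "cool"
        return "idle"

t = Thermostat(setpoint_c=21.0)
print(t.decide(18.0))  # heat
print(t.decide(21.2))  # idle
print(t.decide(25.0))  # cool
```

Nobody would call this intelligent, but it is unambiguously goal-directed, which is all the myth denies.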
