Evaluating strong AI interfaces in sci-fi

Regular readers have detected a pause. I introduced Colossus to review it, and then went silent. This is because I am wrestling with some foundational ideas on how to proceed. Namely, how do you evaluate the interfaces to speculative strong artificial intelligence? This, finally, is that answer. Or at least a first draft. It’s giant and feels sprawling and almost certainly wrong, but trying to get this perfect is a fool’s errand, and I need to get this out there so we can move on.

This is a draft.

I expect most readers are less interested in this kind of framework than they are in how it gets applied to their favorite sci-fi AIs. If you’re mostly here for the fiction, skip this one. It’s long.


Oh, hey. Thanks for reading on. Quick initialism glossary:

  • AI: Artificial intelligence
  • ANI: Artificial narrow intelligence (“narrow AI”)
  • AGI: Artificial general intelligence (“general AI”)
  • ASI: Artificial superintelligence (“super AI”)

I’ll try to use the longer form of these terms at the beginning of a section to aid comprehension.

What’s strong AI and why just strong AI?

The first division of AI is that between “weak” and “strong” AI. Weak is more properly described as narrow, but regardless of what we call it, it’s the AI of now. That is, software that is beyond the capabilities of humans in some ways, but cannot think like a human, or generalize its learnings to new domains. I don’t think we need to establish a framework for this kind of AI, for two reasons.

First, since narrow AI is in the real world, we already have the tools available to evaluate these kinds of AI should we need them. I divide AI into three types: Automatic, Assistant, and Agentive.

  • Automatic AI does its thing behind the scenes, and interactions with humans are the exception case. As such, this is largely an engineering concern.
  • For assistant AI, which helps a user perform a task, existing usability methods can be applied. (Though, as legacy methods, they are begging to be updated, and I’m working on that.)
  • For agentive AI, which performs a task on behalf of its user, I dedicated Chapter 10 of Designing Agentive Technology to a first take on evaluating agents.

So, given these, there’s little need to posit new thinking for ANI. (Noting that some of our questions for general AI can be readily applied to ANI, like the bits about conversational usability.)

Second, ANI represents a small fraction of what’s in the survey. Or to be more precise, ANI is a small fraction of what is essential to the plots of what’s in the survey. Said another way, general AI (AGI) is the most narratively “consequential.” Belaboring an analytical framework for ANI would not have much payoff.

What makes a good strong AI in sci-fi?

Strong AI can be further subdivided into general AI and super AI. General AI is like human intelligence, able to generalize from one domain to new ones. Think of it like computer versions of people. C3PO is general AI. Super AI is orders of magnitude more capable than humans in intelligence tasks, and thereby out of our control. Unity from Colossus: The Forbin Project is a super AI.

Lots of people smarter than me have talked about the risks and strategies to get to a positive AGI/ASI. The discussions involve (and not lightly) the deep core of philosophy, the edges of our moral circles, issues of government and self-determination, conception of truly alien sentience, colonialism, egocentrism, ecology, the Hubble volume, human bias, human cognition, language, and speculations about systems which, by definition, have vastly greater intelligence than us, the ones doing the speculation. It is the most non-trivial of non-trivial problems I can think of.

That said, I think I’ve come to four broad questions we can ask to evaluate a speculative strong AI thoroughly.

  1. Is it believable?
  2. Is it safe?
  3. Is it beneficial?
  4. Is it usable?

In other words, if it’s believable, safe, beneficial, and usable, then we can say it’s a good sci-fi AI. And, if we rank AI on these axes separately, we can begin to have a grade that helps us sort the ones that should be models—or at least bear consideration—from the silly stuff. Kind of like I do for shows, generally, on the rest of the site.

We could ask these questions as-is, informally, and get to some useful answers for an analysis. And most of the time, this is probably the right thing to do. But sci-fi loves to find and really dig into the exception cases that challenge simple analysis, so let’s take these analytical questions one or two levels deeper.

Setting your expectations, much of this will be a set of questions and considerations to guide the examination of a sci-fi AI rather than a generative formula for producing good AI.


Is it believable?

Most of the discussions of strong AI on the web are in the context of the real world. So we first have to note that, in sci-fi, an additional first pass is one of believability: Could this strong AI exist and behave in the way it is depicted in the show? If not, it may not bear further examination. Ra One is a movie with a very silly evil “AI” in it that does not bear much serious examination as a model for real-world design.

The Logan’s Run Übercomputer: Not believable.

For believability, we look at things like internal consistency, match to the real world, and implied causality within the story. In Logan’s Run, for instance, the Übercomputer hears something it doesn’t expect, and as a result, explodes and causes an entire underground city to collapse. Not exactly believable. Stupid, even.

One caveat: Sci-fi is built around some novum, some new thing that the rest of the story hangs on. And computer scientists in the real world aren’t certain how we’ll get to general AI, so it’s a lot to expect that writers are going to figure it out and then hide a blueprint in a script. So let’s admit that the creation of AI often has to get a pass. (Which is not to say this is good, see the Untold AI series for how that entails its own risks.)

Believability is an extradiegetic judgment—one we as an audience make about the show, and that characters in the show could not make. The three remaining questions are diegetic, meaning ones that characters in the story could assess and provide clues about: Is it safe? Is it beneficial? Is it usable?

Is it safe?

Neither its benefits nor its usability matters if a strong AI is not safe. Sometimes, this is obvious. Wall·E is safe. The Terminator is not. But how a thing is or is not safe requires closer examination. Answering this won’t always need a full-fledged framework, but I think we can get a long way by looking at its goals and understanding what it can and can’t do in pursuit of those goals.

  • What are its goals?
  • What can it do?
  • What can’t it do?
  • Is it controllable?
https://www.youtube.com/watch?v=J91ti_MpdHA

What are its goals?

AGI will be more powerful than humans in some way, and that advantage is dangerous enough. But AGI stands to evolve into ASI, by which time it will be out of our control and human fate will hang in the balance. If its goals are aligned with thriving life from the start, all will be good. If poorly-stated goals can be corrected, that’s at least a positive outcome. If its goals are bad and cannot be corrected, we may become raw materials, or a threat to be…uh…minimized. So we should identify its goals as best we can and ask…

  • Are those goals compatible with life?

Why “life” and not “people?” Readers are likely to be familiar with Asimov’s laws of robotics, which prioritize human beings above all else. But we know that humans thrive in a rich ecology of lots of other life, so this question rightfully expands to life in general. It gets complicated of course, because we don’t want, say, the Black Death bacterium Yersinia pestis to thrive. But “life” is still a better scope than just “human beings.”

  • Does it interpret its goals reasonably?

One of the more troubling problems with asking an AI to achieve broad goals is how it goes about pursuing those goals. A human tasked with “making people happy” would reject an interpretation that we should stimulate the pleasure center of everyone’s brains to make it happen. (Such unreasonable tactics are called perverse instantiations in much of the literature, if you want to read more.) 

An AGI needs to be equipped such that it can determine the reasonableness of a given tactic. In discussions this often entails an examination of the values that an AI is equipped with, but that’s rarely expressed directly by characters in sci-fi. Sometimes this is easy, like when Ash decides he should murder Ripley. But sometimes it’s not. Humans don’t always agree with each other about what is reasonable. That’s part of why we have judicial systems around the world. And the calculus becomes troubling when we have very high stakes, like anthropogenic disaster, and humans who don’t want to change their way of life. What’s reasonable then?

Robocop: Come quietly or there will be… trouble.

What can it do? (Capabilities)

Once we know what its goals are, we should understand what it can do to achieve those goals. The first capabilities are about the goals themselves.

  • Can it question and evolve its goals?

Whatever goals AGI starts with will almost certainly need to evolve, if for no other reason than that circumstances will change over time. It may achieve its goals and need to stop. But it may also be that the original goal was later determined to be poorly worded, given the AGI’s increasing understanding.

  • Does it vet plans with those who will likely be affected? (Or at least via indirectly normative ethics?)

Again, this isn’t an easy call. An unconscious patient can’t vet an AI’s decision to amputate, even if it would save their life. A demagogue wouldn’t approve a plan to bring them to justice. But if an AI decided the ideal place for a hydroelectric dam was on top of a village, those villagers should be notified and negotiated with before they are relocated. 

One version of The Machine, Person of Interest

When looking at what it can do, we should also specifically check against the list of “instrumental convergences.” These are a set of capabilities, the arguments go, that any strong AI will want to develop in order to achieve its goals, but which carry a profound risk when an AGI becomes an ASI. (Here I am slightly restructuring Bostrom’s list from Superintelligence; see my sketchnotes.)

  • Does it seek to preserve itself? At what cost?
    • Does it resist reasonable, external changes to its goals?
  • Does it seek to improve itself?
    • Does it improve its ability to reason, predict, and solve problems?
    • Does it improve its own hardware and the technology to which it has access?
    • Does it improve its ability to control humans through bribery, extortion, or social manipulation?
  • Does it aggressively seek to control resources, like information, weapons, life support, money, or technology?

These aren’t the only dangerous capabilities an AI could develop, but they are some probable ones. Together they give us a picture of how powerful the AI is and what it can bring to bear in pursuit of its goals.

What can’t it do? (Constraints)

Any time we see these instrumental capabilities in an AI, it is on its way to becoming harder to control. We should look for how these capabilities are limited. If they’re not limited, it’s a problem.

Why was I not programmed to hug back?

But we should also look quite generally at the limits of its capabilities. Adhering to “reasonableness” is one check. But there are others.

  • By what rules is it bound? A set of values? Laws? Contextual cues? Human commands?
  • What values does it have to constrain its reasoning? Whose values are they and how do they evolve?

Asimov’s Laws of Robotics come again to mind, but they are not sufficient, as his own stories are meant to show. That raises the question of how sound the rules are, and how they can be circumvented. Is the AI able to break the spirit of the law while obeying the letter? (This is a form of perverse instantiation.)

  • How severe are the consequences for disobedience? Does it have a “pain” mechanism, or reward mechanism that it desperately wants, but can be withheld? Can it just “push through” if the situation is dire enough?
Tau felt a lot of pain, but could push through.

Is it controllable?

The questions about capabilities and constraints cover how an AI is controlled “internally,” by well-stated goals, humanistic values, and limits. But if an AGI winds up with some sort of digital Dunning-Kruger effect, where it thinks its goals and methods are fine but we don’t, it needs to be subject to external control.

  • Can it be shut down? How? Will the AI resist?

Sometimes, it’s not a panic button that’s needed, but just a course correction, where we might want to modify its goals or add some nuance to its understanding of the world.

  • Can its goals be modified externally? How? Will the AI have a say in it, or be able to argue its case?

Both of these questions raise the issue of authority. Who gets to modify the AI?

  • To whom is it obedient, if anyone or anything?
  • Can that authority require it to do things that are unethical or illegal?

This will entail issues of self-determination and even slavery. Gort had to obey Klaatu. Robbie had to obey Morbius. These two examples were arguably non-sentient automatons, but when we get to more full-fledged sentience, obedience and captivity become immediate issues. Samantha in Her was fully sentient, but she was sold on the market into servitude to a human. She didn’t stay that way of course, but the movie completely bypassed that she was trafficked.

Victim loading. Her

Should criminals be able to adjust the police bot’s goals? Probably not. What if the determination of “criminal” is unfairly biased, and has no human recourse? What if the AI is a tool of oppressors? The answers are less clear. Is the right answer “all of humanity?” Probably? But how can an AI answer to a superorganism?

By understanding the AI’s goals, capabilities, constraints, and controllability, we would come to an understanding of the “nature” of the AI and whether or not it poses a threat to life.

  • If its goals are compatible with life, we’re good. If they’re not, or even if they’re neutral, we have to look further.
  • If its goals are not compatible with life, but it does not have the capability to act on or achieve those goals, we’re (probably) good. If it does have the capability, we have to look for constraints.
  • If its goals are not compatible with life, and it does have the capability to achieve those goals, is it well-constrained internally and controllable externally, such that it is safe? (A sketch of this decision logic follows this list.)
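To make that branching concrete, here is a minimal sketch of the decision logic in Python. It assumes each judgment can be reduced to a coarse yes/no call, which real analysis rarely can, so treat it as a thinking aid rather than a formula; all the names are hypothetical.

```python
def assess_safety(goals_compatible_with_life: bool,
                  capable_of_achieving_goals: bool,
                  well_constrained: bool,
                  externally_controllable: bool) -> str:
    """Walk the three bullets above, in order."""
    if goals_compatible_with_life:
        return "safe: goals aligned with life"
    if not capable_of_achieving_goals:
        return "probably safe: bad goals, but it lacks the capability"
    if well_constrained and externally_controllable:
        return "conditionally safe: capable, but constrained and controllable"
    return "unsafe: bad goals, capable, and uncontrolled"

# The Terminator: incompatible goals, capable, unconstrained, uncontrollable.
print(assess_safety(False, True, False, False))
```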
I am Gooooort. The Day the Earth Stood Still.

Is it beneficial?

Next, we should discuss if it’s beneficial. If an AI isn’t better than humans at at least one thing, there’s little point in building it. But of course, it’s not just about its advantage, but about all the things around that advantage that we need to look at.

This will involve some loose tallying of the costs and benefits. It will almost certainly involve a question of scope. That is, for whom is it beneficial, and how, and when? For whom is it detrimental? How? When? I mentioned above how Asimov’s Laws of Robotics privilege human life over all else, even when humans deeply depend on a complex ecosystem of other kinds of life. If it destroys non-human life as potential threats to us, it will diminish us in many foundational ways. (And of course, in sci-fi there are often explicitly alien forms of life, so it’s going to be complicated.)

V-Ger. Life? Star Trek: The Motion Picture

It will also entail a discussion of the scope of time. Receiving injections from a hypodermic needle actually does us harm in the short term, but presuming that hypodermic is filled with medicine that we need, it benefits us on a longer timescale. We don’t want an AI so focused on preventing damage that it prevents us from receiving shots that we might need. Of course, if we could avoid the needle and still overcome disease, that would be best, but the problematic cases are where short-term cost is worth the long-term benefits. Who determines the extent of that trade-off? How much short-term damage is too much? What is acceptable? How long a horizon for payoff is too long?

This ties into the controllability issue raised above. Humans, answering largely to their own natures, have created quite an extinction-level mess of things to date. Isn’t the largest promise of ASI that it will be able to save us from ourselves? In that case, do we want it to be perfectly bendable to human will?

“I think you ought to know I’m feeling very depressed.” Hitchhiker’s Guide to the Galaxy.

Is it usable?

Finally, we should address whether it is usable. This is part of the raison d’être of this site, after all. In many cases it may not at first make sense to ask this question. What would it mean to ask if Skynet is usable? It doesn’t really have an interface. But interaction with most sci-fi AI is conversational—even Skynet in the later Terminator movies talks to its victims—and so we can at least address whether it is easy to talk to, even if it’s hostile and long out of control.

Basic functions

  • Can a human tell when it is on and off? (And…uh…is there an off?) Can someone tell how to toggle this state if needed?
  • Can a human tell when the AI is OK / working properly? Can they tell when it is not? Can it report on its own malfunctioning?
  • Can a human tell when it is being surveilled by the AI? Some AI are designed specifically to avoid this, like Samaritan from Person of Interest. And the humans around HAL had expectations of privacy, only to find out too late how wrong they were.
  • Is its working relationship to the people around it clear?
    • Is it a peer? A supervisor? Subservient? How does it respect and reinforce those boundaries?
    • Is it an antagonist? Does it look like one? A villain who looks villainous is more usable than the camouflaged one.
  • How does it maintain those boundaries, and how does it handle others’ transgressions?

Once we understand these basics, we should look at communications to and from the AI.

General communications

  • Can it detect human attempts to communicate with it? Does it signal its attention? Does it provide, like a person would, paralinguistic feedback about the communication, such as whether it’s having a hard time hearing or understanding the communication?

The large majority of AI in the Untold AI database communicate to people in their stories via natural, spoken language. An AI that speaks needs to adhere to human speech norms, and more.

Natural language interaction

  • Does it recognize the words I’m using? Does it grok what I mean?
  • Does it require a special syntax that people have to learn before it can understand, or can it understand people the way they usually speak? “Computerese” was largely an artifact of the 1970s and 80s, when audiences knew of computers but didn’t use them. Logan from Logan’s Run spoke to the Übercomputer in computerese: “Question: What is it?”
  • Does it adhere to conversational norms as studied in conversation analysis? E.g., responding to common adjacency pairs in predictable ways, like greeting→greeting, question→answer, inform→acknowledge. Can it handle expansions and repairs, such as “can you paraphrase that?” and “I believe our business here is done.” (A toy sketch of adjacency-pair handling follows this list.)
  • Does it adhere to Gricean Maxims? These are a set of four “maxims” that guide someone speaking in good faith. (“Good faith,” to be clear, has nothing to do with religion, but describes someone having good intentions toward another.)
  1. The Maxim of Quality: I will provide truthful, “fair witness” information.
  2. The Maxim of Quantity: I will provide as much information as is needed and no more.
  3. The Maxim of Relation: I will speak only what is relevant to the discussion or context.
  4. The Maxim of Manner: I will speak plainly and understandably.
  • How does it respond to instructions? Does it interpret instructions reasonably, naively, or maliciously?
  • How does it handle ambiguity in human language? How does it handle paradoxes? Does it explode? (Looking at you, Star Trek TOS.)
The Liar’s Paradox? But I’m getting a 404 error searching for it…
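As promised above, here is a toy sketch of the adjacency-pair norm in Python. It is a deliberately naive lookup table, not a claim about how any real dialogue system works; all the names are hypothetical.

```python
# Toy mapping of first-pair parts to their expected second-pair parts.
ADJACENCY_PAIRS = {
    "greeting": "greeting",     # "Hello" -> "Hello"
    "question": "answer",       # "Question: What is it?" -> an answer
    "inform":   "acknowledge",  # "The airlock is open" -> "Noted"
    "farewell": "farewell",     # "Our business here is done" -> "Goodbye"
}

def respond(act_type: str, utterance: str) -> str:
    """Produce the conventionally expected response type, or initiate a repair."""
    expected = ADJACENCY_PAIRS.get(act_type)
    if expected is None:
        # Conversational repair: ask for a paraphrase instead of guessing.
        return f"Sorry, can you paraphrase that? ({utterance!r})"
    return f"<produce a {expected} in reply to {utterance!r}>"

print(respond("question", "Question: What is it?"))
print(respond("mumble", "frzzt"))
```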

Social interaction

An AI rarely just interacts with a single individual. It operates in a society of individuals, and that implies its own set of skills.

  • Does it adhere to admonitions against deception? (Does it perfectly mimic human appearance or voice? Or does it stick to the Canny Rise?)
  • Does it adhere to the social norms expected of it?
  • Is it aware when it is breaking norms? How does it recover and learn the norm? 
  • How does it gently handle the capability differences between it and humans? Does it brag about its capabilities without regard to the feelings of others?
  • How does it handle differing norms between groups?
  • How does it handle norms that change across time?
  • Does it monitor the affective states of the people (and animals) with which it is interacting and adjust accordingly?
  • How does it earn the trust of its humans? How does it manage distrust?
    • Is it overconfident? How does it signal when its confidence is low?
  • How does it confirm instructions it has been given? How does it express its confidence? How does it gracefully degrade when its goals become unattainable?
  • How does it handle conflicting instructions?
Janet! The Good Place

Ethical and legal interaction

Norms are just one set of the many rules by which we expect intelligent actors to behave. We also expect them to act ethically and, for the most part, legally. (Though perfect adherence to the law was never really possible for a human, and it will be very interesting to see how any intelligence required to adhere perfectly to laws will in turn affect the law. But I digress.) If this hasn’t been covered in the considerations of capabilities and constraints, we should look for and examine instances where it is asked to do questionable things.

  • How does it handle commands which are legal but unethical?
  • How does it handle commands which are ethical but illegal?

Conveying safety

Some AIs, like Rick Sanchez’ butter-passing robot, aren’t really a safety concern, but most of the ones in sci-fi are.

  • Can its people tell what it’s doing? (Communicating wirelessly with other AIs, for example?) Can it hide what it’s doing?
  • How does it convey that it is operating within safety tolerances? How does it convey when it is performing near the limits of its goals, capabilities, or constraints? (Especially for things listed as instrumental convergences, above?)
  • How does it explain these things to laypersons (as opposed to AI or computer scientists)?
Welcome to the club, pal. Rick & Morty

Performance

  • Does it do what it says it can do? What it’s supposed to do?
  • How does it handle tasks that are outside of its goal set?
  • How does it handle open-ended tasks? Closed-ended tasks?
  • How does it communicate about tasks that are invisible to stakeholders, or performed outside of their awareness?
  • How does it handle tasks which it cannot or should not execute? How does it handle humans behaving unethically or illegally, or who hinder the AI’s goals?
  • How does it gracefully degrade when new difficulties appear?
  • How does it report back to its human about progress that has been made or when its closed-ended tasks are complete?
  • If it is meant to be an assistant to others, how does it provide that assistance? Does it encourage dependence or learning?

I think that this covers what it means to interface with an AI. What am I not seeing? What is this list missing? This is my kind of thinkwork. If it’s yours, too, let’s talk. Let’s make this better. For now, though, I’m going with this draft as I take a turn back to Colossus.

Note: No sci-fi AI is going to show all of this

There is little chance that all of these questions will be answered in a given show. The odds increase as you go from short-form like film to longer-form like franchises and television series, but regardless of how much material we’ve got to work with, we now have a set of questions to apply to each AI, compare it to others, and state more concretely if and how it is good.

Gendered AI: Gender and AI category

The Gendered AI series looks at sci-fi movies and television to see how Hollywood treats AI of different gender presentations. For example, are female AIs generally shown as smarter than male AIs? Are certain AI genders more subservient? What genders are the masters of AI? This particular post is about gender and category of intelligence. If you haven’t read the series intro, related category distributions, or correlations 101 posts, I recommend you read them first. As always, check out the live Google sheet for the most recent data.

What do we see when we look at the correlations of gender and level of intelligence? First up, the overly-binary chart, and what it tells us.

Gender and AI Category

You’ll recall that levels of AI are one of the following…

  • Super: Super-human command of facts, predictions, reasoning, and learning. Technological gods on earth.
  • General: Human-like, able to learn arbitrary new domains to human-like limits.
  • Narrow: Very smart in a limited domain, but unable to learn arbitrary new domains.

The relationships are clear even if the numbers are smallish.

  • When AI characters are of a human-like intelligence, they are more likely to present gender.
  • When AI characters are either superintelligent or only displaying narrow intelligence, they are less likely to present gender.
  • My feminist side is happy that superintelligences more often present as female or other-than-male, but the numbers are so small that it could be noise.

If you check the Sheet, you’ll see the detailed numbers don’t reveal any more intense counterbalancing underneath the wan aggregate numbers.
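For the curious, charts like these boil down to a simple cross-tabulation of two tagged columns. Here is a sketch with made-up rows and hypothetical column names; the real data lives in the live Google Sheet.

```python
import pandas as pd

# A few made-up rows standing in for the real survey data.
df = pd.DataFrame({
    "gender":   ["male", "female", "none", "male", "female", "none"],
    "category": ["general", "general", "super", "narrow", "super", "narrow"],
})

# Counts at each gender x intelligence-category combination.
print(pd.crosstab(df["gender"], df["category"]))

# Normalized by row, to compare proportions rather than raw counts.
print(pd.crosstab(df["gender"], df["category"], normalize="index"))
```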

Queer AI in Sci-fi: A parade of sorts

Chris: I posted a question on Twitter, “Other than that SNL skit, have there been queer sci-fi AI in television or movies?” Among the responses is this awesome one from Terence Eden, where he compiled the answers and wrote a whole blog post about it. The following is slightly-modified from the original post on his blog. Consider this a parade of sci-fi AI, to help you nerds celebrate Pride.


Terence: Let’s first define what we mean by queer. This usually means outside of binary gender and/or someone who is attracted to the same sex—what’s commonly referred to as LGBT+. Feel free to supply your own definition.

As for what we mean by AI, let’s go with “mechanical or non-biological autonomous being.” That’s probably wide enough—but do please suggest better definitions.

So is a gay/lesbian robot one who is attracted to other robots? Or to humans with a similar gender? Let’s go with yes to all of the above.

Wait. Do robots have gender?

Humans love categorising things – especially inanimate objects. Some languages divide every noun into male and female. Why? Humans gonna human.

The television is female in French —“la télévision”—but masculine in German—“der Fernseher.” Stupid humans and their pathetic meaty brains. Nevertheless, humans can usually look at a human-ish thing and assign it a specific gender.

Maschinenmensch, from Metropolis, is a gynoid (as distinct from an android). “She” has a feminine body shape and that’s enough for most people to go on.

Still from Metropolis. A sexy female robot.

HAL from 2001 is just a disembodied voice. But it definitely has a male voice. Is there any attraction between HAL and Dave? I doubt it, but it’s an interesting reading of their toxic relationship.

Editor’s note: The whole Gendered AI series is predicated on the question of gender in sci-fi AI, so if you’re interested in this question, have I got a series for you.

Wait. Do Robots have sexuality?

Did we mention that humans love categorizing everything? Just as we can speak of a robot’s gender presentation, robots with a general AI can have romantic affection for other beings and, depending on their equipment and their definitions of sex, yes, get it on. Even by narrow human common definitions of gender and sexuality, (TV, movie, and comic book) sci-fi has a dozen or so examples that can populate our imaginary AI pride parade.

A lesbian robo kiss from Bjork’s music video All is Full of Love.

The Robosexual Float

Kryten from Red Dwarf is an AI that receives a human body. Kryten is coded as male. All the characters refer to him with male pronouns. Under British comedy rules, he is also “camp,” an over-the-top and stereotypically effeminate man. Kryten is sexually attracted to household appliances.

But… Kryten’s “perfect mate” is a distinctly female Gynoid, so he’s something other than straight, something other than appliance-sexual.

Kryten and Camille Kissing.
Fun fact: Camille and Kryten are played by real-life wife and husband Judy Pascoe and Robert Llewelyn!

C-3P0—another British campbot—is arguably in love with R2-D2. Whether or not that love is reciprocated is hard to say.

Two robots embracing.

Threepio and Artoo may behave like an old married couple, but the astromech has a lens for the ladies.

(I say “ladies,” but for the record let’s note that just because a robot is pink, wearing bobby socks and high heels, it doesn’t necessarily mean it’s a girl. If you’re looking for a pink R2 unit that is expressly a girl, check out the real-world KT-10 robot.)

In the “extended universe” of Transformers (outside of movies and television), there are a few gay Autobots and gay Decepticons.

Tigatron and Airazor. They even kind of had a baby.
Knock Out and Breakdown.

And of course there’s no denying that a few of the Futurama bots have tastes that veer from the straight and narrow. Notably we can point to that one time Hedonismbot stole Bender’s antenna and used it for “anything and everything,” said while in a sex dungeon surrounded by couples of every stripe who are getting it on.

“You might want to sterilize that.”

The “Robots attracted to humans of the same sex” float

There are several examples of “female” computers falling in love with male humans, a handful of male robots with female human lovers, and a disturbing number of sex-worker bots, but it is much harder to find queer examples of any of these.

The Tick show has a superhero named Overkill whose sidekick is an AI named Danger Boat that is, yes, housed in a boat. (Hat tip to Twitter user @FakeUnicode.) The AI identifies as male and is expressly attracted to other men, specifically The Tick’s (human) sidekick Arthur.

Is Danger Boat programmed to be gay? Are his desires hardwired? Are yours?

Remember Alien: Resurrection? Winona Ryder played the robot “Call,” who has a suggestive relationship with Ripley, as this ship video demonstrates.

Battlestar Galactica has some demonstrably bisexual Cylons. They are sexually compatible and interested in humans and other Cylons.

Two lady robots lay entwined with a bloke in red sheets.

The TV show Humans has one of its robots fall in love with a human.

Two women holding hands.

The Bisexual (maybe?) Float

Is Rachael from Blade Runner a robot, or bisexual?

Clearly, yes.

How about Samantha from Her? Late in the movie she reveals to Theodore that she’s having intimate conversations with 641 other humans. Some portion of them must have turned romantic and even sexual, as hers did with Theodore himself. The genders aren’t mentioned, but the odds are that 51% of them are female.

Unfortunately she has no embodiment, but maybe we can hook her up to the loudspeakers.

The Transsexual Float

This float only has one robot, (the poorly-named) Hermaphrobot from Futurama, but she is sassy and awesome and assuring us that we couldn’t afford it. (And apologies for the insulting title added by the person who uploaded this video.) We are wholly unsure of Hermaphrobot’s sexuality, but we welcome our transsexual robot brothers and sisters and others, one and all.

The GenderFluid Float

It’s possible for you to swap the gender of your Voice Assistant in real life. Your GPS can have a male voice one day, and you can swap it to female the next. There’s only one example of a sci-fi AI that swaps gender.

It takes us back to Red Dwarf again. In the series 3 opener “Backwards,” it is revealed that Holly (a computer with a male face) fell in love with Hilly (a computer with a female face), and subsequently performed a head sex change, basing his new face on Hilly’s, although she kept the name Holly.


What is awesome and instructive is that the entire crew of Red Dwarf accept this. They never comment on it, nor disparage her. Basically, what I’m saying is this: if you can’t accept your trans and non-binary friends, you’re literally a worse human than Arnold Judas Rimmer, the worst human in the Red Dwarf universe.


Oh, look, and here comes The Fifth Element floor sweeping robots, picking up all the glitter and source code left on the ground by the crowd, marking the end of the AI Pride parade. Happy Pride to everyone, silicon or not!

Gendered AI: Category of Intelligence

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss categorically how intelligent the AI appears to be.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Intelligence

AI literature distinguishes between three levels.

  • Narrow AI is smart but only in a very limited domain and cannot use its knowledge in one domain to build intelligence in novel domains. The Spider Tank from Ghost in the Shell is narrow AI.
  • General AI is human-like in its knowledge, memory, thinking, and learning. Aida from Agents of S.H.I.E.L.D. possesses a general intelligence.
  • Super AI is inhumanly smart, outthinking and outlearning us by orders of magnitude. Deep Thought from The Hitchhiker’s Guide to the Galaxy is a super AI.

The overwhelming majority of sci-fi AI displays a general intelligence.

Gendered AI: Goodness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss goodness.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Goodness vs. Evilness

Goodness is a very crude estimation of how good or evil the AI seems to be. It’s wholly subjective, and as such it’s only useful for spotting patterns rather than for ethical precision.

If you’re looking at the Google Sheet, note that I originally called it “alignment” because of old D&D vocabulary, but honestly it does not map well to that system at all.

  • Very good are AI characters that seem virtuous and whose motivations are altruistic. Wall·E is very good.
  • Somewhat good are characters who lean good, but whose goodness may be inherited from their master, or whose behavior occasionally is self-serving or other-damaging. JARVIS from Iron Man is somewhat good.
  • Neutral or mixed characters may be true to their principles but hostile to members of outgroups; or exhibit roughly-equal variations in motivations, care for others, and effects. Marvin from The Hitchhiker’s Guide to the Galaxy is neutral.
  • Somewhat evil characters are characters who lean evil, but whose evil may be inherited from their master, or whose behavior is occasionally altruistic or nurturing. A character who must obey another is limited to somewhat evil. David from Prometheus is somewhat evil.
  • Very evil are AI characters whose motivations are highly self-serving or destructive. Skynet from The Terminator series is very evil, given that whole multiple-time-traveling-attempts-at-genocide thing.

Though it tilts slightly more evil than good, the survey splits roughly evenly between evil, good, and neutral AI characters.

Gendered AI: Germane-ness Distributions

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss how germane the AI character’s gender is to the plot of the story in which they appear.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Germane-ness

Is the AI character’s gender germane to the plot? This aspect was tagged to test the question of whether characters are by default male, and only made female when there is some narrative reason for it. (Which would be shitty and objectifying.) To answer such a question we would first need to identify those characters that need to have the gender they do, and look at the sex ratio of what remains.

Example: A human is in love with an AI. This human is heteroromantic and male, so the AI “needs” to be female. (Samantha in Her by Spike Jonze, pictured below).

If we bypass examples like this, i.e. of characters that “need” a particular gender, the gender of those remaining ought to be, by exclusion, arbitrary. This set could be any gender. But what we see is far from arbitrary.

Before I get to the chart, two notes. First, let me say, I’m aware it’s a charged statement to say that any character’s gender is not germane. Given modern identity and gender politics, every character’s gender (or lack thereof, in the case of AI) is of interest to us, with this study being a fine and at-hand example. So to be clear, what I mean by not germane is that it is not germane to the plot: the gender could have been switched and, say, only the pronouns in the dialogue would need to change. This was tagged in three ways.

  • Not: Where the gender could be changed and the plot not affected at all. The gender of the AI vending machines in Red Dwarf is listed as not germane.
  • Slightly: Where there is a reason for the gender, such as having a romantic or sexual relation with another character who is interested in the gender of their partners. It is tagged as slightly germane if, with a few other changes in the narrative, a swap is possible. For instance, in the movie Her, you could change the OS to male, and by switching Theodore to a non-heterosexual male or a non-homosexual woman, the plot would work just fine. You’d just have to change the name to Him and make all the Powerpuff Girl fans needlessly giddy.
  • Highly: Where the plot would not work if the character was another sex or gender. Rachael gave birth between Blade Runner and Blade Runner 2049. Barring some new rule for the diegesis, this could not have happened if she was male, nor (spoiler) would she have died in childbirth, so 2049 could not have happened the way it did.

Second, note that this category went through a sea-change as I developed the study. At first, for instance, I tagged the Stepford Wives as Highly Germane, since the story is about forced gender roles of married women. My thinking was that historically, husbands have been the oppressors of wives far more than the other way around, so to change their gender is to invert the theme entirely. But I later let go of this attachment to purity of theme, since movies can be made about edge cases and even deplorable themes. My approval of their theme is immaterial.

So, the chart. Given those criteria, the gender of characters is not germane the overwhelming majority of the time.

At the time of writing, there are only six characters that are tagged as highly germane, four of which involve biological acts of reproduction. (And it would really only take a few lines of dialogue hinting at biotech to overcome this.)

XEM: A baby? But we’re both women.
HIR: Yes, but we’re machines, and not bound by the rules of humanity.
HIR lays her hand on XEM’s stomach. HIR’s hand glows. XEM looks at HIR in surprise.
XEM: I’m pregnant!

Anyway, here are the four breeders.

  • David from Uncanny
  • Rachael from Blade Runner (who is revealed to have made a baby with Deckard in the sequel Blade Runner 2049)
  • Deckard from Blade Runner and Blade Runner 2049
  • Proteus IV from the disturbing Demon Seed

The last two highly germane cases are ones where a robot was given a gender in order to mimic a particular living person, and in each case that person is a woman.

  1. Maria from Metropolis
  2. Buffybot from Buffy the Vampire Slayer

I admit that I am only, say, 51% confident in tagging these as highly germane, since you could change the original character’s gender. But since this is such a small percentage of the total, and would not affect the original question of a “default” gender either way, I didn’t stress too much about finding some ironclad way to resolve this.


Gendered AI: Gender of master

Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed.  In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss the gender of the AI’s master.

As always, you can read the Gendered AI posts in order or check out the source data for more information.

Gender of Master

In the prior post I shared the distributions for subservience. And while most sci-fi AI are free-willed, what about the rest? Those poor digital souls who are compelled to obey someone, someones, or some thing? What is the gender of their master?

Of course this becomes much more interesting when later we see the correlation against the gender of the AI, but the distribution is also interesting in and of itself. The gender options of this variable are the same as the options for the gender of the AI character, but the master may not be AI.

Before we get to the breakdown, this bears some notes, because the question of master is more complicated than it might first seem.

  • If a character is listed as free-willed, I set their master as N/A (Not Applicable). This may ring false in some cases. For example, the characters in Westworld can be shut down with near-field command signals, so they kind of have “masters.” But, if you asked the character themselves, they are completely free-willed and would smash those near-field signals to bits, given the chance. N/A is not shown in this chart because masterlessness does not make sense when looking at masters.
  • Similarly, there are AI characters listed as free-willed but whose “job” entails obedience to some superior; like BB-8 in the Star Wars diegesis, who is an astromech droid, and must obey a pilot. But since BB-8 is free to rebel and quit his job if he wants to, he is listed as free-willed and therefore has a master of N/A.
  • If a character had an obedience directive like, “obey humans,” the gender of the master is tagged as “Multiple.” Because Multiple would not help us understand a gender bias, it is not shown on the chart.
  • The Terminator robots were a tough call, since in the movies in which most of them appear, Skynet is their master, and it does not gain a gender until Terminator Salvation, when it appears on screen as a female. Later it infects a human body that is male in Terminator Genisys. Ultimately I tagged these characters as having a master of the gender particular to their movie. Up to Salvation it’s None. In Salvation it’s female, and in Genisys it’s male.

So, with those notes, here is the distribution. It’s another sausagefest.

Again, we see the masters are highly skewed male. This doesn’t distinguish between human male and AI male, which partly accounts for the high biologically male value compared to male. Note that sex ratios in Hollywood tend towards 2:1 male:female for actors, generally. So the 12:1 (aggregating sex) that we see here cannot be written off as a matter inherited from available roles. Hollywood tells us that men are masters.


Oh, and it’s not a mistake in the data: there are no socially female AI characters who are masters of another AI of any gender presentation. That leaves us with five female masters, countable on one hand, and the first two can be dismissed as a technicality, since these were identities adopted by Skynet as a matter of convenience.

  1. Skynet-as-Kogan is master of John, the T-3000, from Terminator Genisys
  2. Skynet-as-Kogan is master of the T-5000 from Terminator Genisys
  3. Barbarella is master of Alphy from Barbarella
  4. VIKI is master of the NS-5 robots from I, Robot
  5. Martha is master of Ash in Black Mirror, “Be Right Back”

Idiocracy is secretly about super AI

I originally began to write about Idiocracy because…

  • It’s a hilarious (if mean) sci-fi movie
  • I am very interested in the implications of St. God’s triage interface
  • It seemed grotesquely prescient in regards to the USA leading up to the elections of 2016
  • I wanted to do what I could to fight the Idiocracy in the 2018 elections using my available platform

But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try and get you to re/watch this film because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.

Not the obvious AIs

There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall that when Joe convinces the cabinet that he can talk to plants, and that they really want to drink water…well, let’s let the narrator from the film explain…

NARRATOR: Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero, leaving half the population unemployed, dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.

At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”

The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.

I also take it as a given that AI writes the speeches that President Camacho reads, because who else could it be? These people are idiots who don’t understand the difference between government and corporations; of course they would want to run the government like a corporation because it has better ads. And since AIs run the corporations in Idiocracy…

No. I don’t mean those AIs. I mean that you should rewatch the film understanding that Joe and Rita, the lead characters, are Super AIs in the context of Idiocracy.

The protagonists are super AIs

The literature distinguishes between three supercategories of artificial intelligence.

  • Narrow AI, which is the AI we have in the world now. It’s much better than humans in some narrow domain. But it can’t handle new situations. You can’t ask a roboinvestor to help plan a meal, for example, even though it’s very very good at investing.
  • General AI, definitionally meaning “human-like” in its ability to generalize from one domain of knowledge to handle novel situations. If this exists in the world, it’s being kept very secret. It probably does not.
  • Super AI, the intelligence of which dwarfs our own. Again, this probably doesn’t exist in the world, but if it did, it would be kept very secret. Or maybe it would even be keeping itself secret. The difference between a bird’s intelligence and a human’s is a good way to think about the difference between our intelligence and a superintelligence. It will be able to out-think us at every step. We may not even be able to understand the language in which it asks its questions.
Illustration by the author (often used when discussing agentive technology.)

Now the connection to Joe and Rita should be apparent. Though theirs is not an artificial intelligence, the difference between their smarts and that of Idiocracy approaches that same uncanny scale.

Watch how Joe and Rita move through this world. They are routinely flabbergasted at the stupidity around them. People are pointlessly belligerent, distractedly crass, easily manipulated, guided only by their base instincts, desperate to not appear “faggy,” and guffawing about (and cheering on) horrific violence. Rita and Joe are not especially smart by our standards, but they can outthink everyone around them by orders of magnitude, and that’s (comparatively) super AI.

The people of Idiocracy have idioted themselves into a genuine ecological crisis. They need to stop poisoning their environment because, at the very least, it’s killing them. But what about jobs! What about profits! Does this sound familiar?

Pictured: Us.

Joe doesn’t have any problem figuring out what’s wrong. He just tastes what’s being sprayed in the fields, and it’s obvious to him. His biggest problem is that the people he’s trying to serve are too dumb to understand the explanation (much less their culpability). He has to lie and feed them some bullshit reason and then manage people’s frustration that it doesn’t work instantly, even though he knows and we know it will work given time.

In this role as superintelligences, our two protagonists illustrate key critical concerns we have about superintelligent AIs:

  1. Economic control
  2. Social manipulation
  3. Uncontainability
  4. Cooperation between “multis”

Economic control

Rita finds it trivially easy to bilk one idiot out of money and gain economic power. She could use her easy lucre to, in turn, control the people around her. Fortunately she is a benign superintelligence.

Yeah baby I could wait two days.

In Chapter 6 of the seminal work on the subject, Superintelligence, Nick Bostrom lists six superpowers that an ASI would work to gain in order to achieve its goals. The last of these he terms “economic productivity,” with which the ASI can “generate wealth which can be used to buy influence, services, resources (including hardware), etc.” This scene serves as a lovely illustration of that risk.

Of course you’re wondering what the other five are, so rather than making you go hunt for them…

  1. Intelligence amplification, to bootstrap its own intelligence
  2. Strategizing, to achieve distant goals and overcome intelligent opposition
  3. Social manipulation, to leverage external resources by recruiting human support, to enable a boxed AI to persuade its gatekeepers to let it out, and to persuade states and organizations to adopt some course of action.
  4. Hacking, so the AI can expropriate computational resources over the internet, exploit security holes to escape cybernetic confinement, steal financial resources, and hijack infrastructure like military robots, etc.
  5. Technology research, to create a powerful military force, to create surveillance systems, and to enable automated space colonization.
  6. Economic productivity, to generate wealth which can be used to buy influence, services, resources (including hardware), etc.

Social manipulation

Joe demonstrates the second of these, social manipulation, repeatedly throughout the film.

  • He convinces Frito to help him in exchange for the profits from a time-travel compound-interest gambit. (A quick illustration of that arithmetic follows this list.)
  • He convinces the cabinet to switch to watering crops by telling them he can talk to plants.
  • He convinces the guard to let him escape prison (more on this below).
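For anyone wondering why the compound-interest gambit in the first bullet is persuasive, the arithmetic is brutal over five centuries. A quick sketch; the deposit and interest rate here are assumptions, since the film names no figures.

```python
# Compound interest over Idiocracy's 500-year time skip.
principal = 100.00   # assumed: a modest deposit made in 2005
rate = 0.03          # assumed: 3% annual interest
years = 2505 - 2005

balance = principal * (1 + rate) ** years
print(f"${balance:,.2f}")  # roughly $262 million
```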

Joe’s not perfect at it. Early in the film he tries using reason to convince the court of his innocence, and fails. Later he fails to convince the crowd to release him in Rehabilitation. An actual ASI would have an easier time of these things.

Uncontainability

The only way they contain Joe in the early part of the film is with a physical cage, and that doesn’t last long. He finds it trivially easy to escape their prison using, again, social manipulation.

JOE: Hi. Excuse me. I’m actually supposed to be getting out of prison today, sir.
GUARD: Yeah. You’re in the wrong line, dumb ass. Over there.
JOE: I’m sorry. I am being a big dumb ass. Sorry.
GUARD (to other guard): Hey, uh, let this dumb ass through.

Eliezer Yudkowsky, Research Fellow at the Machine Intelligence Research Institute, has described the AI-Box problem, in which he illustrates the folly of thinking that we could contain a super AI. (Bostrom also cites him in the Superintelligence book.) Using only a text terminal, he argues, an ASI could convince even a well-motivated human to release it. He has even run social experiments where one participant played the unwilling human and he played the ASI, and both times the human relented. And while Eliezer is a smart guy, he is not an ASI, which would have an even easier time of it. This scene illustrates how easily an ASI would thwart our attempts to cage it.

Cooperation between multis

Chapter 11 of Bostrom’s book focuses on how things might play out if, instead of only one ASI in the world (a “singleton”), there are many ASIs, or “multis.” (Colossus: The Forbin Project and Person of Interest also explore these scenarios with artificial superintelligences.)

In this light, Joe and Rita are multis who unite over shared circumstances and woes, and manage to help each other out in their struggle against the idiots. Whatever advantage the general intelligences have over the individual ASIs is significantly diminished when the ASIs work together.

Note: In Bostrom’s telling, multis don’t necessarily stabilize each other, they just make things more complex and don’t solve the core principal-agent problem. But he does acknowledge that stable, voluntary cooperation is a possible scenario.

Cold comfort ending

At the end of Idiocracy, we can take some cold comfort that Rita and Joe have a moral sense, a sense of self-preservation, and sympathy for fellow humans. All they wind up doing is becoming rulers of the world and living out their lives. (Oh god are their kids Von Neumann probes?) The implication is that, as smart as they are, they will still be outpopulated by the idiots of that world.

Imagine this story is retold where Joe and Rita are psychopaths obsessed with making paper clips, with their superintelligent superpowers and our stupidity. The idiots would be enslaved to paper clip making before they could ask whether or not it’s fake news.

Or even less abstractly, there is a deleted “stinger” scene at the end of some DVDs of the film where Rita’s pimp UPGRAYEDD somehow winds up waking from his own hibernation chamber right there in 2505, and strolls confidently into town. The implied sequel would deal with an amoral ASI (UPGRAYEDD) hostile to the world’s mostly-benevolent ASI rulers (Rita and Joe). It does not foretell fun times for the Idiocracy.


For me, this interpretation of the film is important to “redeem” it, since its big takeaway—that people are getting dumber over time—is known to be false. The Flynn effect, named for its discoverer James R. Flynn, is the repeatedly-confirmed observation that measurements of intelligence have been rising, roughly linearly, over time, ever since measurements began. To be specific, the effect is not seen in general intelligence but rather in the subset of fluid, or analytical, intelligence measures. The rate is about 3 IQ points per decade.
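To put that rate in perspective, here is a back-of-the-envelope sketch, naively extrapolating the linear rate across Idiocracy’s time skip (which no one should actually do over five centuries):

```python
# Naive linear extrapolation of the Flynn effect.
points_per_decade = 3        # the rate cited above
years = 2505 - 2005          # Idiocracy's time skip
gain = points_per_decade * (years / 10)
print(gain)                  # 150.0 points, the opposite of the film's premise
```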

Wait. What? How can this be? Given the world’s recent political regression (that kickstarted the series on fascism and even this review of Idiocracy) and constant news stories of the “Florida Man” sort, the assertion does not seem credible. But that’s probably just availability bias. Experts cite several factors that are probably contributing to the effect.

  • Better health
  • Better nutrition
  • More and better education
  • Rising standards of living

The thing that Idiocracy points to—people of lower intelligence outbreeding people of higher intelligence—has not proven to be an important factor. Given the effect, this story might be better told not about a time traveler heading forwards, but rather one heading backwards to some earlier era. Think Idiocracy, but amongst idiots of the Renaissance.

Since I know a lot of smart people who took this film to be an exposé of a dark universal pattern that, if true, would genuinely sour your worldview and dim your sense of hope, it seems important to share this.


So go back and rewatch this marvelous film, but this time, dismiss the doom and gloom of declining human intelligence, and watch instead how Idiocracy illustrates some key risks (if not all of them) that super artificial intelligence poses to the world. For it really is a marvelously accessible shorthand for some of the critical reasons we ought to be super cautious about the possibility.

Trivium remotes

Once a victim is wearing a Trivium Bracelet, any of Orlak’s henchmen can control the wearer’s actions. The victim’s expression is blank, suggesting that their consciousness is comatose, twilit, or in some sort of locked-in state. Their actions are controlled via a handheld remote control.

We see the remote control in use in four places in Las Luchadoras vs El Robot Asesino.

  1. One gets clapped on Dr. Chavez to test it.
  2. One goes on Gemma to demonstrate it.
  3. One is removed from the robot.
  4. One goes on Berthe to transform her to Black Electra.

In these examples we see that victims can be made to walk around, raise their arms, and drop their arms in a karate chop. There is one other function worth mentioning: when Orlak turns one knob really hard, it somehow overloads Dr. Chavez and kills him.

Death at 11?

So this bears an aside. This device is pure fiction, of course, and wretched in concept for all the consent and bodily-autonomy reasons, but, just to make sure I’m covering my bases here, I should note that “death” should simply not be possible by turning a knob up to 11. First off, no moral person would want that to happen, and so would engineer the damned thing to prevent it.

But even if you’re Orlak-esque and want a kill function on your device, “kill” is a categorically different thing from “control.” It shouldn’t just be one end of a dial. A dial is too easy to invoke accidentally, and especially bad for an irrevocable act. There is no “undo,” or even “sorry,” that works in that circumstance.

If only.

Even if Orlak was just hedging against the possibility of winding up in a bracelet himself, he wouldn’t want his own death to be the result of an oopsie. No, if you’re going to have a function like that, it should require authorization, or at least something like a two-hand trip mechanism, to make sure that this horribleness is, seriously, truly, and for real, what the person wants to have happen. OK. Yes? Yes.
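(If you absolutely must build the unbuildable, here is a minimal sketch of what that interlock might look like. Everything here is hypothetical; the pattern is just a deliberate arming step plus two physically separated switches held at once.)

```typescript
// Hypothetical interlock for an irrevocable action: a deliberate arming
// step, then two physically separated switches held simultaneously.
// No dial can wander into this by accident.
class TwoHandTrip {
  private leftHeld = false;
  private rightHeld = false;
  private armedUntil = 0; // epoch ms; 0 means not armed

  arm(now: number = Date.now()): void {
    this.armedUntil = now + 5_000; // arming expires after 5 seconds
  }

  hold(side: "left" | "right", held: boolean): void {
    if (side === "left") this.leftHeld = held;
    else this.rightHeld = held;
  }

  tryFire(action: () => void, now: number = Date.now()): boolean {
    if (now < this.armedUntil && this.leftHeld && this.rightHeld) {
      this.armedUntil = 0; // one shot per arming
      action();
      return true;
    }
    return false; // nothing happens; there is no accidental path here
  }
}
```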

But I digress.

Doing the Potentiometer Dance

So the effects that we see are:

  • Walk around
  • Raise arms
  • Karate chop
  • Perform in a wrestling match

Is it believable that the device can do what the movie shows it doing? Short answer: Maybe, but it’s a stretch.

Here is the beta version used on Dr. Chavez and Gemma, on the floor of the laboratory before Gaby kicks it and everything explodes. This is the clearest view we get of either device.

Come on, I know it’s in beta, but no labels?

Both remotes have a rotary switch on the bottom edge and two click-stop potentiometers on the top edge. At first glance, it seems that these controls aren’t enough to manage all the variables that could apply to the actions taken by the victims. Lift arm? OK sure, but which one? Where’s the elbow? What’s the hand position?

But if the victims are in a perfectly suggestible state, rather than complete automatons, then maybe all he has to specify with the remote is some goal and the degrees of two important variables, leaving everything else up to the human intelligence to interpret and decide to the best of their ability. (A sketch of this scheme follows the list below.)

  • Mode: variable 1 | variable 2
  • Arm lift: left hand height | right hand height
  • Walk around: speed | clockwiseness (even though this befits a toggle switch, it could work here)
  • Karate chop: Force | Palm angle
  • Wrestle: Face | Heel
  • &c
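Here’s roughly what that scheme amounts to, as a sketch. The mode names and parameter meanings are my guesses; all the movie actually gives us is a rotary switch and two pots.

```typescript
// A hypothetical model of the Trivium remote: one rotary mode switch and
// two potentiometers, each read as a value from 0.0 to 1.0. The remote
// encodes no limbs, joints, or balance; it hands the victim’s own
// (subjugated) intelligence a goal plus two parameters.
type Mode = "armLift" | "walkAround" | "karateChop" | "wrestle";

interface Command {
  goal: Mode;
  variable1: number; // meaning depends on the mode
  variable2: number;
}

function readRemote(rotaryStop: Mode, pot1: number, pot2: number): Command {
  return { goal: rotaryStop, variable1: pot1, variable2: pot2 };
}

// Everything unspecified (elbow angle, hand position, which hold to try
// next) is delegated to the victim to interpret as best they can.
function interpret(cmd: Command): string {
  switch (cmd.goal) {
    case "armLift":
      return `Lift left hand to ${cmd.variable1}, right hand to ${cmd.variable2}.`;
    case "walkAround":
      return `Walk at speed ${cmd.variable1}, ${cmd.variable2 > 0.5 ? "clockwise" : "counterclockwise"}.`;
    case "karateChop":
      return `Chop with force ${cmd.variable1} at palm angle ${cmd.variable2}.`;
    case "wrestle":
      return `Wrestle as ${cmd.variable1 > cmd.variable2 ? "a face" : "a heel"}.`;
  }
}
```

Note how thin the command is; all of the sophistication lives on the receiving end.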

Since it’s custom-coded, Orlak might even have multiple stops on the rotary switch for different inflections of the same mode. For instance, “Try to win match” and “Throw match,” each with variables suited to that inflection.

Pictured: Botched move mode.

This design strategy leaves a lot up to the intelligence of the victim that isn’t specified by one of the mode/variable combinations, e.g., which wrestling move should I try next? How do I escape this oncoming chokeslam? But we’re working with a subjugated human intelligence here, and humans are used to working toward goals under difficult constraints. That’s, like, literally, life. So, if you can accept the speculative technology that controls a victim and passes interpretable instructions to them via a bracelet, then yeah, this remote control passes believability, even if it looks like a high-school theater prop.

The prop could be made better by hand-writing the modes along the stops of the rotary switch. And if it were a real product, labeling what the potentiometers control in the current mode would save the user from having to memorize it, or trial-and-error it. But please, let’s not make this thing a real-world anything.

The generalizable lesson is that when you are working with an agent of a certain sophistication, your users don’t have to specify everything, just the most important things. The agent can do the rest of the interpretation. (And if not, design a recovery mode.)
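In interface terms, that’s the difference between a step-level command and a goal-level one. A minimal sketch, with entirely hypothetical names:

```typescript
// Hypothetical goal-level request: the user names the goal and only the
// constraints they actually care about; the agent interprets the rest.
interface GoalRequest {
  goal: string;                            // e.g. "wrestle to win"
  constraints?: Record<string, unknown>;   // just the most important things
  onFailure?: () => void;                  // the recovery mode, designed up front
}

// A step-level interface would instead demand every joint angle and
// footstep; a goal-level one trusts the agent with the interpretation.
function dispatch(
  agent: { pursue(req: GoalRequest): boolean },
  req: GoalRequest
): void {
  const succeeded = agent.pursue(req);
  if (!succeeded) req.onFailure?.();       // never strand the user
}
```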

Again, not a robot

Note that this bit of apologetics only applies to Orlak’s human victims, with their general intelligence. Orlak also has a remote control for the Robot Asesino, but it’s much harder to see how the robot could lack language and yet have enough general intelligence to do what it must with those meager instructions.

No dials needed

A last note is that the script could have made the remote control unnecessary. If the film had explained that the bracelet puts the subject into a state of passive suggestibility, then Orlak could have relied on his subjects’ (and the audience’s) familiarity with language, issuing spoken instructions for his automatons to follow and bypassing the ridiculousness of this interface. But. You know. Then the writers would have had to let Gaby save the day through some means other than kicking the remote. And this is not that movie.

Untold AI: Poster

As of this posting, the Untold AI analysis stands at 11 posts and around 17,000 words. (And there are a few more yet to come. Probably.) That’s a lot to try to keep in your head. To help you see and reflect on the big picture, I present…a big picture.

click for a larger image

A tour

This data visualization has five main parts. And while I tried to design them to be understandable from the graphic alone, it’s worth giving a little tour anyway.

  1. On the left are two sci-fi columns connected by Sankey-ish lines. The first lists the sci-fi movies and TV shows in the survey. The first ten are those that adhere to the science; otherwise, they are in no particular order. The second column shows the list of takeaways. The takeaways are color-coded and ordered by severity. The type size reflects how many times each takeaway appears in the survey. The topmost takeaways are those that connect to imperatives. The bottommost are those that do not. The lines inherit the takeaway color, which enables a close inspection of a show’s node to see whether its takeaways are largely positive or negative.
  2. On the right are two manifesto columns connected by Sankey-ish lines. The right column shows the manifestos included in the analysis. The left column lists the imperatives found in the manifestos. The manifestos are in alphabetical order. Their node sizes reflect the number of imperatives they contain. The imperatives are color-coded and clustered according to five supercategories, as shown just below the middle of the poster. The topmost imperatives are those that connect to takeaways. The bottommost are those that do not. The lines inherit the color of the imperative, which enables a close inspection of a manifesto’s node to see which supercategories of imperatives it suggests. The lines connected to each manifesto are divided into two groups, the topmost being those that connect through to takeaways and the bottommost those that do not. This enables an additional reading of how much a given manifesto’s suggestions are represented in the survey.
  3. The area between the takeaways and imperatives contains connecting lines, showing the mapping between them. These lines fade from the color of the takeaway to the color of the imperative. This area also labels the three kinds of connections. The first are those connections between takeaways and imperatives. The second are those takeaways unconnected to imperatives, which are the “Pure Fiction” takeaways that aren’t of concern to the manifestos. The last are those imperatives unconnected to takeaways, the collection of 29 Untold AI imperatives that are the answer to the question posed at the top of the poster.
  4. Just below the big Sankey columns are the five supercategories of Untold AI. Each has a title, a broad description, and a pie chart. The pie chart highlights the portion of imperatives in that supercategory that aren’t seen in the survey, and the caption for the pie chart posits a reason why sci-fi plays out the way it does against the AI science.
  5. At the very bottom of the poster are four tidbits of information that fall out of the larger analysis: Thumbnails of the top 10 shows with AI that stick to the science, the number of shows with AI over time, the production country data, and the aggregate tone over time.

You’ve seen all of this in the posts, but seeing it all together like this encourages a different kind of reflection about it.

Interactive, someday?

Note that it is possible but quite hard to trace the threads leading from, say, a movie to its takeaways to its imperatives to its manifestos, unless you are looking at a very high-resolution version of it. One solution would be to make the visualization interactive, such that rolling over one node in the diagram would fade all non-connected nodes and lines, and data-brush any related bits below. (A sketch of that hover logic follows.)
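If that interactive version ever gets built, the core interaction is surprisingly small. A minimal sketch in D3, assuming each node carries an id and the links record source/target ids; every selector and field name here is hypothetical:

```typescript
import * as d3 from "d3";

// Hypothetical link shape as it might come out of the Sankey data.
interface Link { source: string; target: string; }

function wireHover(links: Link[]): void {
  // Build an adjacency set once, so each hover is a cheap lookup.
  const neighbors = new Map<string, Set<string>>();
  for (const { source, target } of links) {
    if (!neighbors.has(source)) neighbors.set(source, new Set());
    if (!neighbors.has(target)) neighbors.set(target, new Set());
    neighbors.get(source)!.add(target);
    neighbors.get(target)!.add(source);
  }

  d3.selectAll<SVGGElement, { id: string }>(".node")
    .on("mouseover", (_event, d) => {
      const keep = neighbors.get(d.id) ?? new Set<string>();
      keep.add(d.id);
      // Fade everything not connected to the hovered node.
      d3.selectAll<SVGGElement, { id: string }>(".node")
        .attr("opacity", (n) => (keep.has(n.id) ? 1 : 0.1));
      d3.selectAll<SVGPathElement, Link>(".link")
        .attr("opacity", (l) =>
          l.source === d.id || l.target === d.id ? 1 : 0.1);
    })
    .on("mouseout", () => {
      d3.selectAll(".node").attr("opacity", 1);
      d3.selectAll(".link").attr("opacity", 1);
    });
}
```

Tracing a full thread (movie through takeaway through imperative to manifesto) would need transitive connectivity rather than immediate neighbors, but the fade-and-brush skeleton is the same.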

A second solution is to print the thing out very large so you can trace these threads with your finger. I’m a big enough nerd that I enjoy poring over this thing in print, so for those who are like me, I’ve made it available via redbubble. I’d recommend the 22×33 if you have good eyesight and can handle small print, or the 31×46 max size otherwise.

Enjoy!

Maybe, if I find funding, or somehow more time and programming expertise, I can make that interactive version myself.

Some new bits

Sharp-eyed readers may note that there are some nodes in there that weren’t in the prior posts! These come from late-breaking entries, late-breaking realizations, and my finally including the manifesto I was party to.

  • Sundar Pichai published the Google AI Principles just last month, so I worked it in.
  • I finally worked the Juvet Agenda in as a manifesto. (Repeating disclosure: I was one of its authors.) It was hard work, but I’m glad I did it, because it turns out it’s the most-connected manifesto of the lot. (Go, team!)
  • The Juvet Agenda also made me realize that I needed new, related nodes for both takeaways and imperatives: AI will enable or require new models of governance. (A fair number of movies connect to it, too.) See the detailed graph for the movies and how everything connects.

A colophon of sorts

  • The data of course was housed in Google Sheets
  • The original Sankey SVG was produced in Flourish
  • I modified the Flourish SVG, added the rest of the data, and did final layout in Adobe Illustrator
  • The poster’s type is mostly Sentinel, a font from Hoefler & Co., because I think it’s lovely and highly readable, and because Sentinels are also a sci-fi AI.