Where we are: To talk about how sci-fi AI attributes correlate, we first have to understand how their attributes are distributed. In the first distribution post, I presented the foundational distributions for sex and gender presentation across sci-fi AI. Today we’ll discuss embodiment.
Another simple measurement is how the AIs are embodied. That is, how do they manifest in the world of the story (or diegesis): Are they walking around, appearing as a screen on a wall, or as pulsing stars in the cosmos?
The categories that emerged from the survey were as follows:
Virtual, where a character only had, for example, a body or face that was generated for presentation to other characters on a screen or via volumetric projection. Joi from Blade Runner 2049 is virtual.
Disembodied, if the AI doesn’t have a particular embodiment, or has only an ad-hoc one. The Machine from Person of Interest is disembodied.
Personal computer: Edgar from Electric Dreams is embodied as a personal computer. In this regard, Edgar is sui generis, a category containing only one example.
Architectural: Some AIs are stuck to the walls of a building. HAL 9000 from 2001: A Space Odyssey is architectural.
Vehicular, where a character is embodied in a vehicle of some sort. K.I.T.T. from Knight Rider is vehicular.
Zoomorphic robot, where the robot is built to look something like an animal. Often these characters do not have a voice. Muffit from the original Battlestar Galactica television series is an example.
Mechanical robot, where the robot is mechanical (and more mechanical looking than humanoid looking). WALL·E is mechanical.
Anthropomorphic robot, where the robot is proportioned like a human, and has nearly all the surface features of a human, but is readily identifiable as a robot. The Iron Giant is anthropomorphic.
Indistinguishable from human, where the robot can “pass” as a human. Only detailed or violent inspection will reveal it to be non-human. Aida from Agents of S.H.I.E.L.D. is indistinguishable from human.
Here’s what that looks like in a bar chart.
Sometimes the details are tricksy
Sci-fi can make these things tricky. For example, the virtual crewmembers of the U.S.S. Callister might be considered indistinguishable from humans—as long as they are wearing clothes. Their unfortunate captain (and captor) had them created in virtual space such that they had no genitals. They are listed as bodily male and bodily female (rather than biologically) even though they are also indistinguishable from human.
Similarly, David from Prometheus has a fingerprint with a subtle Weyland-Yutani logo maker’s mark built into it (see the image below), but since this would only be apparent to someone who knew exactly where to look and for what, David is also listed as indistinguishable from human.
He just has to find crimes that don’t involve fingerprints.
Why so human?
My conjecture to explain the high number of AIs that are indistinguishable from human is threefold.
First, it is a matter of production convenience—that is, it is much easier and cheaper to insert a line of dialogue that establishes a character as a human-looking robot, rather than any of the other ways of signaling robotic-ness:
Create a costume like Robby the Robot
Make a puppet like Teddy from A.I. Artificial Intelligence
Do prosthetic makeup like The Terminator
Create a set piece that syncs with audio like Alphy from Barbarella
Produce special effects, like Ava from Ex Machina
There’s also a fit-to-media argument, which notes that people are much better at, and more comfortable with, reading the emotional states of people than those of machines. If catharsis, or the emotional journey, is part of what the art is about, humans work as a medium. (This lack of emotional information in interfaces was played to great effect in 2001: A Space Odyssey, unnerving us with the psychopathy of HAL’s unblinking eye.) Actors, too (I highly suspect) enjoy using their bodies, voices, and faces to do their jobs without the additional layers of prosthetics or puppetry. So we would expect an overweighting of indistinguishable-from-human characters because they are often the best tools for the narrative job, from both the audience’s and the actor’s perspective.
Not a lot of emotive potential here.
There’s another argument—a genre-and-narrative argument—that people are mostly interested in stories about people, and most sci-fi is a speculation about social effects rather than actual technology, and so indistinguishable robots are the best embodiment of what we’re interested in, anyway. Humans, just with different rules.
In the first post of this series, I explained what I was out to learn, what I looked at, and how I tagged it. Ultimately, we want to look at the data and be able to answer questions like “Are female AIs more subservient than male AIs?” And in order to do that, we first have to understand what the distributions are for sex and subservience. So let’s talk distributions.
Distribution is a fancy term for how many of each value we see for a given attribute. For example, if we wanted to look at the distribution of eye color across the world, we would count how many browns, blues, hazels, ambers, greens, grays, and reds we see (finding a way to deal with heterochromia, etc.) and compare them in a bar chart.
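To make “distribution” concrete, here is a minimal sketch in Python. The eye-color values are made up for illustration; the actual survey data lives in the live Google Sheet.

```python
from collections import Counter

# Hypothetical eye-color observations, standing in for any survey attribute.
eyes = ["brown", "blue", "brown", "hazel", "brown", "green", "blue"]

# The distribution is just a count of each value we see.
distribution = Counter(eyes)
print(distribution.most_common())
# [('brown', 3), ('blue', 2), ('hazel', 1), ('green', 1)]
```

Those counts are exactly what a bar chart of the distribution plots: one bar per value.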
Of course eye color is not of interest in this case. For Gendered AI, we are interested in comparing other attributes to gender presentation. We’ll look at the other attributes in later posts, but we’re going to begin with sex ratio, and that will fill up a post all its own.
Simple sex ratio
Author’s request: With that section title I know some hackles are already raised. Please know this is a very tough space to write for. Despite having paid for a number of content reviews, I may have made some missteps. I am a n00b writer on these topics, and I respond best to friendly engagement rather than a digital pillory.
The very simple explanation of sex ratio is women-to-men. But of course that’s waaaaay too simple for either the real world or our purposes. At the very (very) least, AI might have no gender, so we need a “none” or “other” category. Let’s start with these very oversimplified numbers and move to more detailed ones later.
The chart below shows the data from the survey, focusing on simple categories of female, other, and male. It shows that AI characters are strongly overweighted male, with a rough ratio of 2 male : 1 female : 0.75 other. The 2:1 M:F ratio is eerily in line with the USC Signal Analysis and Interpretation Laboratory’s finding that, across the speaking roles in the 1,000 scripts they studied, men’s dialogue, and even the number of male characters, was double (or more) that for women. This is greatly different from the real-world sex ratio of roughly 1:1, as reported in the Wikipedia article about world sex ratios.
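Expressing counts as a ratio just means dividing each count by a reference category. Here is a sketch with hypothetical counts chosen to mirror the rough proportions above; the real numbers are in the live sheet.

```python
# Hypothetical counts shaped like the post's rough 2 : 1 : 0.75 ratio.
counts = {"male": 160, "female": 80, "other": 60}

# Normalize against the female count to express the ratio used above.
reference = counts["female"]
ratio = {k: round(v / reference, 2) for k, v in counts.items()}
print(ratio)  # {'male': 2.0, 'female': 1.0, 'other': 0.75}
```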
I would talk about the weird discrepancies of just this distribution, but any ranting at this point would be overshadowed by the ranting that happens next. Deep breath.
Having an “other” category isn’t enough. After all, characters in one of these bars can be as different as HAL and Gigolo Joe, and that doesn’t seem right. So, let’s break this oversimplification down into more refined bits.
More detailed gender presentation ratios
First, of course, we should note that characters rarely discuss gender directly, and—at least in this sample—discuss gender dysphoria all of never. Also we can’t reach out to ask any of them directly since they’re fictional. So when I speak of gender, it should be read as “gender presentation,” and unfortunately at this point you are stuck with nothing more scientific than my reading of the following four variables.
Primary sex characteristics, or biological presentation: The presence of masculine or feminine sexual organs. None of the titles I reviewed were pornographic, and full-frontal nudity is pretty rare up until Westworld, so this often comes down to implication. Gigolo Joe, for instance, could not do what must be a key part of his primary function without male sex organs (with all the important caveats that penetrative sex is just one kind of sex), so he is listed as “Masculine” here.
Secondary sex characteristics, or body presentation: These are much more directly observable, and include those other markers of sex, like facial hair and shoulder-to-hip ratio.
Voice presentation: This is my hearing of whether the voice has a lower, masculine register, or a higher, feminine register. (In a few cases I checked on the actor listing in IMDB and did web searches for evidence of self-identification.)
Pronoun presentation: How other characters refer to the AI character with pronouns. R2D2, for instance, has absolutely no sex characteristics, and no voice, but is still referred to as a “he” throughout the Star Wars franchise.
A note on labeling: I’m aware that there are tricky nuances in the labels. After all, how is body not part of one’s biology? But the shorthand proves useful: we can write “BIO” and know what it means, instead of always having to use the longer phrase “implicit or explicit primary sex characteristics.”
For each AI character, I tagged each of these variables as either Masculine, Fluid, Neutral, Feminine, Unknown, Multiple, Many, or N/A. (The “n/a” may seem weird, but for instance, HAL doesn’t have a body, so primary and secondary sex characteristics are not applicable.)
Socially male, but existentially neutral.
Combining voice and pronouns into “social”
There are plenty of characters with no voice or non-human voices, and a few characters that are not referred to by pronoun. Since these two indicate a social performance of gender, I treated them in the algorithms as an “OR” when considering stacking. That means if either variable was present, and they didn’t contradict each other, I counted it as the presenting aspect. Compare these two examples…
R2D2: N/A Primary, N/A Secondary, neutral voice, male pronoun = also socially male
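That OR rule can be sketched as a small function. This is my reconstruction of the logic described above, not the author’s actual tagging code, and the value labels are illustrative.

```python
def social_gender(voice, pronoun):
    """Combine voice and pronoun presentation into one 'social' value."""
    gendered = {"masculine", "feminine"}
    signals = {v for v in (voice, pronoun) if v in gendered}
    if len(signals) == 1:
        return signals.pop()      # either variable alone is enough
    if len(signals) == 2:
        return "contradictory"    # would need a human judgment call
    return "neutral"              # neither variable carries gender

# R2D2: neutral voice, masculine pronoun -> socially masculine
print(social_gender("neutral", "masculine"))  # masculine
```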
They stack
The main thing to note about how these three variables (counting voice and pronouns as “socially”) played out is that they overwhelmingly stacked. That’s not a term of art, so let me explain. It means that if a character has masculine primary sex characteristics, that invariably meant that he also had masculine secondary sex characteristics, and voice/pronouns. If a character had no evidence of primary sex characteristics, but had feminine secondary sex characteristics, she invariably had feminine voice/pronouns.
It makes more sense if I show you. So, here are six representative examples from the survey of how this monosex stacking looks.
I suspect this is an effect of binary concepts of gender on the part of the makers of the sci-fi, implemented as increasingly detailed costumes for the AI. But when you consider these variables, these 6 are a pale semblance of what could be. Include “fluid” or “nonbinary” as a possibility, and don’t bother with stacking, and there are 58 more possible combinations of these variables.
Click the image for a full-screen spread of possibilities.
Hey, want to feel both hyper-reductive and overwhelmed at the complexity of gender? Try writing a categorization algorithm for analysis.
Anyway, if they hadn’t stacked like they did, I would have had to describe their genders with a four-part code that would result in 64 genders. But because they do stack, that meant there were these 6, plus “multiple,” “genderfluid,” “neutral,” and “none,” for a total of 9. Note that online lists of genders vary from the 58 available to Facebook’s users to the 229 found on this more creative list (my favorite is “Schrodigender – A gender which you can both feel and not feel,” which gives a clue to how serious that particular list is). So while 9 can feel heavy, it does not compare to the complexity of the real world.
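One way to arrive at the 64 figure: if the three stacking variables (biological, bodily, and the combined voice-and-pronoun “social”) each take one of four values, the unstacked possibility space is 4³ = 64. This is a guess at the combinatorics behind that number, with an assumed value set.

```python
from itertools import product

# Assumed four values per variable; the survey's actual tag set has more
# options, but four per variable reproduces the 64 combinations above.
values = ["masculine", "fluid", "neutral", "feminine"]

# Three variables: biological, bodily, and (voice + pronoun) social.
combos = list(product(values, repeat=3))
print(len(combos))  # 64
```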
OK, given those descriptions of the subcategories, here’s how the numbers played out in the much more detailed analysis of gender presentation in sci-fi AI.
Detailed gender presentation
I’ve noted that we’re here for the correlations, not distributions, but in and of itself, this is remarkable. The subcategories provide a deeper (and more troubling) look into the data, which is necessary because these categories have to be thought of differently. Observe, for example, that the biologically gendered characters are nearly at parity, while the bodily and socially gendered characters skew male. There is a frustrating 2:1 ratio for bodily male:bodily female and an infuriating 5:1 for socially male:socially female.
These ratios bear…discussion.
1 biologically male : 1 biologically female
A harsh interpretation of this stat would read a kind of heterosexual panic, where—when sex or procreation is involved—Hollywood needs to assert loudly over a hastily-ordered beer that whoa whoa whoa: Only AI chicks and AI dudes get it on. Or if they do get it on with people it’s with the right gender.
Or, more charitably I suppose, humans are largely heterosexual, and since there is a rough 1:1 sex ratio in humans, there should be a 1:1 sex ratio in them. (?) It’s a hard thing to second-guess.
It gets darker in the other categories where the sci-fi AI has a body but no biological apparatus. The ratios still skew heavily male. As if, when it comes to just being a person, a total sausagefest is the norm.
I await the disturbing fanfic.
2 bodily male : 1 bodily female
Recall from above that this category is reserved for those AI characters that present a gendered body but do not have gendered reproductive or sexual capabilities. We will discuss the germane-ness and embodiment of these AIs in a later post, but for now we can note that this category of AI character, with its 2:1 ratio, is roughly in the middle between the biologically and socially gendered categories, and in line with the oversimplified distribution seen above.
5 socially male : 1 socially female
This is the category where the only markers of gender are voice and pronouns. In other words, characters for whom a gender seems like an arbitrary choice. WTF is up with a 5:1 ratio? Why are all these “arbitrarily” gendered AI characters guys? We’ll talk about germaneness to the story later, but I want to see if there is some extradiegetic reason first.
Is it the available voice talent?
We have to acknowledge that filmmakers must hire someone to voice their speaking AI characters, even if there are no other markers. Despite the fact that…
…it’s fair to say that most available voice talent is recognizably gendered, and the AI character may just inherit the presentation of its actor. Then you might expect the roles to match the sex ratios in the available talent pool. I couldn’t find any formal studies of this, so I created a throwaway account on voice.com—a major job site for voice actors—and performed separate searches for male and female talent. There I found 42,786 male and 24,347 female non-union voice actors, around 2:1. (Union actors were closer to 1:1, with 3,079 male and 2,336 female. n.b. The site gives only those two gender options in its search.) Though that’s more anecdotal than I’d like, even the worse ratio of 2:1 still pales compared to the 5:1 of socially gendered AI, so no, that’s not it. You might think that explains the “simply” gendered characters, but my suspicion is that the genders of the characters are set in the script and pass down through the process, unquestioned after that.
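For what it’s worth, the arithmetic on those voice.com figures:

```python
# Non-union voice-actor counts quoted above from voice.com searches.
male, female = 42_786, 24_347
nonunion_ratio = round(male / female, 2)

# Union counts were much closer to parity.
union_ratio = round(3_079 / 2_336, 2)

print(nonunion_ratio, union_ratio)  # 1.76 1.32
```

So the worst-case talent-pool skew is under 2:1, well short of the 5:1 seen in socially gendered AI.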
Is it what sci-fi audiences want?
Might the ratio be some sales rationale, some presumption that sci-fi audiences are mostly men and therefore might only be more interested in male characters? No, of course numbers vary by show and genre, but this article by Victoria McNally shows that there is only a slight majority of men in these audiences (hovering around 60% male and 40% female, rather than the 83% male and 17% female that the 5:1 socially gendered ratio would have you believe.)
Plus the 2018 annual Hollywood Diversity Report by UCLA shows that “new evidence from 2015–16 suggests that America’s increasingly diverse audiences prefer diverse film and television content,” so we would have to greatly exaggerate the connection between the sex ratio of the audience and those we see here.
There has to be some other reason, and I suspect it’s the dark patriarchal notion that “male” is somehow the default gender. Even though it is, literally, not.
Is it that Hollywood itself is mostly white and male?
The 2018 Hollywood Diversity Report shows that gatekeepers, writers, directors, and (points at self) critics are still overwhelmingly white and male. White male writers and directors account for 91.9% and 86.2% of their fields, respectively. This is closer to the heavy male skew we see in socially gendered AI, but still a crappy, crappy excuse for the default assignment of AI as male. Representation matters, and this is sorry representation.
P.S. Don’t get uppity, real world
The Global Gender Gap Report issued on 17 DEC 2018 by the World Economic Forum showed (in collaboration with LinkedIn) that women occupy only 22% of jobs in AI professions. (See pages viii and 28–35 of that report.)
So yeah.
Pictured: Sci-fi AI, mostly
You probably had a general sense of this disparity from simply being an audience member. But it’s “nice” to have some data to back it up. Be forewarned: It gets worse when we look at correlations. (No, really.) But before we do that, we should look at the rest of the distributions, starting with embodiment in the next post.
Men are machines. Women are bodies. Male is extreme. Women are nuance. General AI has gender. Other AI does not. Male is free-will. Machine is subservience. Male is default. Women when it’s necessary.
At least in screen sci-fi.
Let me explain.
In November of 2018, a tweet thread between Chris Geison and Kathy Baxter called my attention to questions about the gender of AI in sci-fi. Baxter noted that most AI is male, and how female AI is often quite subservient or sexualized. In this thread, Geison added Cathy Pearl’s observation that embodied AI is often female and male is more often disembodied and regarded as a peer.
I already had a “database” (read: Google Sheet) of AI in screen sci-fi from Untold AI, my 2018 study of the stories screen sci-fi doesn’t tell, but should. So, I thought I could provide some formal analysis to this Gendered AI discussion. To that end I’ve added around 325 AI characters to the Google Sheet, and run some analyses. This series of posts will break it all down for you.
Oh, we’ll come back to this little “guy.”
Now, it can get a little dry to talk about percentages and comparisons and distributions, so I’m going to do my best to keep tying things back to the shows and the characters and the upshot of all this analysis. But the way we get to that upshot is through the numbers, so stick with me. For this first post, I’m going to share what I captured, and what counts as an AI character for purposes of this study.
The following is true in the survey as of 08 APR 2019. The live data, available in Google Sheets, may be updated from this.
The data set
327 AI characters from science fiction (see the full list in the live sheet)
Movies and television shows from 1927 (Metropolis) to 2018 (Upgrade)
Call to action: Of course I missed some movies and TV shows. Add them in the comments, including a link to their IMDB page.
The survey that drives this site has always focused on screen sci-fi for its ability to depict interfaces that can be reviewed. Literature is much more free to experiment with ideas than screen sci-fi, and so will have lots of additional examples, but won’t appear in the survey.
Each character is tagged multiple ways. More detail on particular attributes below.
Movie or Show Title and Episode if appropriate
Year
Name
Embodiment
Physicality/Virtuality
Gender Presentation (which is a roll-up of four separately tracked variables)
Appearance or evidence of primary sex characteristics
Appearance of secondary sex characteristics
Voice
Pronouns used by other characters
Subservience to humans
Germane-ness of gender (more on this in its own section)
Goodness
If not free-willed, the gender of the master
Category of AI (Narrow, General, or Super)
Whether their gender presentation changes over time
Genesis, or how the AI came to be. This is mostly used to distinguish AI that are copies of humans (whose gender would thereby be inherited).
Call to action: If you think there’s some critical attribute that I’m missing, pipe up in the comments. I can’t promise I can get to it before the next post, but I can consider it as a future enhancement.
Yes, but which Skynet?
With the exception of the flag marking changed genders, when characters change other attributes over the course of their stories, they are tagged for their final state. For example, the Maschinenmensch from Metropolis begins as an anthropomorphic robot, but after Rotwang transfers Maria’s likeness to it, it becomes indistinguishable from human, and so is tagged as such.
If you’re looking at the Sheets data, you’ll see that text values have corresponding numerical columns to allow for easy sorting and graphing of the data, but I tried to gray them out so they don’t distract from a reading of the raw data.
Full disclosure: Possible problems with this data
Sci-fi is a vast supergenre. There are certainly examples missing from the survey, so it should not be regarded as exhaustive. (I tried to get as many as I could.)
I generally target well-known examples rather than limited-release or student projects.
The sci-fi interfaces blog usually eschews comedy that routinely breaks the 4th wall (e.g., Spaceballs), as this makes for very complicated analysis, and so the survey will be missing these examples as well.
I only speak English fluently, and so have only reviewed shows in English, with English dubbing, or with English subtitling.
I am not a data scientist. I’m a smart guy who tried his best, but may have made some errors in the formulas.
I am not an expert in gender issues. I may make unintentional errors in discussing or categorizing genders, use insensitive language, or have naive errors in my thinking. I have engaged a professional sensitivity review, but of course they might not catch everything, either.
I am a progressive, liberal, (imperfect, see above) feminist. Though I tried not to, my bias may have colored how I coded the examples and of course the interpretation of this data.
I have to go on a LOT of common-case presumptions. For example, men can have breasts for many reasons, but I used the presence of breasts as one marker of female-ness. I suspect this is a disservice to the real complexity of gender and sex in the world, but presuming the audience sees gender as primarily binary, it marks how these characters are likely perceived rather than what they are.
I’m not too worried about these caveats, though, since what we’re aiming for here isn’t precision engineering specs, but rather to get a numbers-based sense of the big patterns in screen sci-fi, and for that, a little bit of noise in the numbers is OK.
Lastly, not every character that you think might qualify does, so I should explain my rationale for what got in and what got left out.
What counts as an AI character?
I’ve tried to be strict about what counts as AI in that the intelligence of the character must be housed in non-biological circuitry. This leaves out some characters that on a cursory consideration would seem like a natural fit. For an example, compare The Stepford Wives (1975) and The Stepford Wives (2004). The wives in the original were robots through and through—mechanical, lookalike replacements of the original humans. But the wives in the remake were cyborgs, with robotic bodies housing their original, human brains. This means that in the original, the wives count as AI and appear in the survey. But because of this cyborg technicality, none of the “robotic” characters from the remake make it in. Not even the little cyborg dog.
Meanwhile, Rachel and Deckard, replicants from the Blade Runner universe, had a baby (according to Blade Runner 2049), so we can generalize and say replicants are capable of wholly biological reproductive acts. Given this you might think they’re out of the survey, but since they are fabricated, they make it in.
Also, T-800 Terminators (the Arnold kind) get in, because even with their wetware bodies, the intelligence they carry is non-biological.
I know, it’s complex and sometimes counter-intuitive. Such is data.
OK, so looking at those attributes for those characters, the first thing we should look at is the distributions. This includes all sorts of questions like: How many AI present as men? How many as women? How many are nonbinary? What kinds of bodies do they have? Who is master of whom?
It’s thrilling, thrilling data analysis action, so stay tuned.
Now we come to the end of Idiocracy, if not yet the idiocracy.
This film never got broad release. There are stories about its being suppressed by the studio because of the way the film treated brands.
I don’t know what they’re talking about.
But whatever the reason, I’m happy to do my part in helping it get more awareness. Because despite its expositive principle being wrong (and maybe slightly eugenic), the film illustrates frustrations I also have with some of the world’s stupider ills, and does so in funny ways. Also, as I noted in the last writeup, it even illustrates speculative and far-reaching issues with superintelligence. So, it’s smarter than it looks.
I’d recommend lots and lots more people see this, generally, if only to reinforce the demonization of idiocy and make more people want to be not that. So first let me say: If you haven’t yet, see the film. Help others see it. Make People Valorize Enlightenment Again.
Now, let’s turn to the interfaces.
Sci: B (3 of 4) How believable are the interfaces?
This rating is tough. After all, the interfaces are appropriately idiotic. But, we have to ask: Are they the right kind of idiotic, given a diegesis where everyone is a moron and civilization is propped up by technologies created by smart people who died off? Well…mostly.
The FloorMaster is a believable example of narrow AI breaking down. The Carl’s Junior, Insurance Slot machine, and OmniBro are all believable once you accept that part of the Idiocracy is an inhumane, hypercapitalist panopticon. The IQ test has problems, like most do. The Time Masheen is believably an older ride that has had its dioramas replaced by the idiots. These are all believable.
The sleeping pods are in between. As a prototype, you might expect the unlabeled interface and lack of niceties. But the pods break believability by magically having enough resources (e.g., five billion calories between them) to keep their occupants alive and healthy for 500 times their initially-planned run.
And some of the interfaces just could not have been created either by the dead, smart people, or the idiots. These are technology jokes that break the fourth wall, and earn it the grade it gets.
Fi: A+ (4 of 4) How well do the interfaces inform the narrative of the story?
The film knocks this out of the park. The interfaces are a key part of illustrating how it is that idiots manage to survive at all, and how stupidity from the top-down and the bottom-up gets into everything. Just fantastic.
Everything.
Interfaces: B (3 of 4) How well do the interfaces equip the characters to achieve their goals?
This one is also complicated. The interfaces almost universally serve to thwart the users, but we have to cut them some slack, because that’s part of their narrative point. (See, this is why it’s so difficult to review comedy.)
For instance, the Healthmaster Inferno likely does more to infect patients than to help cure them. (This has a historical precedent, as doctors used to reject the notion that they had to wash their hands between patients because harumph they were gentlemen and gentlemen are clean.) And while this is terrible usability, with no affordances, constraints, or safeguards, if the technology had worked, it wouldn’t help tell such a funny and disturbing story.
Then there are technologies like the St. God’s Intake interface that would pass a usability test, but serve to keep their users as mere babysitters for a technology that does the work, and would serve to keep them stuck in the same job, never improving. Come to think of it, this is a metaphor for the role of technology in the film: It just serves to keep them stupid by trying to provide everything for them. That’s a thought with troubling implications, unless we go about it smartly.
And, hilariously, there is one function in the film that is particularly brilliant, and points out how prudish we are not to implement it today. (The fart fan.)
Anyway, the tech that is broken is so obviously broken (the IPPA machine being perhaps the best example) that I’m not counting this against the film’s Interfaces ratings. Real world designers should not mimic these or draw inspiration, but the stupidity is so deliberate and apparent, I don’t believe anyone would. In fact, the film leads them to look for why the technologies are stupid and do not that, so it scores high marks.
Final Grade A- (10 of 12), Blockbuster.
Good job, team Idiocracy.
A quick note to close out this set of reviews. People who like Idiocracy may be interested to know it is a spiritual inheritor of a 1951 story called The Marching Morons. The text hasn’t aged well, but it’s still worth a read if you liked this movie. Similar premise, similar difficulties.
Compare freely
“We need the rockets and trick speedometers and cities because, while you and your kind were being prudent and foresighted and not having children, the migrant workers, slum dwellers and tenant farmers were shiftlessly and short-sightedly having children—breeding, breeding. My God, how they bred!”
The Marching Morons, by C.M. Kornbluth, 1951
This short story is nearly 70 years old. I’m just going to guess that since intelligence is relative, even as average intelligence continues to rise, there will always be grousing by the intelligent about the less intelligent. And I think I’m OK with that. Or at least, the effects of it. I hope you are, too.
It seemed grotesquely prescient in regard to the USA leading up to the elections of 2016.
I wanted to do what I could to fight the Idiocracy in 2018 using my available platform.
But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try to get you to re/watch this film, because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.
Not the obvious AIs
There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall that when Joe convinces the cabinet that he can talk to plants, and that they really want to drink water…well, let’s let the narrator from the film explain…
NARRATOR
Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.
At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”
The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.
I also take it as a given that AI writes the speeches that King Camacho reads because who else could it be? These people are idiots who don’t understand the difference between government and corporations, of course they would want to run the government like a corporation because it has better ads. And since AIs run the corporations in Idiocracy…
No. I don’t mean those AIs. I mean that you should rewatch the film understanding that Joe and Rita, the lead characters, are Super AIs in the context of Idiocracy.
The protagonists are super AIs
The literature distinguishes between three supercategories of artificial intelligence.
Narrow AI, which is the AI we have in the world now. It’s much better than humans in some narrow domain. But it can’t handle new situations. You can’t ask a roboinvestor to help plan a meal, for example, even though it’s very very good at investing.
General AI, definitionally meaning “human-like” in its ability to generalize from one domain of knowledge to handle novel situations. If this exists in the world, it’s being kept very secret. It probably does not.
Super AI, the intelligence of which dwarfs our own. Again, this probably doesn’t exist in the world, but if it did, it’s being kept very secret. Or maybe it’s even keeping itself secret. The difference between a bird’s intelligence and a human’s is a good way to think about the difference between our intelligence and a superintelligence. It will be able to out-think us at every step. We may not even be able to understand the language in which it asks its questions.
Illustration by the author (often used when discussing agentive technology).
Now the connection to Joe and Rita should be apparent. Though theirs is not an artificial intelligence, the difference between their smarts and that of Idiocracy approaches that same uncanny scale.
Watch how Joe and Rita move through this world. They are routinely flabbergasted at the stupidity around them. People are pointlessly belligerent, distractedly crass, easily manipulated, guided only by their base instincts, desperate to not appear “faggy,” and guffawing about (and cheering on) horrific violence. Rita and Joe are not especially smart by our standards, but they can outthink everyone around them by orders of magnitude, and that’s (comparatively) super AI.
The people of Idiocracy have idioted themselves into a genuine ecological crisis. They need to stop poisoning their environment because, at the very least, it’s killing them. But what about jobs! What about profits! Does this sound familiar?
Pictured: Us.
Joe doesn’t have any problem figuring out what’s wrong. He just tastes what’s being sprayed in the fields, and it’s obvious to him. His biggest problem is that the people he’s trying to serve are too dumb to understand the explanation (much less their culpability). He has to lie and feed them some bullshit reason and then manage people’s frustration that it doesn’t work instantly, even though he knows and we know it will work given time.
In this role as superintelligences, our two protagonists illustrate key critical concerns we have about superintelligent AIs:
Economic control
Social manipulation
Uncontainability
Cooperation between “multis”
Economic control
Rita finds it trivially easy to bilk one idiot out of money and gain economic power. She could use her easy lucre to, in turn, control the people around her. Fortunately she is a benign superintelligence.
Yeah baby I could wait two days.
In Chapter 6 of the seminal work on the subject, Superintelligence, Nick Bostrom lists six superpowers that an ASI would work to gain in order to achieve its goals. The last of these he terms “economic productivity,” with which the ASI can “generate wealth which can be used to buy influence, services, resources (including hardware), etc.” This scene serves as a lovely illustration of that risk.
Of course you’re wondering what the other five are, so rather than making you go hunt for them…
Intelligence amplification, to bootstrap its own intelligence
Strategizing, to achieve distant goals and overcome intelligent opposition
Social manipulation, to leverage external resources by recruiting human support, to enable a boxed AI to persuade its gatekeepers to let it out, and to persuade states and organizations to adopt some course of action.
Hacking, so the AI can expropriate computational resources over the internet, exploit security holes to escape cybernetic confinement, steal financial resources, and hijack infrastructure like military robots, etc.
Technology research, to create a powerful military force, to create surveillance systems, and to enable automated space colonization.
Economic productivity, to generate wealth which can be used to buy influence, services, resources (including hardware), etc.
Social manipulation
Joe demonstrates the second of these, social manipulation, repeatedly throughout the film.
He convinces the cabinet to switch to watering crops by telling them he can talk to plants.
He convinces the guard to let him escape prison (more on this below).
Joe’s not perfect at it. Early in the film he tries using reason to convince the court of his innocence, and fails. Later he fails to convince the crowd to release him in Rehabilitation. An actual ASI would have an easier time of these things.
Uncontainability
The only way they contain Joe in the early part of the film is with a physical cage, and that doesn’t last long. He finds it trivially easy to escape their prison using, again, social manipulation.
JOE
Hi. Excuse me. I’m actually supposed to be getting out of prison today, sir.
GUARD
Yeah. You’re in the wrong line, dumb ass. Over there.
JOE
I’m sorry. I am being a big dumb ass. Sorry.
GUARD (to other guard)
Hey, uh, let this dumb ass through.
Eliezer Yudkowsky, Research Fellow at the Machine Intelligence Research Institute, has described the AI-Box problem, in which he illustrates the folly of thinking that we could contain a super AI. (Bostrom also cites him in Superintelligence.) Using only a text terminal, he argues, an ASI could convince even a well-motivated human to release it. He has even run social experiments in which one participant played the unwilling human while he played the ASI, and both times the human relented. And while Eliezer is a smart guy, he is not an ASI, which would have an even easier time of it. This scene illustrates how easily an ASI would thwart our attempts to cage it.
Cooperation between multis
Chapter 11 of Bostrom’s book focuses on how things might play out if, instead of only one ASI in the world (a “singleton”), there are many ASIs, or “multis.” (Colossus: The Forbin Project and Person of Interest also explore these scenarios with artificial superintelligences.)
In this light, Joe and Rita are multis who unite over shared circumstances and woes, and manage to help each other out in their struggle against the idiots. Whatever advantage the general intelligences have over an individual ASI (sheer numbers, mostly) is significantly diminished when the ASIs work together.
Note: In Bostrom’s telling, multis don’t necessarily stabilize each other; they just make things more complex and don’t solve the core principal-agent problem. But he does acknowledge that stable, voluntary cooperation is a possible scenario.
Cold comfort ending
At the end of Idiocracy, we can take some cold comfort that Rita and Joe have a moral sense, a sense of self-preservation, and sympathy for their fellow humans. All they wind up doing is becoming rulers of the world and living out their lives. (Oh god, are their kids von Neumann probes?) The implication is that, as smart as they are, they will still be outpopulated by the idiots of that world.
Imagine this story retold with Joe and Rita as psychopaths obsessed with making paper clips, armed with their superintelligent superpowers against our relative stupidity. The idiots would be enslaved to paper-clip making before they could ask whether or not it’s fake news.
Or even less abstractly, there is a deleted “stinger” scene at the end of some DVDs of the film where Rita’s pimp UPGRAYEDD somehow winds up waking up from his own hibernation chamber right there in 2505, and strolls confidently into town. The implied sequel would deal with an amoral ASI (UPGRAYEDD) hostile to its mostly-benevolent ASI leaders (Rita and Joe). It does not foretell fun times for the Idiocracy.
For me, this interpretation of the film is important to “redeem” it, since its big takeaway—that people are getting dumber over time—is known to be false. The Flynn Effect, named for its discoverer James R. Flynn, is the repeatedly confirmed observation that measurements of intelligence have been rising, roughly linearly, since measurements began. To be specific, the effect is not seen in general intelligence but rather in the subset of fluid, or analytical, intelligence measures. The rate is about 3 IQ points per decade.
Wait. What? How can this be? Given the world’s recent political regression (that kickstarted the series on fascism and even this review of Idiocracy) and constant news stories of the “Florida Man” sort, the assertion does not seem credible. But that’s probably just availability bias. Experts cite several factors that are probably contributing to the effect.
Better health
Better nutrition
More and better education
Rising standards of living
The thing that Idiocracy points to—people of lower intelligence outbreeding people of higher intelligence—turns out not to be an important factor. Given the effect, this story might be better told not about a time traveler heading forwards, but rather one heading backwards to some earlier era. Think Idiocracy, but amongst idiots of the Renaissance.
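To get a feel for the scale of the effect, here is a back-of-envelope sketch in Python. The 3-points-per-decade rate comes from the discussion above; the assumption that the effect would continue linearly for 500 years is mine, and is there purely for illustration.

```python
# Illustrative arithmetic for the Flynn effect (assumed ~3 IQ points
# per decade, held constant; a naive linear extrapolation).
RATE_PER_DECADE = 3

def cumulative_gain(start_year, end_year, rate=RATE_PER_DECADE):
    """Points gained on fluid-intelligence measures between two years."""
    return (end_year - start_year) / 10 * rate

# Joe hibernates from 2005 to 2505 in Idiocracy. If the Flynn effect
# simply continued, the population would have *gained* this many points:
print(cumulative_gain(2005, 2505))  # prints 150.0
```

Which is, of course, the exact opposite of the film’s premise.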
Since I know a lot of smart people who took this film to be an exposé of a dark universal pattern that, if true, would genuinely sour your worldview and dim your sense of hope, it seems important to share this.
So go back and rewatch this marvelous film, but this time, dismiss the doom and gloom of declining human intelligence, and watch instead how Idiocracy illustrates some key risks (if not all of them) that super artificial intelligence poses to the world. For it really is a marvelously accessible shorthand to some of the critical reasons we ought to be super cautious of the possibility.
In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: Namely, Joe’s tattoo and the distance-scanning vending machine.
It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:
When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”
Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.
It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.
This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem, i.e. locating the fugitive for apprehension; and at the ultimate goal, i.e. how a culture deals with crime.
A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.
Proximate problem: Finding the fugitive
The red scan is fast, but it’s very noticeable: the sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection, like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI clues showing it identifying, zooming in on, and confirming the barcode.
Yes, that’s a shout-out.
So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?
Goal chain
Why is the system locating him? To tell authorities so they can go there and apprehend him.
Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and the offender must be incarcerated.
Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than the defense of encroachment. Such lawful societies benefit from network effects.
The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.
Ultimate problem: Preventing crime
In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed state-of-the-art criminology findings and listed five conclusions about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.
Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.
How might we increase the evident chance of being caught?
Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
Another way might be media: making sure that potential criminals hear, through their networks, an overwhelming number of stories of criminals being captured. This could involve editorial choice, or even media manipulation: filtering to ensure that “got caught” narratives appear in feeds more often than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
The other way is to increase the sense of observation. And that leads us (as so many things do) to the panopticon.
The Elaboratory*
The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in letters to their father. Here is a useful illustration.
*Elaboratory was one of the alternate terms he suggested for the idea. It didn’t catch on since it didn’t have the looming all-seeing-eye ring of the other term.
Elevation, section, and plan as drawn by Willey Reveley, 1791
The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from an efficacy and economic standpoint.
“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”
—Jeremy Bentham
It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault in Discipline and Punish, who regarded it not as a money-saving design but as an illustration of the effect of power. Or maybe Orwell, who did not use the term, but extended the idea to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who in In the Age of the Smart Machine reconceived it for information technology in the work environment.
Umm…Carol? Why aren’t you at your centrifuge?
Benjamen Walker dedicates an episode of his podcast Theory of Everything to the argument that, as a metaphor, the panopticon needs to be put away, since…
It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
Bentham regarded it as a tool for behavior modification, but the metaphor today is rarely used to talk about how surveillance changes us and our identities; rather, it frames surveillance as a violation of privacy rights.
It’s a good series, check it out, and hat tip to Brother-from-a-Scottish-Mother John V Willshire for pointing me in its direction.
To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)
Guns are bad.
But then, Idiocracy
In Idiocracy, this interface—of the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently-visible—identifier. You are being watched. (Holy crap, now I have yet another reason to love Person of Interest. It’s adding to our collective media impression the notion of AI surveillance. Anyway…) In this scene, the scan is a clear signal that Joe and his co-offenders could see, which means they would tell their friends this story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.
Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their own and other people’s wrists keep sending a signal that everyone is being watched. It’s still crappy surveillance, which we don’t like for all the usual reasons, but it illustrates why stealth detection may not be the ideal for crime prevention, and why this horrible tattoo might be the thing that a bunch of doomed eggheads would have designed for a future when all that was left was morons. Turns out, at least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.
By any short description of its plot, this film should be amazing and meta. Like Kung Fury or Galaxy Lords, but, let’s be frank, it is so not that. Someone at Netflix should produce a reboot and it would probably be amazing. No, instead, this film has an actor in a robotic Truman Capote getup smashing through dozens of cardboard sets and flailing vaguely in the direction of characters who dutifully scream and drop from the non-contact karate chop.
And hugs. Robot assassins need hugs, too.
It is a pathetic paean to its source material, the much better-done Cybernauts from The Avengers (the British one with a younger Olenna, not the Marvel one with the cosmic purple snap, crackle, and pop).
Sci: F (0 of 4) How believable are the interfaces?
The mission slot has some nice affordances, but deep strategic flaws. The mission card looks like a copy made by someone who didn’t quite understand what they were looking at. The Trivium Bracelet and remote just break all believability, earning the film a flat zero.
Fi: B (3 of 4) How well do the interfaces inform the narrative of the story?
ID card goes in slot, evil robot finds that person. Bracelet roboticizes people, remote controls them. As dumb (and derivative) as the technologies are, the interfaces help you understand the kindergarten-minded rules for technology in this diegesis.
Interfaces: F (0 of 4) How well do the interfaces equip the characters to achieve their goals?
Recall that these interfaces all serve the bad guy. The mission slot interface is actually quite nice for its simplicity, but loses any credit since it ultimately becomes a paper trail of evidence against him, all in one convenient robot just waiting for authorities to uncover. The bracelet might get props for being easy to get on, if it weren’t also just as easy to get off again, and if it didn’t need tailoring for each new victim. The remotes are also quite nice for their simplicity and even their visual hierarchy, but only by virtue of apologetics and thinking of them as prototypes. All the knobs and modes needed labeling, anyway. So, a goose egg.
FIN
Final Grade F (3 of 12), Dreck.
Don’t bother. Or do bother, but only to get a schadenfreude chuckle out of the ordeal. Or maybe some tripping material from the janky transfer.
So, loyal readers may rightly ask themselves why on earth I reviewed this pile of metallic crap, which is unknown, uninfluential, and rightly condemned to the trash bin of cinematic B-movie history. One glance at the YouTube transfer (or perhaps the director’s oeuvre) should have made all this clear, yes. Well, here are three reasons.
It’s the film’s 50th anniversary, which is adorable.
I try not to judge a book by its cover, and delight in trying to find truffles in oubliettes.
It was a very lightweight way (only four interfaces!) to begin a year dedicated to AI in sci-fi.
In case that last bit didn’t land, let me reiterate outside a bullet list: All posts in 2019 on this blog will focus on the topic of AI in sci-fi. And this film belongs in a category of one of our oldest kinds of fictional AIs, the Judaic story of the Golem.
Hit Points: 178 (17d10+85). Special attack: Unreasonable interpretation
It’s been told time and again in different ways, but in most tellings, the golem is a construct that mindlessly obeys whatever instruction it is given, and in its mindless interpretation does grave damage, even turning back on its maker. Other shows utilizing this trope include Metropolis, Battlestar Galactica, the Alien franchise, The Sorcerer’s Apprentice, and 2001: A Space Odyssey. I even think that Arabic stories of djinn fulfill the same purpose. Each illustrates how agents that ruthlessly pursue goals—with neither the human sense of reasonableness nor an ethical concern for human wellbeing—can go devastatingly awry.
Golem stories illustrate how agents that ruthlessly pursue goals—with neither the human sense of reasonableness nor an ethical concern for human wellbeing—can go devastatingly awry.
—This article, like, just now
They are conservative tales in the apolitical sense that they imply we should be very very cautious when engaging these kinds of machines. Don’t start until you’re absolutely sure. This is a key concern for AI. How do we ensure that the intelligences we build do what we want them to, reasonably? How can we encode a concern for humanity?
Aw, hell, no.
Las Luchadoras doesn’t provide any answers, just a warning, some awesome masks, and an occasional piledriver. But we’ll be on the lookout as we continue to examine other examples of sci-fi AI.
Given that the last review I completed was the Star Wars Holiday Special, which was also Dreck, maybe it’s high time I complete a good movie. OK, then. That means back to Idiocracy. And yes, in that tale of stupidity, there is a surprising tale of super intelligence.
Once a victim is wearing a Trivium Bracelet, any of Orlak’s henchmen can control the wearer’s actions. The victim’s expression is blank, suggesting that their consciousness is either comatose, twilit, or in some sort of locked-in state. Their actions are controlled via a handheld remote control.
We see the remote control in use in four places in Las Luchadoras vs El Robot Asesino.
One gets clapped on Dr. Chavez to test it.
One goes on Gemma to demonstrate it.
One is removed from the robot.
One goes on Berthe to transform her to Black Electra.
The control token in Las Luchadoras is a bracelet that slaps on and instantly renders its wearer an automaton, subject to the remote control.
Here’s something to note about this speculative technology. Orlak could have sold this, just this, to law enforcement around the world and made himself a very rich and powerful person. But the movie makes clear he is a mad engineer, not a mad businessperson, so we have to move on.
From Orlak’s point of view, getting the bracelet onto a victim should be very easy. Fortunately for him, it is: he can slap it on in a flick. But it’s also trivially easy for a bystander to remove, which seems like…a design oversight. It should work more like a handcuff, requiring a key to remove. It can’t look like a handcuff, of course, since Orlak wants it to go unnoticed. But in addition to the security, the handcuff function would enable the device to fit wrists of many sizes. As it is, it appears to be tailor-made to an individual.
As the diagram illustrates, not all wrists are made the same, and it would not help Orlak to have to carry around a sizing set when he hasn’t had time to secretly get the victim’s measurements.
Lastly, the audience might have benefited from seeing some visual connection between the bracelet and the remote, like a shared material that had an unusual color or glow, but Orlak would not want this connection since it could help someone identify him as the controller.
To provide a Victim Card to the Robot Asesino, Orlak inserts it into an open slot in the robot’s chest, which then illuminates, confirming that the instructions have been received.
There is, I must admit, a sort of lovely, morbid poetry to a cardiogram being inserted into a slot where the robot heart would be to give the robot instructions to end the beating of the human heart described in the cardiogram. And we don’t see a lot of poetry in sci-fi interface designs. So, props for that.
The illumination is a nice bit of feedback, but I think it could convey the information in more useful and cinegenic ways.
In this new scenario…
Orlak has the robot pull back its coat
The chamfered slot is illuminated, signaling “card goes here.”
As Orlak inserts the target card, the slot light dims as the chest-cavity light brightens, signaling “I have the card.”
After a moment, the chest-cavity light turns blood red, signaling confirmation of the victim and the new dastardly mission.
When the robot returns to Orlak after completing a mission, the red light would dim as the slot light illuminates again, signaling that it is ready for its next mission.
These changes improve the interface by first drawing the user’s locus of attention exactly where it needs to go, and then distinguishing the internal system states as they happen. It would also work for the audience, who understands by association that red means danger.
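For what it’s worth, the proposed light sequence is just a tiny state machine. Here is a minimal sketch in Python; the state and event names are my own invention for illustration, not anything from the film.

```python
# A hypothetical sketch of the proposed mission-slot feedback lights,
# modeled as a state machine. States follow the scenario above:
# idle -> ready -> reading -> on_mission -> idle again.

TRANSITIONS = {
    # (current state, event) -> (next state, light feedback)
    ("idle", "coat_pulled_back"): ("ready", "slot light on: 'card goes here'"),
    ("ready", "card_inserted"): ("reading", "slot light dims, chest light brightens"),
    ("reading", "card_read"): ("on_mission", "chest light turns red: target confirmed"),
    ("on_mission", "mission_complete"): ("idle", "red light dims, slot light returns"),
}

def step(state, event):
    """Advance the slot's state machine; unknown events leave the state unchanged."""
    next_state, feedback = TRANSITIONS.get((state, event), (state, None))
    return next_state, feedback

# Walk through one full mission cycle.
state = "idle"
for event in ["coat_pulled_back", "card_inserted", "card_read", "mission_complete"]:
    state, feedback = step(state, event)
    print(state, "->", feedback)
```

The point of the structure is that each light change is tied to exactly one system-state transition, which is what makes the feedback legible to both the user and the audience.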
The shape of the slot is pretty good for its base usability. It has clear affordances with its placement, orientation, and metallic lining. There’s plenty of room to insert the target card. It might benefit from a fillet or chamfer for the slot, to help avoid accidentally crumpling the paper cards when they are aimed poorly.
In addition to the tactical questions of illumination and shape of the slot, I have a few strategic questions.
There is no authorization in evidence. Can just anyone specify a target? Why doesn’t Gaby use her luchadora powers to Spin-A-Roonie a target card with Orlak’s face on it and let the robot save the day? Maybe the robot has a whitelist of heartbeats, and would fight to resist anyone else, but that’s just me making stuff up.
Also I’m not sure why the card stays in the robot. That leaves a discoverable paper trail of its crimes, perfect for a Scooby to hand over to the federales. Maybe the robot has some incinerator or shredder inside? If not, it would be better from Orlak’s perspective to design it as an insert-and-hold slot, which would in turn require a redesign of the card to have some obvious spot to hold it, and a bump-in on the slot to make way for fingers. Then he could remove the incriminating evidence and destroy it himself and not worry whether the robot’s paper shredder was working or not.
Another problem is that, since the robot doesn’t talk, it would be difficult to find out who its current target is at any given time. Since anyone can supply a target, Orlak can’t just rely on his memory to be certain. If the card was going to stay inside, it would be better to have it displayed so it’s easy to check.
How would Orlak cancel a target?
It is unclear how Orlak specifies whether a target is to be kidnapped or killed, even though some are kidnapped and some are killed.
It’s also unclear about how Orlak might rescind or change an order once given.
It is also unclear how the assassin finds its target. Does it have internal maps with addresses? Or does it have unbelievably good hearing that can listen to every sound nearby, isolate the particular heartbeat in question, and just head in that direction, destroying any walls it encounters? Or can it reasonably navigate human cities and interiors to maintain its disguise? Because that would be some amazing technology for 1969. This last is admittedly not an interface question, but a backworlding question for believability.
So there’s a lot missing from the interface.
It’s the robot assassin designer’s job not just to tick a box telling themselves that they have provided feedback, but to push through the scenarios of use to understand in detail how to convey to the evil scientist what’s happening with his murderous intent.