Idiocracy is secretly about super AI

I originally began to write about Idiocracy because…

  • It’s a hilarious (if mean) sci-fi movie
  • I am very interested in the implications of St. God’s triage interface
  • It seemed grotesquely prescient about the USA in the run-up to the 2016 elections
  • I wanted to do what I could to fight the Idiocracy in the 2018 elections using my available platform

But now it’s 2019, I’ve dedicated the blog to AI this year, and I’m still going to try to get you to re/watch this film, because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.

Not the obvious AIs

There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall what happens when Joe convinces the cabinet that he can talk to plants, and that the plants really want to drink water…well, let’s let the narrator from the film explain…

  • NARRATOR
  • Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.

At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”

The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.

I also take it as a given that AI writes the speeches that President Camacho reads, because who else could it be? These people are idiots who don’t understand the difference between government and corporations; of course they would want to run the government like a corporation, because it has better ads. And since AIs run the corporations in Idiocracy, it stands to reason that an AI writes the government’s speeches, too.

No. I don’t mean those AIs. I mean that you should rewatch the film understanding that Joe and Rita, the lead characters, are Super AIs in the context of Idiocracy.

The protagonists are super AIs

The literature distinguishes between three broad categories of artificial intelligence.

  • Narrow AI, which is the AI we have in the world now. It’s much better than humans in some narrow domain. But it can’t handle new situations. You can’t ask a roboinvestor to help plan a meal, for example, even though it’s very very good at investing.
  • General AI, definitionally meaning “human-like” in its ability to generalize from one domain of knowledge to handle novel situations. If this exists in the world, it’s being kept very secret. It probably does not.
  • Super AI, the intelligence of which dwarfs our own. Again, this probably doesn’t exist in the world, but if it does, it’s being kept very secret. Or maybe it’s even keeping itself secret. The difference between a bird’s intelligence and a human’s is a good way to think about the difference between our intelligence and a superintelligence. It will be able to out-think us at every step. We may not even be able to understand the language in which it asks its questions.
Illustration by the author (often used when discussing agentive technology).

Now the connection to Joe and Rita should be apparent. Though their intelligence is not artificial, the difference between their smarts and those of the Idiocracy approaches that same uncanny scale.

Watch how Joe and Rita move through this world. They are routinely flabbergasted at the stupidity around them. People are pointlessly belligerent, distractedly crass, easily manipulated, guided only by their base instincts, desperate to not appear “faggy,” and guffawing about (and cheering on) horrific violence. Rita and Joe are not especially smart by our standards, but they can outthink everyone around them by orders of magnitude, and that’s (comparatively) super AI.

The people of Idiocracy have idioted themselves into a genuine ecological crisis. They need to stop poisoning their environment because, at the very least, it’s killing them. But what about jobs! What about profits! Does this sound familiar?

Pictured: Us.

Joe doesn’t have any problem figuring out what’s wrong. He just tastes what’s being sprayed in the fields, and it’s obvious to him. His biggest problem is that the people he’s trying to serve are too dumb to understand the explanation (much less their culpability). He has to lie and feed them some bullshit reason and then manage people’s frustration that it doesn’t work instantly, even though he knows and we know it will work given time.

In this role as superintelligences, our two protagonists illustrate key critical concerns we have about superintelligent AIs:

  1. Economic control
  2. Social manipulation
  3. Uncontainability
  4. Cooperation between “multis”

Economic control

Rita finds it trivially easy to bilk one idiot out of money and gain economic power. She could use her easy lucre to, in turn, control the people around her. Fortunately she is a benign superintelligence.

Yeah baby I could wait two days.

In Chapter 6 of the seminal work on the subject, Superintelligence, Nick Bostrom lists six superpowers that an ASI would work to gain in order to achieve its goals. The last of these he terms “economic productivity,” which the ASI can use to “generate wealth which can be used to buy influence, services, resources (including hardware), etc.” This scene serves as a lovely illustration of that risk.

Of course you’re wondering what the other five are, so rather than making you go hunt for them…

  1. Intelligence amplification, to bootstrap its own intelligence
  2. Strategizing, to achieve distant goals and overcome intelligent opposition
  3. Social manipulation, to leverage external resources by recruiting human support, to enable a boxed AI to persuade its gatekeepers to let it out, and to persuade states and organizations to adopt some course of action.
  4. Hacking, so the AI can expropriate computational resources over the internet, exploit security holes to escape cybernetic confinement, steal financial resources, and hijack infrastructure like military robots, etc.
  5. Technology research, to create a powerful military force, to create surveillance systems, and to enable automated space colonization.
  6. Economic productivity, to generate wealth which can be used to buy influence, services, resources (including hardware), etc.

Social manipulation

Joe demonstrates the second of these, social manipulation, repeatedly throughout the film.

  • He convinces Frito to help him in exchange for the profits from a time-travel compound-interest gambit (a back-of-the-envelope sketch of that math follows this list)
  • He convinces the cabinet to switch to watering crops by telling them he can talk to plants.
  • He convinces the guard to let him escape prison (more on this below).
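Since the compound-interest gambit is the one pitch with actual math behind it, here’s a quick back-of-the-envelope sketch of why it’s such an easy sell. To be clear, the principal and the interest rates below are made-up numbers for illustration; the film never specifies them, only the 500-year wait.

```python
# Back-of-the-envelope math behind Joe's pitch to Frito: a small deposit in 2005,
# collected (with interest) in 2505. Principal and rates are illustrative guesses.

def future_value(principal: float, annual_rate: float, years: int) -> float:
    """Simple annual compounding: FV = P * (1 + r) ** years."""
    return principal * (1 + annual_rate) ** years

if __name__ == "__main__":
    for rate in (0.03, 0.05):
        fv = future_value(1.00, rate, 2505 - 2005)
        print(f"$1 at {rate:.0%} for 500 years grows to ${fv:,.0f}")
    # Even at modest rates, 500 years of compounding turns pocket change into
    # a fortune, which is why Frito only needs to hear "billions of dollars."
```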

Joe’s not perfect at it. Early in the film he tries using reason to convince the court of his innocence, and fails. Later he fails to convince the crowd at Rehabilitation to release him. An actual ASI would have an easier time of these things.

Uncontainability

The only way they contain Joe in the early part of the film is with a physical cage, and that doesn’t last long. He finds it trivially easy to escape their prison using, again, social manipulation.

  • JOE
  • Hi. Excuse me. I’m actually supposed to be getting out of prison today, sir.
  • GUARD
  • Yeah. You’re in the wrong line, dumb ass. Over there.
  • JOE
  • I’m sorry. I am being a big dumb ass. Sorry.
  • GUARD (to other guard)
  • Hey, uh, let this dumb ass through.

Eliezer Yudkowsky, Research Fellow at the Machine Intelligence Research Institute, has described the AI-box problem, in which he illustrates the folly of thinking that we could contain a super AI. (Bostrom also cites him in the Superintelligence book.) Using only a text terminal, he argues, an ASI could convince even a well-motivated human to release it. He has even run informal experiments in which a participant played the unwilling gatekeeper and he played the ASI, and both times the human relented. And while Eliezer is a smart guy, he is not an ASI, which would have an even easier time of it. This scene illustrates how easily an ASI would thwart our attempts to cage it.

Cooperation between multis

Chapter 11 of Bostrom’s book focuses on how things might play out if, instead of only one ASI in the world (a “singleton”), there are many ASIs, or “multis.” (Colossus: The Forbin Project and Person of Interest also explore these scenarios with artificial superintelligences.)

In this light, Joe and Rita are multis who unite over shared circumstances and woes, and manage to help each other out in their struggle against the idiots. Whatever advantage the general intelligences have over an individual ASI is significantly diminished when the ASIs work together.

Note: In Bostrom’s telling, multis don’t necessarily stabilize each other; they just make things more complex and don’t solve the core principal-agent problem. But he does acknowledge that stable, voluntary cooperation is a possible scenario.

Cold comfort ending

At the end of Idiocracy, we can take some cold comfort that Rita and Joe have a moral sense, a sense of self-preservation, and sympathy for their fellow humans. All they wind up doing is becoming rulers of the world and living out their lives. (Oh god, are their kids von Neumann probes?) The implication is that, as smart as they are, they will still be outpopulated by the idiots of that world.

Imagine this story retold with Joe and Rita as psychopaths obsessed with making paper clips, armed with their superintelligent superpowers against our stupidity. The idiots would be enslaved to paper-clip making before they could ask whether or not it’s fake news.

Or, even less abstractly, there is a deleted “stinger” scene at the end of some DVDs of the film in which Rita’s pimp UPGRAYEDD wakes up from his own hibernation chamber right there in 2505 and strolls confidently into town. The implied sequel would deal with an amoral ASI (UPGRAYEDD) hostile to its mostly-benevolent ASI rulers (Rita and Joe). It does not foretell fun times for the Idiocracy.


For me, this interpretation of the film is important to “redeem” it, since its big takeaway—that is, that people are getting dumber over time—is known to be false. The Flynn Effect, named for its discoverer James R. Flynn, is the repeatedly confirmed observation that measured intelligence has been rising, roughly linearly, since measurements began. To be specific, the effect is not seen in general intelligence so much as in the subset of fluid, or analytical, intelligence measures. The rate is about 3 IQ points per decade.

Wait. What? How can this be? Given the world’s recent political regression (which kickstarted the series on fascism and even this review of Idiocracy) and the constant news stories of the “Florida Man” sort, the assertion does not seem credible. But that’s probably just availability bias. Experts cite several factors that are probably contributing to the effect.

  • Better health
  • Better nutrition
  • More and better education
  • Rising standards of living

The thing that Idiocracy points to—people of lower intelligence outbreeding people of higher intelligence—has not turned out to be an important factor. Given the effect, this story might be better told not about a time traveler heading forward, but about one heading backward to some earlier era. Think Idiocracy, but amongst the idiots of the Renaissance.
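Just to put numbers on how backwards the premise is, take the Flynn rate above and naively run it across the film’s own 500-year jump. (Obvious caveat: nobody expects the effect to extrapolate linearly for five centuries; this is purely to show which direction the arrow points.)

```python
# Naive linear extrapolation of the Flynn effect across Idiocracy's hibernation gap.
# Purely illustrative; no one expects the trend to hold for five centuries.

FLYNN_RATE_PER_DECADE = 3          # ~3 IQ points per decade, per the figure above
HIBERNATION_YEARS = 2505 - 2005    # Joe and Rita's jump

decades = HIBERNATION_YEARS / 10
projected_gain = FLYNN_RATE_PER_DECADE * decades
print(f"Naive projection: +{projected_gain:.0f} IQ points by 2505")  # +150, not a collapse
```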

Since I know a lot of smart people who took this film to be an exposé of a dark universal pattern that, if true, would genuinely sour your worldview and dim your sense of hope, it seems important to share this.


So go back and rewatch this marvelous film, but this time, dismiss the doom and gloom of declining human intelligence, and watch instead how Idiocracy illustrates some key risks (if not all of them) that super artificial intelligence poses to the world. For it really is a marvelously accessible shorthand for some of the critical reasons we ought to be super cautious about the possibility.

Tattoo surveillance

In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: Namely, Joe’s tattoo and the distance-scanning vending machine.

It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.

It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.

This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem, i.e. locating the fugitive for apprehension, and then at the ultimate goal, i.e. how a culture deals with crime.

A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.

Proximate problem: Finding the fugitive

The red scan is fast, but it’s very noticeable: the sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble their efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection, like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI clues showing it identifying, zooming in on, and confirming the barcode.

Yes, that’s a shout-out.
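To make the MPOV idea a little more concrete, here’s a minimal sketch of what the vending machine’s detection loop might look like with today’s off-the-shelf parts: OpenCV for the camera feed and pyzbar for reading barcodes. The watch list and the “alert the authorities” step are, of course, hypothetical stand-ins for whatever the Idiocracy’s back end would be.

```python
# A rough sketch of the vending machine's MPOV loop: watch the camera feed,
# find barcodes in frame, and flag any that match a watch list so the UI can
# zoom in on and confirm the tattoo. The watch list and alert step are made up.

import cv2                          # camera capture
from pyzbar.pyzbar import decode    # off-the-shelf barcode reader

FUGITIVE_WATCHLIST = {b"NOT-SURE-001"}  # hypothetical ID payload for illustration

def scan_frame(gray_frame):
    """Return (data, bounding_rect) for every watch-listed barcode in the frame."""
    return [
        (barcode.data, barcode.rect)        # rect = (left, top, width, height)
        for barcode in decode(gray_frame)
        if barcode.data in FUGITIVE_WATCHLIST
    ]

def main():
    cam = cv2.VideoCapture(0)               # the machine's outward-facing camera
    while True:
        ok, frame = cam.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for data, rect in scan_frame(gray):
            # This is where the movie version would zoom the MPOV shot to the
            # tattoo, confirm the match, and phone the authorities.
            print(f"Fugitive code {data!r} spotted at {rect}")
    cam.release()

if __name__ == "__main__":
    main()
```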

So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?

Goal chain

  • Why is the system locating him? To tell authorities so they can go there and apprehend him.
  • Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and so must be incarcerated.
  • Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
  • Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
  • Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than defense against encroachment. Such lawful societies benefit from network effects.

The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.

Ultimate problem: Preventing crime

In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed state-of-the-art criminology findings and listed five key conclusions about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Daniel S. Nagin, 2013

How might we increase the evident chance of being caught?

  1. Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
  2. Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
  3. Another way might be media: making sure that potential criminals hear an overwhelming number of stories, through their networks, of criminals being successfully captured. This could involve editorial choice, or even media manipulation, filtering to ensure that “got caught” narratives appear in feeds more than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
  4. The last way is to increase the sense of being observed. And that leads us (as so many things do) to the panopticon.

The Elaboratory*

The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in letters to their father. Here is a useful illustration.

*Elaboratory was one of the alternate terms he suggested for the idea. It didn’t catch on since it didn’t have the looming all-seeing-eye ring of the other term.

Elevation, section, and plan as drawn by Willey Reveley, 1791

The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from an efficacy and economic standpoint.

“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”

—Jeremy Bentham

It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault, who in Discipline and Punish regarded it not as a money-saving design, but as an illustration of the effect of power. Or maybe Orwell, who did not use the term, but extended it to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who reconceived it for information technology in the workplace in In the Age of the Smart Machine.

Umm…Carol? Why aren’t you at your centrifuge?

In Benjamen Walker’s podcast Theory of Everything, he dedicates an episode to the argument that as a metaphor it needs to be put away, since…

  1. It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
  2. Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
  3. Bentham regarded it as a tool for behavior modification, but the metaphor is rarely used to talk about how surveillance changes us and our identities; instead it gets used to frame surveillance as a violation of privacy rights.

It’s a good series, check it out, and hat tip to Brother-from-a-Scottish-Mother John V Willshire for pointing me in its direction.

To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)

Guns are bad.

But then, Idiocracy

In Idiocracy, this interface—the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently visible—identifier. You are being watched. (Holy crap, now I have yet another reason to love Person of Interest. It adds the notion of AI surveillance to our collective media impression. Anyway…) In this scene, it’s a clear signal that he and his co-offenders could see, which means they would tell their friends this story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.

Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their own and other people’s wrists would keep sending the signal that everyone is being watched. It’s crappy surveillance, which we don’t like for all the reasons we don’t like it, but it illustrates why stealth detection may not be the ideal for crime prevention, and why this horrible tattoo might be exactly the thing that a bunch of doomed eggheads would have designed for a future when all that was left was morons. Turns out that, at least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.

Beep.


Report Card: Las Luchadoras vs. El Robot Asesino

Read all the Las Luchadoras vs. El Robot Asesino posts in chronological order.

By any short description of its plot, this film should be amazing and meta, like Kung Fury or Galaxy Lords. But, let’s be frank, it is so not that. Someone at Netflix should produce a reboot, and it would probably be amazing. No, instead, this film has an actor in a robotic Truman Capote getup smashing through dozens of cardboard sets and flailing vaguely in the direction of characters who dutifully scream and drop from the non-contact karate chop.

And hugs. Robot assassins need hugs, too.

It is a pathetic paean to its source material, the much better-executed Cybernauts from The Avengers (the British one with a younger Olenna, not the Marvel one with the cosmic purple snap crackle and pop).

Sci: F (0 of 4) How believable are the interfaces?

The mission slot has some nice affordances, but deep strategic flaws. The mission card is a copy by someone who didn’t quite understand what they were looking at. The trivium bracelet and remote just break all believability, earning the film a flat zero.

Fi: B (3 of 4) How well do the interfaces inform the narrative of the story?

ID card goes in slot, evil robot finds that person. Bracelet roboticizes people, remote controls them. As dumb (and derivative) as the technologies are, the interfaces help you understand the kindergarten-minded rules for technology in this diegesis.

Interfaces: F (0 of 4) How well do the interfaces equip the characters to achieve their goals?

Recall that these interfaces all serve the bad guy. The mission slot interface is actually quite nice for its simplicity, but loses any credit since it ultimately becomes a paper trail of evidence against him, all in one convenient robot just waiting for authorities to uncover. The bracelet might get props for being easy to get on, if it weren’t also just as easy to get off again and didn’t need tailoring for each new victim. The remotes are also quite nice for their simplicity and even their visual hierarchy, but only by virtue of apologetics and thinking of them as prototypes. All the knobs and modes needed labeling, anyway. So, a goose egg.

FIN

Final Grade F (3 of 12), Dreck.

Don’t bother. Or do bother, but only to get a schadenfreude chuckle out of the ordeal. Or maybe some tripping material from the janky transfer.

So, loyal readers may rightly ask themselves why on earth I reviewed this pile of metallic crap, which is unknown, uninfluential, and rightly condemned to the trash bin of cinematic B-movie history. One glance at the YouTube transfer (or perhaps the director’s oeuvre) should have made all this clear, yes? Well, here are three reasons.

  1. It’s the film’s 50th anniversary, which is adorable.
  2. I try not to judge a book by its cover, and delight in trying to find truffles in oubliettes.
  3. It was a very lightweight way (only four interfaces!) to begin a year dedicated to AI in sci-fi.

In case that last bit didn’t land, let me reiterate outside a bullet list: all posts in 2019 on this blog will focus on the topic of AI in sci-fi. And this film belongs to the category of one of our oldest kinds of fictional AI, the Judaic story of the Golem.

Hit Points: 178 (17d10+85)
Special attack: Unreasonable interpretation

It’s been told time and again in different ways, but in most tellings, the golem is a construct that mindlessly obeys whatever instruction it is given, and in its mindless interpretation does grave damage, even turning back on its maker. Other shows utilizing this trope include Metropolis, Battlestar Galactica, the Alien franchise, The Sorcerer’s Apprentice, and 2001: A Space Odyssey. I even think that Arabic stories of djinn fulfill the same purpose. Each illustrates how agents that ruthlessly pursue goals—with neither the human sense of reasonableness nor an ethical concern for human wellbeing—can go devastatingly awry.

Golem stories illustrate how agents that ruthlessly pursue goals—with neither the human sense of reasonableness nor an ethical concern for human wellbeing—can go devastatingly awry.

—This article, like, just now

They are conservative tales in the apolitical sense, in that they imply we should be very, very cautious when engaging with these kinds of machines. Don’t start until you’re absolutely sure. This is a key concern for AI: How do we ensure that the intelligences we build do what we want them to, reasonably? How can we encode a concern for humanity?

Aw, hell, no.

Luchadoras doesn’t provide any answers, just a warning, some awesome masks, and an occasional piledriver. But we’ll keep these questions in mind as we continue looking at other examples of sci-fi AI.


Given that the last review I completed was the Star Wars Holiday Special, which was also Dreck, maybe it’s high time I completed a review of a good movie. OK, then. That means back to Idiocracy. And yes, in that tale of stupidity, there is a surprising story of superintelligence.