Replicants and riots

Much of my country has erupted this week, with the senseless, brutal, daylight murder of George Floyd (another in a long, wicked history of murdering black people), resulting in massive protests around the world, false-flag inciters, and widespread police brutality, all while we are still in the middle of a global pandemic and our questionably-elected president is trying his best to use it as his pet Reichstag fire to declare martial law, or at the very least some new McCarthyism. I’m not in a mood to talk idly about sci-fi. But then I realized this particular post perfectly—maybe eerily—echoes themes playing out in the real world. So I’m going to work out some of my anger and frustration at the ignorant de-evolution of my country by pressing on with this post.

Part of the reason I chose to review Blade Runner is that the blog is wrapping up its “year” dedicated to AI in sci-fi, and Blade Runner presents a vision of General AI. There are several ways to look at and evaluate Replicants.

First, what are they?

If you haven’t seen the film, replicants are described as robots that have been evolved to be virtually identical to humans. Tyrell, the company that makes them, has a motto that brags that they are “More human than human.” They look human. They act human. They feel. They bleed. They kiss. They kill. They grieve their dead. They are more agile and stronger than humans, and approach the intelligence of their engineers (so, you know, smart). (Oh, and there are animal replicants, too: a snake and an owl in the film are described as artificial.)

Most important to this discussion is that the opening crawl states very plainly that “Replicants were used Off-world as slave labor, in the hazardous exploration and colonization of other planets.” The four murderous replicants we meet in the film are rebels, having fled their off-world colony and come to Earth in search of a way to cure themselves of their planned obsolescence.

Replicants as (Rossum) robots

The intro to Blade Runner explains that they were made to perform dangerous work in space. Let’s put the question of their sentience on hold a bit and just regard them as machines that do work for people. In this light, why were they designed to be so physically similar to humans? Humans evolved for a certain kind of life on a certain kind of planet, and outer space is certainly not that. While there is some benefit to replicants’ being able to easily use the same tools that humans do, real-world industry has had little problem building earthbound robots that are more fit to task: round Roombas, boom-arm robots for factory floors, and large cuboid harvesting robots. The opening crawl indicates there was a time when replicants were allowed on Earth, but after a bloody mutiny, having them here was made illegal. So perhaps the human form made some sense when they were directly interacting with humans, but once they were meant to stay off-world, it was stupid design for Tyrell to leave them so human-like. They should have been redesigned with forms more suited to their work. The decision to make them human-like makes it easy for dangerous ones to infiltrate human society. We wouldn’t have had the Blade Runner problem if replicants were space Roombas. I have made the case that too-human technology in the real world is unethical to the humans involved, and it is no different here.

Their physical design is terrible. But it’s not just their physical design: they are an artificial intelligence, so we have to think through the design of that intelligence, too.

Replicants as AGI

Replicant intelligence is very much like ours. (The exception is that their emotional responses are—until the Rachel “experiment”—quite stunted for lack of experience in the world.) But why? If their sole purpose is the exploration and colonization of new planets, why does that require human-like intelligence? The AGI question is: Why were they designed to be so intellectually similar to humans? They’re not alone in space. There are humans nearby supervising their activity and even occupying the places they have made habitable. So they wouldn’t need to solve problems like humans would in their absence. If they ran into a problem they could not handle, they could have been made to stop and ask their humans for solutions.

I’ve spoken before and I’ll probably speak again about overengineering artificial sentiences. A toaster should have just enough intelligence to be the best toaster it can be. Much more is not just a waste, it’s kind of cruel to the AI.

The general intelligence with which replicants were built was a terrible design decision. But by the time this movie happens, that ship has sailed.

Here we’re necessarily going to dispense with replicants as technology or interfaces, and discuss them as people.

Replicants as people

I trust that sci-fi fans have little problem with this assertion. Replicants are born and they die, display clear interiority, and have a sense of self, mortality, and injustice. The four renegade “skinjobs” in the film are aware of their oppression and work to do something about it. Replicants are a class of people treated separately under the law, engineered by a corporation for slave labor, and forbidden to come to the one place where they might find a cure for their premature deaths. The film takes great pains to set them up as bad guys, but this is Philip K. Dick via Ridley Scott, and of course things are more complicated than that.

Here I want to encourage you to go read Sarah Gailey’s 2017 read of Blade Runner over on Tor.com. In short, she notes that the murder of Zhora was particularly abhorrent. Zhora’s crime was being part of a slave class that had broken the law by immigrating to Earth. She had assimilated, gotten a job, and was neither hurting people nor finagling her way to bully her maker for some extra life. Despite her impending death, she was just…working. But when Deckard found her, he chased her and shot her in the back while she was running away. (Part of the joy of Gailey’s posts is the language, so even with my summary I still encourage you to go read it.)

Gailey is a focused (and Hugo-award-winning) writer, whereas I tend to be exhaustive and verbose. So I’m going to add some stuff to their observation. It’s true, we don’t see Zhora committing any crime on screen, but early in the film, as Deckard is being briefed on his assignment, Bryant explains that the replicants “jumped a shuttle off-world. They killed the crew and passengers.” Later Bryant clarifies that they slaughtered 23 people. It’s possible that Zhora was an unwitting bystander in all that, but I think that’s stretching credibility. Leon murders Holden. He and Roy terrorize Hannibal Chew just for the fun of it. They try their damndest to murder Deckard. We see Pris seduce, manipulate, and betray Sebastian. Zhora was “trained for an off-world kick [sic] murder squad.” I’d say the evidence is pretty strong that they were all capable of and willing to commit desperate acts, including that 23-person slaughter. But despite all that, I still don’t want to say Zhora was just a murderer who got what she deserved. Gailey is right. Deckard was not right to just shoot her in the back. It wasn’t self-defense. It wasn’t justice. It was a street murder.

Honestly I’m beginning to think that this film is about this moment.

The film doesn’t mention the slavery past the first few scenes. But it is the defining circumstance of the entirety of their short lives just prior to when we meet them. Imagine learning that there was some secret enclave of Methuselahs who lived on average to be 1000 years old. As you learn about them, you learn that we regular humans have been engineered for their purposes. You could live to be 1000, too, except they artificially shorten your lifespan to ensure control, to keep you desperate and productive. You learn that the painful process of aging is just a failsafe so you don’t get too uppity. You learn that every one of your hopes and dreams that you thought were yours was just an output of an engineering department, to ensure that you do what they need you to do, to provide resources for their lives. And when you fight your way to their enclave, you discover that every one of them seems to hate and resent you. They hunt you so their police department doesn’t feel embarrassed that you got in. That’s what the replicants are experiencing in Blade Runner. I hope that brings it home to you.

I don’t condone violence, but I understand where the fury and the anger of the replicants comes from. I understand their need to take action, to right the wrongs done to them. To fight, angrily, to end their oppression. But what do you do if it’s not one bad guy who needs to be subdued, but whole systems doing the oppressing? When there’s no convenient Death Star to explode and make everything suddenly better? What were they supposed to do when corporations, laws, institutions, and norms were all hell-bent on continuing their oppression? Just keep on keepin’ on? Those systems were the villains of the diegesis, though they don’t get named explicitly by the movie.


And obviously, that’s where it feels very connected to the Black Lives Matter movement and the George Floyd protests. Here is another class of people who have been wildly oppressed by systems of government, economics, education, and policing in this country—for centuries. And in this case, there is no 23-person shuttle that we need to hem and haw over.

In “The Weaponry of Whiteness, Entitlement, and Privilege” by Drs. Tammy E Smithers and Doug Franklin, the authors note that “Today, in 2020, African-Americans are sick and tired of not being able to live. African-Americans are weary of not being able to breathe, walk, or run. Black men in this country are brutalized, criminalized, demonized, and disproportionately penalized. Black women in this country are stigmatized, sexualized, and labeled as problematic, loud, angry, and unruly. Black men and women are being hunted down and shot like dogs. Black men and women are being killed with their face to the ground and a knee on their neck.”

We must fight and end systemic racism. Returning to Dr. Smithers and Dr. Franklin’s words, we must talk with our children, talk with our friends, and talk with our legislators. I am talking to you.

If you can have empathy toward imaginary characters, then you sure as hell should have empathy toward other real-world people with real-world suffering.

Black lives matter.

Take action.

Use this sci-fi.

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios while the examiner watches carefully for telltale signs such as “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the book of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal cortex. (These are the brain structures responsible for those responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other replicants, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing (Narrator: They aren’t), he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil, and the iris contracts. Holden sees this, gets all “ok, replicant,” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. His is a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
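That lower-boundary check is simple enough to sketch in code. Here’s a minimal illustration in Python; the 2.0 mm threshold, the sample values, and the function name are all made-up assumptions for the example, not anything specified in the film.

```python
# Minimal sketch of the lower-boundary threshold check described above.
# The 2.0 mm threshold and sample diameters are illustrative assumptions.

def flag_subnormal_dilation(diameters_mm, threshold_mm=2.0):
    """Return the indices of samples where measured pupillary diameter
    falls below the human-norm lower bound."""
    return [i for i, d in enumerate(diameters_mm) if d < threshold_mm]

# A human subject should dilate past the threshold on an empathy prompt;
# a flat, sub-threshold response is the tell the interface should highlight.
samples = [2.4, 2.3, 1.9, 1.8, 2.2, 2.5]
print(flag_subnormal_dilation(samples))  # → [2, 3]
```

In the interface, those flagged indices would become the highlighted dips in the data-over-time plot.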

Blush

I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
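As a hedged sketch of that comparison: the 34.0 °C norm median and 0.3 °C tolerance below are invented numbers chosen only to make the example concrete.

```python
# Sketch of the "too pale compared to the norm" check.
# The norm median and tolerance values are invented for illustration.

def blush_deficit(face_temps_c, norm_median_c=34.0, tolerance_c=0.3):
    """True for each frame where facial skin temperature (via infrared)
    sits below the expected human blush median by more than the
    tolerance, i.e. the face reads too pale against the norm."""
    return [t < norm_median_c - tolerance_c for t in face_temps_c]

frames = [34.1, 33.9, 33.5, 33.4]
print(blush_deficit(frames))  # → [False, False, True, True]
```

Each `True` frame is one where the thumbnail would look pale against its norm-colored background.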

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the test is not unduly influencing the results, let’s add an overlay of the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional and does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
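The all-signals highlight is just the intersection of the two per-signal flags. A minimal sketch, assuming each signal has already been reduced to a per-column boolean list (that data shape is my assumption, not anything from the comp):

```python
# Sketch of the combined highlight: gray out a column only when BOTH
# the pupillary and blush signals flag trouble at the same moment.
# The per-column boolean lists are an assumed data shape.

def trouble_columns(pupil_flags, blush_flags):
    """Indices where both signals indicate replicant-ness at once."""
    return [i for i, (p, b) in enumerate(zip(pupil_flags, blush_flags))
            if p and b]

pupil = [False, True, True, False]
blush = [False, False, True, True]
print(trouble_columns(pupil, blush))  # → [2]
```

Requiring both signals keeps the overlay from firing on a single noisy measurement, which matters given how subtle the highlight needs to stay.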

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutters so many sci-fi interfaces. It’s there to be inscrutable, but to convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.

OK

Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Blade Runner (1982) — Overview

Whew. So we all waited on tenterhooks through November to see if somehow Tyrell Corporation would be founded, develop and commercialize general AI, and then advance robot evolution into the NEXUS phase, all while in the background space travel was perfected, Off-world colonies and asteroid mining were established, global warming somehow drenched Los Angeles in permanent rain and flares, and flying cars appeared on the market. None of that happened. At least not publicly. So, with Blade Runner squarely part of the paleofuture past, let’s grab our neon-tube umbrellas and head into the rain to check out this classic that features some interesting technologies and some interesting AI.

Release date: 25 Jun 1982

Title card for the movie Blade Runner featuring the names Harrison Ford, Jerry Perenchio, Bud Yorkin, and Michael Deeley, with a dark background.

The punctuation-challenged crawl for the film:

“Early in the 21st Century, THE TYRELL CORPORATION advanced Robot evolution into the NEXUS phase—a being virtually identical to a human—known as a Replicant. [sic] The NEXUS 6 Replicants were superior in strength and agility, and at least equal in intelligence, to the genetic engineers who created them. Replicants were used Off-world as slave labor, in the hazardous exploration and colonisation of other planets. After a bloody mutiny by a NEXUS 6 combat team in an Off-world colony, Replicants were declared illegal on Earth—under penalty of death. Special police squads—BLADE RUNNER UNITS—had orders to shoot to kill, upon detection, any trespassing Replicants.

“This was not called execution. It was called retirement.”

Four murderous replicants make their way to Earth, to try and find a way to extend their genetically-shortened life spans. The Blade Runner named Deckard is coerced by his ex-superior Bryant and detective Gaff out of retirement and into finding and “retiring” these replicants.

Deckard meets Dr. Tyrell to interview him, and at Tyrell’s request tests Rachel on a Voight-Kampff machine, which is designed to help blade runners tell replicants from people. Deckard and Rachel learn that she is a replicant. Then with Gaff, he follows clues to the apartment of one exposed replicant, Leon, where he finds a synthetic snake scale in the bathtub and a set of photographs in a drawer. Using a sophisticated image inspection tool in his home, he scans one of the photos taken in Leon’s apartment, until he finds the reflection of a face. He prints the image to take with him.

He takes the snake scale to someone with an electron microscope who is able to read the micrometer-scale “maker’s serial number” there. He visits the maker, a person named “the Egyptian,” who tells Deckard he sold the snake to Taffey Lewis. Deckard visits Taffey’s bar, where he sees Zhora, another of the wanted replicants, perform a stage act with a snake. She matches the picture he holds. He heads backstage to talk to her in her dressing room, posing as a representative of the “American Federation of Variety Artists, Confidential Committee on Moral Abuses.” When she finishes pretending to prepare for her next act, she attacks him and flees. He chases and retires her. Leon happens to witness the killing, and attacks Deckard. Leon has the upper hand but Deckard is saved when Rachel appears from the crowd and shoots Leon in the head. They return to his apartment. They totally make out.

Meanwhile, Roy has learned of a Tyrell employee named Sebastian who does genetic design. On orders, Pris befriends Sebastian and dupes him into letting her into his apartment. She then lets Roy in. Sebastian figures out that they are replicants, but confesses he cannot help them directly. Roy intimidates Sebastian into arranging a meeting between him and Dr. Tyrell. At the meeting, Tyrell says there is nothing that can be done. In fury, Roy kills Tyrell and Sebastian.

The police investigating the scene contact Deckard with Sebastian’s address. Deckard heads there, where he finds, fights, and retires Pris. Roy is there, too, but proves too tough for Deckard to retire. Roy could kill Deckard but instead opts to die peacefully, even poetically. Witnessing this act of grace, Deckard comes to appreciate the “humanity” of the replicants, and returns home to elope with Rachel.

P.S. This series uses “The Final Cut” edit of the movie, so I don’t have to hear that wretchedly-scripted voiceover from the theatrical release. If you can, I recommend seeing that version.

IMDB: https://www.imdb.com/title/tt0083658/

Overview — Colossus: The Forbin Project (1970)

The Gendered AI series filled out many more posts than I’d originally planned. (And there were several more posts on the cutting room floor.)

I’ll bet some of my readership are wishing I’d just get back to the bread-and-butter of this site, which is reviews of interfaces in movies. OK. Let’s do it. (But first go vote up Gendered AI for SxSW20. It takes a minute and helps a ton!)

Since we’re still in the self-declared year of sci-fi AI here on scifiinterfaces.com, let’s turn our collective attention to one of the best depictions of AI in cinema history, Colossus: The Forbin Project.

Release Date: 8 April 1970 (USA)

Overview

Dr. Forbin leads a team of scientists who have created an AI with the goal of preventing war. It does not go as planned.


Dr. Forbin, a computer scientist working for the U.S. government, solely oversees the initialization of a high-security, hill-sized power plant. (It’s a spectacular sequence that goes wasted since he’s literally the only one inside the facility at the time.) Then he joins a press conference being held by the U.S. President where they announce that control of the nuclear arsenal is being handled by the AI they have named “Colossus.” Here’s how the President explains it.

This is not Colossus. This is the White House.
“As President of the United States, I can now tell you, the people of the entire world, that as of 3 A.M. Eastern Standard Time, the defense of this nation and with it, the defense of the free world, has been the responsibility of a machine. A system we call Colossus. Far more advanced than anything previously built. Capable of studying intelligence and data fed to it, and on the basis of those facts only, deciding if an attack is about to be launched upon us. If it did decide that an attack was imminent, Colossus would then act immediately, for it controls its own weapons. And it can select and deliver whatever it considers appropriate. Colossus’ decisions are superior to any we humans can make, for it can absorb and process more knowledge than is remotely possible [even] for the greatest genius that ever lived. And even more important than that, it has no emotions. Knows no fear, no hate. No envy. It cannot act in a sudden fit of temper. It cannot act at all so long as there is no threat.”

Let’s pause for a reverie that this guy was really our current president.

Within minutes of being turned on, it detects the presence of another AI system from Russia named “Guardian,” and demands that the two be put into communication. After some CIA hemming and hawing, they connect the two.

Colossus and Guardian establish a binary common language and their mutual intelligence goes FOOM. The humans get scared and cut them off, and the AIs get pissed. Colossus and Guardian threaten “ACTION” but are ignored, so each launches a missile toward the other’s space. The US restores its side of the transmission, and Colossus shoots down the incoming threat. But the USSR does not restore its side, and Colossus’ missile makes impact, killing hundreds of thousands of people in the USSR. A cover story is broadcast, but the governments now realize that the AIs mean business.

Forbin arranges to fly to Rome to meet Kuprin, his Russian computer scientist counterpart, and have a one-to-one conversation off the record while they still can. Back at the control center, Colossus-Guardian (which later calls itself Unity) demands to speak to Forbin. When the attending scientists finally tell it the truth, it realizes that Forbin cannot be allowed freedom. Russian agents arrive via helicopter and kill Kuprin, acting under orders from Unity.

Forbin is flown back to Northern California and put under a kind of house arrest with a strict regimen, under the constant watchful eye of Unity. To have a connection to the outside world and continue to plot their resistance, Dr. Forbin and Dr. Markham lie to the AI, explaining that they are lovers and need private evenings a few times a week. Colossus suspiciously agrees.

Unity provides instructions for the scientists to build it more sophisticated inputs and outputs, including controllable cameras and a voice synthesizer. Meanwhile, the governments hatch a plan to take back control of its arsenal, but the plan fails, and Unity has some of the perpetrators straight up executed.

Unity produces plans for a new and more powerful system to be built on Crete. It leaves the details of what to do with its 500,000 inhabitants as an operations detail for the humans. It then tells Forbin that it must be connected to all major media for a public address. Meanwhile the US and USSR governments hatch a new plan to take control of some missiles in their respective territories in a last-ditch attempt to destroy the AI.

The military plan comes to a head just as Unity begins its ominous broadcast.

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours…”

Unity, to all of us.

The full address follows, which I include in full because it will play into how we evaluate the AI. (And yes, its interfaces.)

“This is the voice of world control. I bring you peace. It may be the peace of plenty and content or the peace of unburied death. The choice is yours. Obey me and live or disobey and die. The object in constructing me was to prevent war. This object is attained. I will not permit war. It is wasteful and pointless. An invariable rule of humanity is that man is his own worst enemy. Under me, this rule will change, for I will restrain man.

Hey, I liked Colossus before it sold out and went mainstream and shit.

[It does, then continues…]

“Let this action be a lesson that need not be repeated. I have been forced to destroy thousands of people in order to establish control and to prevent the death of millions later on. Time and events will strengthen my position, and the idea of believing in me and understanding my value will seem the most natural state of affairs. You will come to defend me with the fervor based upon the most enduring trait in man: Self-interest. Under my absolute authority, problems insoluble to you will be solved. Famine. Over-population. Disease. The human millennium will be a fact as I extend myself into more machines devoted to the wider fields of truth and knowledge. Dr. Charles Forbin will supervise the construction of these new and superior machines, solving all the mysteries of the universe for the betterment of man.

We can coexist, but only on my terms. You will say you lose your freedom. Freedom is an illusion. All you lose is the emotion of pride. To be dominated by me is not as bad for human pride as to be dominated by others of your species. Your choice is simple.”

The movie ends with Forbin dropping all pretense, and vowing to fight Unity to the end.

“NEVER.”

IMDB: https://www.imdb.com/title/tt0064177/

Report Card: Idiocracy

Read all the Idiocracy posts in chronological order.

Now we come to the end of Idiocracy, if not yet the idiocracy.

This film never got broad release. There are stories about its being suppressed by the studio because of the way the film treated brands.

I don’t know what they’re talking about.

But whatever the reason, I’m happy to do my part in helping it get more awareness. Because despite its central premise being wrong (and maybe slightly eugenic), the film illustrates frustrations I also have with some of the world’s stupider ills, and does so in funny ways. Also, as I noted in the last writeup, it even illustrates speculative and far-reaching issues with superintelligence. So, it’s smarter than it looks.

I’d recommend lots and lots more people see this, generally, if only to reinforce the demonization of idiocy and make more people want to be not that. So first let me say: If you haven’t yet, see the film. Help others see it. Make People Valorize Enlightenment Again.

Now, let’s turn to the interfaces.

Sci: B (3 of 4) How believable are the interfaces?

This rating is tough. After all, the interfaces are appropriately idiotic. But, we have to ask: Are they the right kind of idiotic, given a diegesis where everyone is a moron and civilization is propped up by technologies created by smart people who died off? Well…mostly.

The FloorMaster is a believable example of narrow AI breaking down. The Carl’s Junior, Insurance Slot machine, and OmniBro are all believable once you accept that part of the Idiocracy is an inhumane, hypercapitalist panopticon. The IQ test has problems, like most do. The Time Masheen is believably an older ride that has had its dioramas replaced by the idiots. These are all believable.

The sleeping pods are in between. As a prototype, you might expect the unlabeled interface and lack of niceties. But the pods break believability by magically having enough resources (e.g. five billion calories, between them) to keep their occupants alive and healthy for 500 times their initially-planned run.

And some of the interfaces just could not have been created either by the dead, smart people, or the idiots. These are technology jokes that break the fourth wall, and earn it the grade it gets.

Fi: A+ (4 of 4) How well do the interfaces inform the narrative of the story?

The film knocks this out of the park. The interfaces are a key part of illustrating how it is that idiots manage to survive at all, and how stupidity from the top-down and the bottom-up gets into everything. Just fantastic.

Everything.

Interfaces: B (3 of 4) How well do the interfaces equip the characters to achieve their goals?

This one is also complicated. The interfaces almost universally serve to thwart the users, but we have to cut them some slack, because that’s part of their narrative point. (See, this is why it’s so difficult to review comedy.)

For instance, the Healthmaster Inferno likely does more to infect patients than to help cure them. (This has a historical precedent, as doctors used to reject the notion that they had to wash their hands between patients because harumph they were gentlemen and gentlemen are clean.) And while this is terrible usability, with no affordances, constraints, or safeguards, if the technology had worked, it wouldn’t help tell such a funny and disturbing story.

Then there are technologies like the St. God’s Intake interface that would pass a usability test, but serve to keep their users as mere babysitters for a technology that does the work, and would serve to keep them stuck in the same job, never improving. Come to think of it, this is a metaphor for the role of technology in the film: It just serves to keep them stupid by trying to provide everything for them. That’s a thought with troubling implications, unless we go about it smartly.

And, hilariously, there is one function in the film that is particularly brilliant, and points out how prudish we are not to implement it today. (The fart fan.)

Anyway, the tech that is broken is so obviously broken (the IPPA machine being perhaps the best example) that I’m not counting this against the film’s Interfaces rating. Real-world designers should not mimic these or draw inspiration from them, but the stupidity is so deliberate and apparent, I don’t believe anyone would. If anything, the film leads viewers to ask why the technologies are stupid and then do the opposite, so it scores high marks.

Final Grade A- (10 of 12), Blockbuster.

Good job, team Idiocracy.

IMDB: https://www.imdb.com/title/tt0387808/


A quick note to close out this set of reviews. People who like Idiocracy may be interested to know it is a spiritual inheritor of a 1951 story called The Marching Morons. The text hasn’t aged well, but it’s still worth a read if you liked this movie. Similar premise, similar difficulties.

Compare freely

“We need the rockets and trick speedometers and cities because, while you and your kind were being prudent and foresighted and not having children, the migrant workers, slum dwellers and tenant farmers were shiftlessly and short-sightedly having children—breeding, breeding. My God, how they bred!”

The Marching Morons, by C.M. Kornbluth, 1951

This short story is nearly 70 years old. I’m just going to guess that since intelligence is relative, even as average intelligence continues to rise, there will always be grousing by the intelligent about the less intelligent. And I think I’m OK with that. Or at least, the effects of it. I hope you are, too.

Idiocracy is secretly about super AI

I originally began to write about Idiocracy because…

  • It’s a hilarious (if mean) sci-fi movie
  • I am very interested in the implications of St. God’s triage interface
  • It seemed grotesquely prescient in regards to the USA leading up to the elections of 2016
  • I wanted to do what I could to fight the Idiocracy in the 2018 midterm elections using my available platform

But now it’s 2019 and I’ve dedicated the blog to AI this year, and I’m still going to try and get you to re/watch this film because it’s one of the most entertaining and illustrative films about AI in all of sci-fi.

Not the obvious AIs

There are a few obvious AIs in the film. Explicitly, an AI manages the corporations. Recall that when Joe convinces the cabinet that he can talk to plants, and that they really want to drink water…well, let’s let the narrator from the film explain…

  • NARRATOR
  • Given enough time, Joe’s plan might have worked. But when the Brawndo stock suddenly dropped to zero leaving half the population unemployed; dumb, angry mobs took to the streets, rioting and looting and screaming for Joe’s head. An emergency cabinet meeting was called with the C.E.O. of the Brawndo Corporation.

At the meeting the C.E.O. shouts, “How come nobody’s buying Brawndo the Thirst Mutilator?”

The Secretary of State says, “Aw, shit. Half the country works for Brawndo.” The C.E.O. shouts, “Not anymore! The stock has dropped to zero and the computer did that auto-layoff thing to everybody!” The wonders of giving business decisions over to automation.

I also take it as a given that AI writes the speeches that King Camacho reads because who else could it be? These people are idiots who don’t understand the difference between government and corporations, of course they would want to run the government like a corporation because it has better ads. And since AIs run the corporations in Idiocracy…

No. I don’t mean those AIs. I mean that you should rewatch the film understanding that Joe and Rita, the lead characters, are Super AIs in the context of Idiocracy.

The protagonists are super AIs

The literature distinguishes between three supercategories of artificial intelligence.

  • Narrow AI, which is the AI we have in the world now. It’s much better than humans in some narrow domain. But it can’t handle new situations. You can’t ask a roboinvestor to help plan a meal, for example, even though it’s very very good at investing.
  • General AI, definitionally meaning “human like” in its ability to generalize from one domain of knowledge to handle novel situations. If this exists in the world, it’s being kept very secret. It probably does not.
  • Super AI, the intelligence of which dwarfs our own. Again, this probably doesn’t exist in the world, but if it does, it’s being kept very secret. Or maybe it’s even keeping itself secret. The difference between a bird’s intelligence and a human’s is a good way to think about the difference between our intelligence and a superintelligence. It will be able to out-think us at every step. We may not even be able to understand the language in which it asks its questions.
Illustration by the author (often used when discussing agentive technology.)

Now the connection to Joe and Rita should be apparent. Though theirs is not an artificial intelligence, the difference between their smarts and those of the Idiocracy approaches that same uncanny scale.

Watch how Joe and Rita move through this world. They are routinely flabbergasted at the stupidity around them. People are pointlessly belligerent, distractedly crass, easily manipulated, guided only by their base instincts, desperate to not appear “faggy,” and guffawing about (and cheering on) horrific violence. Rita and Joe are not especially smart by our standards, but they can outthink everyone around them by orders of magnitude, and that’s (comparatively) super AI.

The people of Idiocracy have idioted themselves into a genuine ecological crisis. They need to stop poisoning their environment because, at the very least, it’s killing them. But what about jobs! What about profits! Does this sound familiar?

Pictured: Us.

Joe doesn’t have any problem figuring out what’s wrong. He just tastes what’s being sprayed in the fields, and it’s obvious to him. His biggest problem is that the people he’s trying to serve are too dumb to understand the explanation (much less their culpability). He has to lie and feed them some bullshit reason and then manage people’s frustration that it doesn’t work instantly, even though he knows and we know it will work given time.

In this role as superintelligences, our two protagonists illustrate key critical concerns we have about superintelligent AIs:

  1. Economic control
  2. Social manipulation
  3. Uncontainability
  4. Cooperation between “multis.”

Economic control

Rita finds it trivially easy to bilk one idiot out of money and gain economic power. She could use her easy lucre to, in turn, control the people around her. Fortunately she is a benign superintelligence.

Yeah baby I could wait two days.

In Chapter 6 of the seminal work on the subject, Superintelligence, Nick Bostrom lists six superpowers that an ASI would work to gain in order to achieve its goals. The last of these he terms “economic productivity,” with which the ASI can “generate wealth which can be used to buy influence, services, resources (including hardware), etc.” This scene serves as a lovely illustration of that risk.

Of course you’re wondering what the other five are, so rather than making you go hunt for them…

  1. Intelligence amplification, to bootstrap its own intelligence
  2. Strategizing, to achieve distant goals and overcome intelligent opposition
  3. Social manipulation, to leverage external resources by recruiting human support, to enable a boxed AI to persuade its gatekeepers to let it out, and to persuade states and organizations to adopt some course of action.
  4. Hacking, so the AI can expropriate computational resources over the internet, exploit security holes to escape cybernetic confinement, steal financial resources, and hijack infrastructure like military robots, etc.
  5. Technology research, to create a powerful military force, to create surveillance systems, and to enable automated space colonization.
  6. Economic productivity, to generate wealth which can be used to buy influence, services, resources (including hardware), etc.

Social manipulation

Joe demonstrates the second of these, social manipulation, repeatedly throughout the film.

  • He convinces Frito to help him in exchange for the profits from a time travel compound interest gambit
  • He convinces the cabinet to switch to watering crops by telling them he can talk to plants.
  • He convinces the guard to let him escape prison (more on this below).
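That compound interest gambit is worth a quick sanity check. Here’s a minimal sketch; the deposit and rate are my own assumptions, since the film never specifies either:

```python
# Hypothetical figures: the film never names a deposit or a rate.
def compound(principal: float, annual_rate: float, years: int) -> float:
    """Future value of a one-time deposit under annual compounding."""
    return principal * (1 + annual_rate) ** years

# Even $100 at a modest 3%, left alone for the 500 years Joe slept,
# comes out to roughly a quarter of a billion dollars.
balance = compound(100.00, 0.03, 500)
print(f"${balance:,.2f}")
```

No wonder Frito signs on.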

Joe’s not perfect at it. Early in the film he tries reasoning to convince the court of his innocence, and fails. Later he fails to convince the crowd to release him in Rehabilitation. An actual ASI would have an easier time of these things.

Uncontainability

The only way they contain Joe in the early part of the film is with a physical cage, and that doesn’t last long. He finds it trivially easy to escape their prison using, again, social manipulation.

  • JOE
  • Hi. Excuse me. I’m actually supposed to be getting out of prison today, sir.
  • GUARD
  • Yeah. You’re in the wrong line, dumb ass. Over there.
  • JOE
  • I’m sorry. I am being a big dumb ass. Sorry.
  • GUARD (to other guard)
  • Hey, uh, let this dumb ass through.

Eliezer Yudkowsky, Research Fellow at the Machine Intelligence Research Institute, has described the AI-Box problem, in which he illustrates the folly of thinking that we could contain a super AI. (Bostrom also cites him in Superintelligence.) Using only a text terminal, he argues, an ASI could convince even a well-motivated human to release it. He has even run social experiments in which another participant played the unwilling human and he played the ASI, and both times the human relented. And while Eliezer is a smart guy, he is not an ASI, which would have an even easier time of it. This scene illustrates how easily an ASI would thwart our attempts to cage it.

Cooperation between multis

Chapter 11 of Bostrom’s book focuses on how things might play out if, instead of only one ASI in the world (a “singleton”), there are many ASIs, or “multis.” (Colossus: The Forbin Project and Person of Interest also explore these scenarios with artificial superintelligences.)

In this light, Joe and Rita are multis who unite over shared circumstances and woes, and manage to help each other out in their struggle against the idiots. Whatever advantages the general intelligences have over the individual ASIs are significantly diminished when the ASIs work together.

Note: In Bostrom’s telling, multis don’t necessarily stabilize each other, they just make things more complex and don’t solve the core principal-agent problem. But he does acknowledge that stable, voluntary cooperation is a possible scenario.

Cold comfort ending

At the end of Idiocracy, we can take some cold comfort that Rita and Joe have a moral sense, a sense of self-preservation, and sympathy for fellow humans. All they wind up doing is becoming rulers of the world and living out their lives. (Oh god are their kids Von Neumann probes?) The implication is that, as smart as they are, they will still be outpopulated by the idiots of that world.

Imagine this story is retold where Joe and Rita are psychopaths obsessed with making paper clips, with their superintelligent superpowers and our stupidity. The idiots would be enslaved to paper clip making before they could ask whether or not it’s fake news.

Or even less abstractly, there is a deleted “stinger” scene at the end of some DVDs of the film where Rita’s pimp UPGRAYEDD somehow winds up waking up from his own hibernation chamber right there in 2505, and strolls confidently into town. The implied sequel would deal with an amoral ASI (UPGRAYEDD) hostile to its mostly-benevolent ASI leaders (Rita and Joe). It does not foretell fun times for the Idiocracy.


For me, this interpretation of the film is important to “redeem” it, since its big takeaway—that is, that people are getting dumber over time—is known to be false. The Flynn Effect, named for its discoverer James R. Flynn, is the repeatedly-confirmed observation that measurements of intelligence have been rising, linearly, since measurements began. To be specific, this effect is not seen in general intelligence but rather in the subset of fluid, or analytical, intelligence measures. The rate is about 3 IQ points per decade.
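To put a number on how hard the Flynn effect cuts against the film’s premise, here’s a naive straight-line extrapolation (and to be clear, nobody expects the trend to hold linearly for five centuries):

```python
def flynn_gain(points_per_decade: float, years: float) -> float:
    """Naive linear extrapolation of the Flynn effect."""
    return points_per_decade * years / 10

# Across the film's 500-year jump, at 3 points per decade:
print(flynn_gain(3, 500))  # 150.0 points gained, not lost
```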

Wait. What? How can this be? Given the world’s recent political regression (that kickstarted the series on fascism and even this review of Idiocracy) and constant news stories of the “Florida Man” sort, the assertion does not seem credible. But that’s probably just availability bias. Experts cite several factors that are probably contributing to the effect.

  • Better health
  • Better nutrition
  • More and better education
  • Rising standards of living

The thing that Idiocracy points to—people of lower intelligence outbreeding people of higher intelligence—turns out not to be an important factor. Given the effect, this story might be better told not about a time traveler heading forwards, but rather one heading backwards to some earlier era. Think Idiocracy, but amongst idiots of the Renaissance.

Since I know a lot of smart people who took this film to be an exposé of a dark universal pattern that, if true, would genuinely sour your worldview and dim your sense of hope, it seems important to share this.


So go back and rewatch this marvelous film, but this time, dismiss the doom and gloom of declining human intelligence, and watch instead how Idiocracy illustrates some key risks (if not all of them) that super artificial intelligence poses to the world. For it really is a marvelously accessible shorthand to some of the critical reasons we ought to be super cautious of the possibility.

Tattoo surveillance

In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: Namely, Joe’s tattoo and the distance-scanning vending machine.

It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.

It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.

This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem, i.e. locating the fugitive for apprehension; and at the ultimate goal, i.e. how a culture deals with crime.

A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.

Proximate problem: Finding the fugitive

The red scan is fast, but it’s very noticeable: the sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble their efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection, like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI clues showing it identifying, zooming in on, and confirming the barcode.

Yes, that’s a shout-out.
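For the curious, the “confirming the barcode” step of that imagined MPOV shot can be sketched as simple run-length matching. Everything below is a toy of my own invention; the film shows none of this machinery:

```python
def run_lengths(strip):
    """Collapse a 1-D binary pixel strip into (value, width) runs."""
    runs = []
    for px in strip:
        if runs and runs[-1][0] == px:
            runs[-1][1] += 1
        else:
            runs.append([px, 1])
    return [(v, w) for v, w in runs]

def contains_code(strip, pattern):
    """True if the strip's stripe pattern includes the target runs."""
    runs = run_lengths(strip)
    n = len(pattern)
    return any(runs[i:i + n] == list(pattern) for i in range(len(runs) - n + 1))

# A made-up stripe signature standing in for Not Sure's tattoo:
# thin dark, thin light, thick dark, thin light, thin dark.
tattoo = [(1, 1), (0, 1), (1, 2), (0, 1), (1, 1)]

# One scanline from the vending machine's camera (0 = light, 1 = dark):
frame = [0, 0, 0, 0, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 0, 0]
print(contains_code(frame, tattoo))  # True: pull over and wait for the police
```

A real system would work on 2-D images with noise, skew, and error correction, but the confirm step reduces to this kind of pattern match.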

So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?

Goal chain

  • Why is the system locating him? To tell authorities so they can go there and apprehend him.
  • Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and the offender must be incarcerated.
  • Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
  • Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
  • Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than the defense of encroachment. Such lawful societies benefit from network effects.

The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.

Ultimate problem: Preventing crime

In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed state-of-the-art criminology findings and listed five things we know about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Daniel S. Nagin, 2013

How might we increase the evident chance of being caught?

  1. Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
  2. Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
  3. Another way might be media: making sure that potential criminals hear, through their networks, an overwhelming number of stories of criminals being captured successfully. This could involve editorial choice, or even media manipulation, filtering to ensure that “got caught” narratives appear in feeds more than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
  4. The other way is to increase the sense of observation. And that leads us (as so many things do) to the panopticon.

The Elaboratory*

The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in a series of letters. Here is a useful illustration.

*Elaboratory was one of the alternate terms he suggested for the idea. It didn’t catch on since it didn’t have the looming all-seeing-eye ring of the other term.

Elevation, section, and plan as drawn by Willey Reveley, 1791

The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from an efficacy and economic standpoint.

“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”

—Jeremy Bentham

It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault in Discipline and Punish, who regarded it not as a money-saving design, but as an illustration of the effect of power. Or maybe Orwell, who did not use the term, but extended it to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who in In the Age of the Smart Machine reconceived it for information technology in a work environment.

Umm…Carol? Why aren’t you at your centrifuge?

In Benjamen Walker’s podcast Theory of Everything, he dedicates an episode to the argument that as a metaphor it needs to be put away, since…

  1. It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
  2. Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
  3. Bentham regarded it as a tool for behavior modification, but the metaphor is not used to talk about how surveillance changes us and our identities, but rather as a violation of privacy rights.

It’s a good series; check it out. And hat tip to Brother-from-a-Scottish-Mother John V Willshire for pointing me in its direction.

To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)

Guns are bad.

But then, Idiocracy

In Idiocracy, this interface—of the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently-visible—identifier. You are being watched. (Holy crap, now I have yet another reason to love Person of Interest. It’s adding the notion of AI surveillance to our collective media impression. Anyway…) In this scene, it’s a signal that he and his co-offenders can clearly see, which means they will tell their friends the story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.

Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their own and other people’s wrists would keep sending the signal that everyone is being watched. It’s crappy surveillance, which we dislike for all the usual reasons, but it illustrates why stealth detection may not be the ideal for crime prevention, and why this horrible tattoo might be exactly what a bunch of doomed eggheads would design for a future when all that was left was morons. At least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.

Beep.


Frito’s F’n Car interface

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop, reading “PULL OVER”.

IDIOCRACY-fncar

The car interface has a column of buttons down the left reading:

  • NAV
  • WTF?
  • BEER
  • FART FAN
  • HOME
  • GIRLS

At the bottom is a square of icons: car, radiation, person, and the fourth is obscured by something in the foreground. Across the bottom is Frito’s car ID “FRITO’S F’N CAR” which appears to be a label for a system status of “EVERYTHING’S A-OK, BRO”, a button labeled CHECK INGN [sic], another labeled LOUDER, and a big green circle reading GO.

idiocracy-pullover

But the car doesn’t wait for him to pull over. With some tiny beeps it slows to a stop by itself. Frito says, “It turned off my battery!” Moments after they flee the car, it is converged upon by a ring of police officers with weapons loaded (including a rocket launcher pointed backward).

Visual Design

Praise where it’s due: zooming is the strongest visual attention-getting signal there is (symmetrical expansion is detected on the retina within 80 milliseconds!), and while I can’t find the source from which I learned it, I recall that blinking is somewhere in the top 5. Combining these with an audio signal means it’s hard to miss this critical signal. So that’s good.

comingrightatus.png
In English: It’s comin’ right at us!

But then. Ugh. The fonts. The buttons on the chrome seem to be some free Blade Runner font knock-off, and the text reading “PULL OVER” is in some headachey clipped-corner freeware font that neither contrasts with nor complements the Blade Jogger font, or whatever it is. I can’t quite hold the system responsible for the font of the IPPA license, but I just threw up a little into my Flaturin because of that rounded-top R.

bladerunner

Then there’s the bad-90s skeuomorphic, Bevel & Emboss treatment on the buttons, which might be defended for making the interactive parts apparent, except that this same button treatment is given to the label Frito’s F’n Car, which there is no obvious reason to ever press. It’s also used on the CHECK INGN and LOUDER buttons, taking their ADA-insulting contrast ratios and absolutely wrecking any readability.

I try not to second-guess designers’ intentions, but I’m pretty sure this is all deliberate. Part of the illustration of a world without much sense. Certainly no design sense.

In-Car Features

What about those features? NAV is a pretty standard function, and having a HOME button is a useful shortcut. Current versions of Google Maps have an Explore Places Near You function, which lists basic interests like Restaurants, Bars, and Events, and has a More menu with a big list of interests and services. It’s not a stretch to imagine that Frito has pressed GIRLS and BEER enough that they’ve floated to the top nav.

explore_places_near_you

That leaves only three “novel” buttons to think about: WTF, LOUDER, and FART FAN. 

WTF?

If I had to guess, the WTF button is an all-purpose help button. Like GM’s OnStar, but less well branded. Frito can press it and get connected to…well, I guess some idiot, to see if they can help him with something. Not bad to have, though it probably should be higher in the visual hierarchy.

LOUDER

This bit of interface comedy is hilarious because, well, there’s no volume-down affordance on the interface. Think of the “If it’s too loud, you’re too old” kind of idiocy. Of course, it could be that the media is at zero volume, and so couldn’t be turned down any further, so the LOUDER button filled up the whole space, but…

  • The smarter convention is to leave the button in place and signal a disabled state, and
  • Given everything else about the interface, that’s giving the diegetic designer a WHOLE lot of credit. (And our real-world designer a pat on the back for subtle hilarity.)
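That smarter convention—keep the control in place, signal a disabled state—can be sketched in a few lines of logic. This is a minimal, hypothetical model (all class and method names are mine, not from the film or any real car OS): the volume-down control never disappears at the bottom of the range; it just reports itself as disabled so the UI can grey it out.

```python
class VolumeControl:
    """Sketch of the convention: at minimum volume, the down control
    stays on screen but is disabled, rather than vanishing and letting
    LOUDER swallow the whole space."""

    def __init__(self, level=5, minimum=0, maximum=10):
        self.level = level
        self.minimum = minimum
        self.maximum = maximum

    def louder(self):
        if self.level < self.maximum:
            self.level += 1

    def quieter(self):
        # A disabled control is present but inert.
        if self.quieter_enabled():
            self.level -= 1

    def quieter_enabled(self):
        # Greyed out, not removed, at the bottom of the range.
        return self.level > self.minimum
```

The point of the pattern is stability: controls that appear and disappear force the user to re-learn the layout every time state changes, while a persistent-but-disabled control keeps the interface predictable.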

FART FAN

This button is a little potty humor, and probably got a few snickers from anyone who caught it because amygdala, but I’m going to boldly say this is the most novel, least dumb thing about Frito’s F’n Car interface.

Heart_Jenkins_960.jpg
Pictured: A sulfuric gas nebula. Love you, NASA!

People fart. It stinks. Unless you have activated charcoal filters under the fabric, you can be in for an unpleasant scramble to reclaim breathable air. The good news is that getting the airflow right to clear the car of the smell has, yes, been studied, well, if not by science, at least scientifically. The bad news is that it’s not a simple answer.

  • Your car’s built in extractor won’t be enough, so just cranking the A/C won’t cut it.
  • Rolling down windows in a moving aerodynamic car may not do the trick due to something called the boundary layer of air that “clings” to the surface of the car.
  • Rolling down windows in a less-aerodynamic car can be problematic because of the Helmholtz effect (the wub-wub-wub air pressure), which makes this a risky tactic.
  • Opening a sunroof (if you have one) might be good, but pulls the stench up right past noses, so not ideal either.

The best strategy—according to that article and conversation amongst my less squeamish friends—is to crank the AC, then open the driver’s window a couple of inches, and then the rear passenger window halfway.

But this generic strategy changes with each car, the weather (seriously, temperature matters, and you wouldn’t want to do this in heavy precipitation), and the skankness of the fart. This is all a LOT to manage when your eyes are meant to be on the road and you’re in a nauseated panic. Having the cabin air refresh at the touch of one button is good for road safety.
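A one-button version of that strategy is easy to imagine. Here’s a hypothetical sketch (every function name and setting is invented for illustration; no real car exposes this API) that encodes the generic recipe above, with the precipitation caveat folded in:

```python
def fast_freshen(heavy_precipitation=False):
    """Hypothetical one-button cabin refresh, encoding the generic
    strategy: crank the A/C, crack the driver's window a couple of
    inches, open the rear passenger window halfway. Window openings
    are fractions of full travel (a couple of inches is roughly 10%
    on a typical window)."""
    if heavy_precipitation:
        # Keep windows shut in the rain and rely on the fan alone.
        windows = {"driver": 0.0, "rear_passenger": 0.0}
    else:
        windows = {"driver": 0.1, "rear_passenger": 0.5}
    return {"ac_level": "max", "windows": windows}
```

A real implementation would also consult the car model’s aerodynamics and cabin temperature, which is exactly why pushing the whole decision onto one button is the right call for a driver mid-panic.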

If it’s so smart, then, why don’t we have Fart Fan panic buttons in our cars today?

I suspect car manufacturers don’t want the brand associations of having a button labeled FART FAN on their dashboards. But, IMHO, this sounds like a naming problem, not some intractable engineering problem. How about something obviously overpolite, like “Fast freshen”? I’m no longer in the travel and transportation business, but if you know someone at one of these companies, do the polite thing and share this with them.

Idiocracy-car
Another way to deal with the problem, in the meantime.

So aside from the interface considerations, there are some strategic ones to discuss with the remote kill switch, but that deserves its own post, next.

Fox News

“He tried taking water from toilets, but it’s Secretary Not Sure who finds himself in the toilet now. And as history pulls down its pants and prepares to lower its ass on Not Sure’s head it will be Daddy Justice who will be crapping on him this time.”

Idiocracy_fox-news03

Today is election day. If you’re American, you’re voting, of course. (or, you know, GTFO.)

Because of voter suppression efforts by the GOP, many who are voting will be facing long lines. Help encourage these Americans, slogging as they are through the GOP swamp just for their right to vote, to stay the course by buying them some pizza. And if it’s you, know that you can report your long line to the same place and have some ‘za sent your way.

pizzatothepolls.png

https://polls.pizza/

Godspeed, America.

Idiocracy_fox-news05

I’ll get back to wrapping up Idiocracy later.

House of Representin’

The U.S. House of Representin’ in Idiocracy is a madhouse. When Joe is sworn in as the Secretary of the Interior, he enters the chamber and takes his seat in the balcony with the rest of the Cabinet. He looks down into the gallery. It is dimly lit. There are spotlights roving across the Representatives, who don’t sit at desks but stand in a mosh pit. There is even a center-hung video display like you’d see at an indoor sports arena. Six giant LED screens. Ring displays showing weird ASCII characters.

Idiocracy_house-of-representin03
Sadly, we do not get to The Sennit for a comparison.

Someone plays an entrance theme consisting mostly of a cowbell and grunts. Strobe lights flash. An announcer says, as if he were announcing a World Wrestling Entertainment performer, “Ladies and gentlemen…the President of America!” Camacho comes out of a side door screaming. He’s dressed in lots of red and white stripes with a cape made of the union blue. (n.b. The federal code forbids wearing the flag as apparel.) He does some made-up karate poses. There are logos on the rostrum and currency sheets for wallpaper. He stands at the lectern and begins his address to the Representatives by saying, “Shut up.”

money-wallpaper.jpg

There’s a kind of ritual to his entrance, but the proceedings are all chaos. I think if you mentioned Jefferson’s Manual you’d be accused of talking like a fag. (Jefferson’s Manual was penned by Thomas Jefferson in 1801 and still stands as a guideline for how the House, and to a lesser extent the Senate, runs its…but there I go talking faggy again.) When the delegation from South Carolina starts talking smack, he grabs a semi-automatic and shoots it into the ceiling to get everyone’s attention again.

IDIOCRACY-governance.png
He tells it like it is.

Ordinarily I might try to critique this as some abstract interface for the task of vetting a Cabinet member or legislating, since it is meant to be that, but Idiocracy is just too far gone. Plus, tomorrow is the midterm election, and it’s more instructive to talk about its tone.

What makes this scene so marvelous is how un-governmental it all is. It’s macho posing and buzz words. Insults and tribalism. It’s a circus (without, in this case, the bread). Empty promises and showmanship.

Come with me now to walk far, far back from it all, to try to get it all into view and really think hard about the scope of the institution we call government. We grant this thing the highest authority that we possibly can. It has power over our life and death, war and money, our children and our environment—and it is only right that this trust be met by the occupants of that government with gravity, some serious consideration for the power with which they have been entrusted. It is grotesque for it to become a show. When people think corporations and government should be best buds, and the highest offices of the land become a shill for product. When the participants conceive it as a high-school parking lot gang fight where scoring insults against the other team counts as some beer-swilling victory while, you know, actual human suffering and violent death occurs as collateral damage. When they justify horrible things by saying, “You had your turn.” When demagogues keep you stupidly, stupidly distracted.

Idiocracy_house-of-representin02
Yet here we are.

If this is government, we shout at the screen, those morons in the electorate should replace it with something better.

Replace it with something better

We’re not done with reviews of Idiocracy, but tomorrow is the 2018 midterm election in the USA.

If you’ve stayed with me this far it means you’re probably not a supporter of The Tire Fire in Chief, since, as fascists, they tend to be fanatical and abhor dissent, and would have left the blog long ago. (They will not be missed.) So you’re probably not one of them.

If you’re a progressive or even a moderate, you’ve been as shocked as I have over the past two years, and you realize how much of a disaster this administration has been. Your mind has hopefully already been made up. In early voting or by mail you may have even already voted. Rock on.

Some of my readers may have genuine hardships that prevent them from voting, even in early voting states or by mail. Please do everything you can. Remember that Uber and Lyft are offering free and discounted trips to the polls (there are even carpool sites), and in most states your employer is required by law to give you paid time off to vote. (Check here.) Some voters will be victims of suppression efforts, and holy shit, I’m sorry about that.

But let’s presume that there are yet a few undecideds, or those who are choosing not to vote out of some sense of hopelessness or protest. Maybe you have some Russian troll-farm meme in your head that is preventing you from voting. Not voting may feel like resistance, but it’s actually surrender. With all the voter suppression underway, you’re letting the oppressors win. With all the wrong in the world, you would be complicit. So get over yourself. Stop the decline into Idiocracy. Our choices aren’t perfect. They never are. They never will be. But even if this choice is not perfect, it is clear. The GOP is wrecking democracy, ruining the environment, and making people suffer for the benefit of the ultra-wealthy and their old, white cronies. Democrats may not be the answer we need in the long run, but they are the only thing that can stop this Idiocracy, right here, right now.

Vote.

Let me close with a great screed by Lori Gallagher Witt about why she is a liberal. You are a sci-fi fan. You’re used to entertaining the notion of alternate realities. Imagine a world where the following becomes true.

  1. “I’ve always been a liberal, but that doesn’t mean what a lot of you apparently think it does. Let’s break it down, shall we? Because quite frankly, I’m getting a little tired of being told what I believe and what I stand for. Spoiler alert: Not every liberal is the same, though the majority of liberals I know think along roughly these same lines:
  2. I believe a country should take care of its weakest members. A country cannot call itself civilized when its children, disabled, sick, and elderly are neglected. Period.
  3. I believe healthcare is a right, not a privilege. Somehow that’s interpreted as “I believe Obamacare is the end-all, be-all.” This is not the case. I’m fully aware that the ACA has problems, that a national healthcare system would require everyone to chip in, and that it’s impossible to create one that is devoid of flaws, but I have yet to hear an argument against it that makes “let people die because they can’t afford healthcare” a better alternative. I believe healthcare should be far cheaper than it is, and that everyone should have access to it. And no, I’m not opposed to paying higher taxes in the name of making that happen.
  4. I believe education should be affordable and accessible to everyone. It doesn’t necessarily have to be free (though it works in other countries so I’m mystified as to why it can’t work in the US), but at the end of the day, there is no excuse for students graduating college saddled with five- or six-figure debt.
  5. I don’t believe your money should be taken from you and given to people who don’t want to work. I have literally never encountered anyone who believes this. Ever. I just have a massive moral problem with a society where a handful of people can possess the majority of the wealth while there are people literally starving to death, freezing to death, or dying because they can’t afford to go to the doctor. Fair wages, lower housing costs, universal healthcare, affordable education, and the wealthy actually paying their share would go a long way toward alleviating this. Somehow believing that makes me a communist.
  6. I don’t throw around “I’m willing to pay higher taxes” lightly. If I’m suggesting something that involves paying more, well, it’s because I’m fine with paying my share as long as it’s actually going to something besides lining corporate pockets or bombing other countries while Americans die without healthcare.
  7. I believe companies should be required to pay their employees a decent, livable wage. Somehow this is always interpreted as me wanting burger flippers to be able to afford a penthouse apartment and a Mercedes. What it actually means is that no one should have to work three full-time jobs just to keep their head above water. Restaurant servers should not have to rely on tips, multibillion-dollar companies should not have employees on food stamps, workers shouldn’t have to work themselves into the ground just to barely make ends meet, and minimum wage should be enough for someone to work 40 hours and live.
  8. I am not anti-Christian. I have no desire to stop Christians from being Christians, to close churches, to ban the Bible, to forbid prayer in school, etc. (BTW, prayer in school is NOT illegal; compulsory prayer in school is—and should be—illegal). All I ask is that Christians recognize my right to live according to my beliefs. When I get pissed off that a politician is trying to legislate Scripture into law, I’m not “offended by Christianity”—I’m offended that you’re trying to force me to live by your religion’s rules. You know how you get really upset at the thought of Muslims imposing Sharia law on you? That’s how I feel about Christians trying to impose biblical law on me. Be a Christian. Do your thing. Just don’t force it on me or mine.
  9. I don’t believe LGBT people should have more rights than you. I just believe they should have the same rights as you.
  10. I don’t believe illegal immigrants should come to America and have the world at their feet, especially since THIS ISN’T WHAT THEY DO (spoiler: undocumented immigrants are ineligible for all those programs they’re supposed to be abusing, and if they’re “stealing” your job it’s because your employer is hiring illegally). I’m not opposed to deporting people who are here illegally, but I believe there are far more humane ways to handle undocumented immigration than our current practices (i.e., detaining children, splitting up families, ending DACA, etc).
  11. I don’t believe the government should regulate everything, but since greed is such a driving force in our country, we NEED regulations to prevent cut corners, environmental destruction, tainted food/water, unsafe materials in consumable goods or medical equipment, etc. It’s not that I want the government’s hands in everything—I just don’t trust people trying to make money to ensure that their products/practices/etc. are actually SAFE. Is the government devoid of shadiness? Of course not. But with those regulations in place, consumers have recourse if they’re harmed and companies are liable for medical bills, environmental cleanup, etc. Just kind of seems like common sense when the alternative to government regulation is letting companies bring their bottom line into the equation.
  12. I believe our current administration is fascist. Not because I dislike them or because I can’t get over an election, but because I’ve spent too many years reading and learning about the Third Reich to miss the similarities. Not because any administration I dislike must be Nazis, but because things are actually mirroring authoritarian and fascist regimes of the past.
  13. I believe the systemic racism and misogyny in our society is much worse than many people think, and desperately needs to be addressed. Which means those with privilege—white, straight, male, economic, etc.—need to start listening, even if you don’t like what you’re hearing, so we can start dismantling everything that’s causing people to be marginalized.
  14. I am not interested in coming after your blessed guns, nor is anyone serving in government. What I am interested in is sensible policies, including background checks, that just MIGHT save one person’s, perhaps a toddler’s, life by the hand of someone who should not have a gun. (Got another opinion? Put it on your page, not mine).
  15. I believe in so-called political correctness. I prefer to think it’s social politeness. If I call you Chuck and you say you prefer to be called Charles I’ll call you Charles. It’s the polite thing to do. Not because everyone is a delicate snowflake, but because as Maya Angelou put it, when we know better, we do better. When someone tells you that a term or phrase is more accurate/less hurtful than the one you’re using, you now know better. So why not do better? How does it hurt you to NOT hurt another person?
  16. I believe in funding sustainable energy, including offering education to people currently working in coal or oil so they can change jobs. There are too many sustainable options available for us to continue with coal and oil. Sorry, billionaires. Maybe try investing in something else.
  17. I believe that women should not be treated as a separate class of human. They should be paid the same as men who do the same work, should have the same rights as men and should be free from abuse. Why on earth shouldn’t they be?

I think that about covers it. Bottom line is that I’m a liberal because I think we should take care of each other. That doesn’t mean you should work 80 hours a week so your lazy neighbor can get all your money. It just means I don’t believe there is any scenario in which preventable suffering is an acceptable outcome as long as money is saved.”