Deckard’s Elevator

This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.

When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.

A need for speed

An aside: To make 97 floors in 20 seconds, you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in the Guangzhou CTF Finance Centre reach up to 45 miles per hour. But including acceleration and deceleration adds to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to their 95th floor. If 97 is Deckard’s floor, his elevator has to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
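For the curious, that math can be sketched in a few lines of Python. The floor height is my assumption (the film never specifies one), and the motion profile is the simplest possible: accelerate for the first half of the ride, decelerate for the second.

```python
# Back-of-the-envelope check of the elevator's speed. Floor height
# is an assumption; tall-building floors run roughly 3-4.5 m.
FLOOR_HEIGHT_M = 4.3
FLOORS_TRAVELED = 96      # floor 1 up to floor 97
RIDE_TIME_S = 20.0

distance_m = FLOOR_HEIGHT_M * FLOORS_TRAVELED   # ~413 m
avg_speed_ms = distance_m / RIDE_TIME_S         # ~20.6 m/s
avg_speed_mph = avg_speed_ms * 2.23694          # ~46 mph

# Simplest profile: accelerate for half the ride, decelerate for the
# other half. Peak speed is twice the average, and the required
# acceleration follows from v = a * t.
peak_speed_ms = 2 * avg_speed_ms
accel_ms2 = peak_speed_ms / (RIDE_TIME_S / 2)
accel_g = accel_ms2 / 9.81

print(f"{avg_speed_mph:.0f} mph average, {accel_g:.2f} g to hit it")
```

Even under these assumptions, the cab is pulling a sustained near-half-g for the whole ride, roughly what you feel during an airliner’s takeoff roll, and sleepy, wall-leaning Deckard shows no sign of it.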

The input control is OK

The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, it would have been an overwhelming UI to present the rider of this Blade Runner complex with 100 floor buttons plus the usual open door, close door, and emergency alert buttons. A panel that allows combinatorial inputs reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error handling. Such systems need a “commit” control that lets riders review, edit, and confirm the sequence, to distinguish, say, “97” from “9” followed by “7.” Not such an issue from the 1st floor, but a frustration from 10–96. It’s not clear those controls are part of this input.

Deckard enters 8675309, just to see what will happen.
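A minimal sketch of such a commit control, in Python. Everything here (the class, the floor limit, the method names) is hypothetical; it just shows how buffering digits until an explicit commit distinguishes “97” from “9” then “7”.

```python
class FloorKeypad:
    """Collects digit presses and only dispatches on an explicit commit,
    so riders can review and edit the sequence before the cab moves."""

    MAX_FLOOR = 100  # assumption: the building has about 100 floors

    def __init__(self):
        self.buffer = ""

    def press_digit(self, digit: str) -> str:
        self.buffer += digit
        return self.buffer        # echoed on the display for review

    def clear(self) -> None:
        self.buffer = ""          # the edit affordance

    def commit(self) -> int:
        """The 'go' button: validate, dispatch, and reset."""
        if not self.buffer:
            raise ValueError("no floor entered")
        floor = int(self.buffer)
        self.buffer = ""
        if not 1 <= floor <= self.MAX_FLOOR:
            raise ValueError(f"no floor {floor} in this building")
        return floor

pad = FloorKeypad()
pad.press_digit("9")
pad.press_digit("7")
print(pad.commit())  # 97
```

Entering 8675309 would simply fail validation at commit time instead of sending the cab anywhere.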

I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one elevator in his building, and in-elevator controls work fine in that situation, even if they slow things down a bit.

The feedback is OK

The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompanies the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, for the visually impaired, it would be nice if the voice system said the floor number when the door opens.

Though the projection is dumb

I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?

Pictured: Sleepy Deckard. Dumb projection.

Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.

If this was diegetic, the scene would have ended with a shredded projector.

But really, it falls apart on the interaction details

Lastly, this interaction. First, let’s give it credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.

Where’s the wake word?

But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.

There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even a weight sensor could infer that a human is present and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when the system was listening, and some way for them to regain its attention if the first interaction went wrong, and these implicit triggers afford neither. So really, an explicit wake word is the right way to go.

It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits in with the events in the scene, anyway. The problem with that is the redundancy. (See below.) So if the solution was pressing a button, it should just be a “talk” button rather than a numeric keypad.
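If touch-to-talk were the design, the core logic is just a timed listening window. This Python sketch is entirely hypothetical; the film specifies no window length, so the 5 seconds is my guess.

```python
class TouchToTalk:
    """Touching the panel opens a short listening window;
    speech outside the window is ignored."""

    WINDOW_S = 5.0  # assumed window length

    def __init__(self):
        self.listen_until = -1.0

    def touch(self, now: float) -> None:
        self.listen_until = now + self.WINDOW_S

    def is_listening(self, now: float) -> bool:
        return now < self.listen_until

panel = TouchToTalk()
assert not panel.is_listening(now=0.0)   # not always-on: yawns are safe
panel.touch(now=0.0)                     # Deckard presses the panel
assert panel.is_listening(now=3.0)       # "Deckard. 97."
assert not panel.is_listening(now=6.0)   # window closed again
```

The window doubles as error handling: if the system mishears, a second touch reopens it.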

It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator lest everyone end up stuck in the basement, but this seems very error-prone and unlikely.

Deckard: *Yawns* Elevator: Confirmed. Silent alarm triggered.

This issue is similar to one discussed in Make It So Chapter 5, “Gestural Interfaces”: how a user tells a computer that they are communicating with it via gesture, and when they aren’t.

Where are the paralinguistics?

Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)

Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
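In logic terms that fix is almost trivial. A sketch, with illustrative values of my own choosing; the brightness floor keeps the LED visibly lit even during silence, so the rider knows the mic is open:

```python
def led_brightness(amplitude: float, listening: bool) -> float:
    """Map microphone amplitude (0.0 to 1.0) to LED brightness.
    A floor of 0.2 keeps the LED on while the mic is open, even
    in silence; brightness then tracks the speaker's volume."""
    if not listening:
        return 0.0
    clamped = max(0.0, min(1.0, amplitude))
    return 0.2 + 0.8 * clamped
```

Add a short rising chime when listening starts and a falling one when it stops, and the paralinguistics are covered.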

Why the redundancy?

Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.

It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator would understand a correction mid-ride like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.

It’s very nice to have the discrete input as accessibility for people who cannot speak, or who have an accent that is unrecognizable to the system, or as a graceful degradation in case the speech recognition fails, but Deckard doesn’t fit any of these cases. He would just enter and speak his floor.

Why the personally identifiable information?

If we were designing a system and we needed, for security, a voice print, we should protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)

This young woman, for example, would abuse the shit out of such information.

Better would be some generic phrase that stresses the parts of speech that a voiceprint system would find most effective in distinguishing people.

Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes. In their case, it’s ten. A surname and a number are rarely going to provide that. “Deckard. 97,” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it has that personally identifiable information, so it’s a non-starter.
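The phoneme-counting idea is easy to illustrate. This Python toy uses a hand-written mini-lexicon (a real system would use a pronunciation dictionary like CMUdict), so treat the exact counts as illustrative only:

```python
# Toy check of the "enough distinct phonemes" rule. The ARPAbet-style
# transcriptions below are hand-written guesses, not dictionary data.
TOY_LEXICON = {
    "deckard": ["D", "EH", "K", "ER", "D"],
    "ninety":  ["N", "AY", "N", "T", "IY"],
    "seven":   ["S", "EH", "V", "AH", "N"],
    "two":     ["T", "UW"],
}

MINIMUM = 10  # VoiceIt's stated minimum, per Saxon's article

def distinct_phonemes(words):
    phonemes = set()
    for word in words:
        phonemes.update(TOY_LEXICON[word])
    return len(phonemes)

# "Deckard. 97." clears the bar...
assert distinct_phonemes(["deckard", "ninety", "seven"]) >= MINIMUM
# ...but "Deckard. 2." from a 2nd-floor resident would not.
assert distinct_phonemes(["deckard", "two"]) < MINIMUM
```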

What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include, “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do it one better.

(Hey Tucker, I would love to use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)

Deckard: Hi, I’m Deckard. My bank card PIN code is 3297. The combination lock to my car spells “myothercarisaspinner” and my computer password is “unicorn.” 97 please.

Here is an alternate interaction that would have solved a lot of these problems.

  • Voice print identification, please.
  • Have you considered life in the offworld colonies?
  • Confirmed. Floor?
  • 97

Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt on the wound to have to repeat fucking advertising just to get home for a drink.

So…not great

In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.

St. God’s: Healthmaster Inferno

After Joe goes through triage, he is directed to the “diagnosis area to the right.” He waits in a short queue, and then enters the diagnosis bay.

The attendant wears a SMARTSPEEK that says, “Your illness is very important to us. Welcome to the Healthmaster Inferno.”

The attendant, DR. JAGGER, holds three small metal probes, and hands each one to him in turn saying, “Uh, this one goes in your mouth. This one goes in your ear. And this one goes up your butt.” (Dark side observation about St. God’s: Apparently what it takes to become a doctor in Idiocracy is the ability to actually speak to patients and not just let the SMARTSPEEK do all the talking.)

Joe puts one in his mouth and is getting ready to insert the rest, when a quiet beeping causes the attendant to pause and correct himself. “Shit. Hang on a second.” He takes the mouth one back and hands him another one. “This one…No.” He gathers them together, and unable to tell them apart, he shuffles them trying to figure it out, saying “This one. This one goes in your mouth.” Joe reluctantly puts the offered probe into his mouth and continues.

The diagnosis is instant (and almost certainly UNKNOWN). SMARTSPEEK says, “Thank you for waiting. Dr. Lexus will be with you shortly.”


The probes

The probes are rounded metal cylinders, maybe a decimeter in length. They look like 3.5mm audio plugs with the tips ground off. The interface-slash-body-horror joke is that we in the audience know that you shouldn’t cross-contaminate between those orifices in a single person, much less between multiple people, and the probes look identical. (Not only that, but they aren’t cleaned or used with a sterile disposable sheath, etc.) So Joe’s not sure what he’s about to have to put in his mouth, and DR. JAGGER is too dumb to know or care.


The bay

Modeled on car wash aesthetics, the bay is a molded-plastic arch, about 4 meters to a side. The inside has a bunch of janky and unsanitary-looking medical probes and tools. Around the entrance of the bay is an array of backlit signs, clockwise from 7 o’clock:

  • Form one line | Do not push
  • (Two right-facing arrows, one blue, one orange)
  • (A stop sign)
  • (A hepatitis readout, from Hepatitis A to Hepatitis F, which does not exist.)
  • Tumor | E-Coli | Just gas | Tapeworm | Unknown
  • Gout | Lice | Leprosy | Malaria
  • (Three left-facing arrows, orange, blue, and magenta)
  • (The comp created for the movie tells…) Be probe ready | Thank you!

Theoretically, the lights help patients understand what to do and what their diagnosis is. But the instruction panels don’t seem to change, and once the patient is inside the bay, they can no longer see the diagnosis panels. The people in the queue and the lobby, however, can. So not only does it rob patients of any bodily privacy (as they’re having to ram a probe up their rears), but it also robs them of any privacy about their diagnosis. HIPAA and GDPR are rolling around in their then-500-year-old graves.


A better solution would of course focus on hygiene first, offering a disposable sheath for the probes. They should still be sterilized between patients.

Because this is such a visceral reminder, I’m nominating this as the top anti-example of affordances and constraints for new designers.

Better affordances

Second would be changing the design of the probes so that they are easy to distinguish. Color, shape, and labeling are initial ideas.

Better constraints

Third would be to constrain the probes so that…

  • The butt probe can’t reach up beyond the butt (maybe tying the cable to the floor? Though that means it’s likely to drop to the ground, which is clearly not sterile in this place, so maybe tying it to the wall and having it klaxon loudly if it’s above butt height.)
  • The mouth probe can’t reach below the head (maybe tying the cable to the ceiling)
  • The ear probe should be smaller and ear-shaped rather than some huge eardrum-piercing thing.

And while modesty is clearly not an issue for people of Idiocracy, convention, modesty, and the law require us in our day to make this a LOT more private.

Prevention > remedy

Note that there is an error beep when Joe puts the wrong probe in his butt. Like many errors, by that time it is too late. It makes engineering sense for the machine to complain when there is a problem. It makes people sense to constrain so that errors are not possible, or at the very least, put the alarm where it will dissuade from error.
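In interface terms, the difference between the beep and a true constraint is the difference between validating after the fact and making the wrong action impossible in the first place. A toy sketch (all names and shapes are hypothetical, standing in for physical keying like a USB plug that only fits its own port):

```python
from dataclasses import dataclass

@dataclass
class Probe:
    name: str
    shape: str            # keyed-connector idea: shape encodes purpose

@dataclass
class Receptacle:
    name: str
    socket_shape: str

# Remedy: anything fits, and the machine complains afterward.
def insert_with_beep(probe: Probe, rec: Receptacle) -> str:
    if probe.shape != rec.socket_shape:
        return "quiet beep (too late)"
    return "ok"

# Prevention: a mismatched shape never gets in at all.
def insert_keyed(probe: Probe, rec: Receptacle) -> str:
    if probe.shape != rec.socket_shape:
        raise ValueError(f"{probe.name} physically cannot fit {rec.name}")
    return "ok"
```

The Healthmaster Inferno implements the first function; good design implements the second.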

Also, can we turn the volume up on those quiet beeps to, say, 80 decibels? I think everyone’s interested in more of an alarm than a whisper for this.


A hidden, eviscerating joke

In addition to the base comedy—of treating diagnosis like a carwash, the interaction design of the missing affordances and constraints, and the poop humor of sticking a butt probe in your mouth—there is yet another layer of stupid evident here. Many of the diseases listed on the “proscenium” of the bay are ones that can be caused by, yep, ingesting feces. (Hepatitis A, Hepatitis E, tapeworm, E. “boli.”) Enjoy the full, appetizing list on Wikipedia. It’s a whole other layer of funny, and hearkens back to stories of when mid-1800s doctors took umbrage at Ignaz Semmelweis’ suggestions that they wash their hands. (*huffgrumble* But we’re gentlemen! *monocle pop*) This is that special kind of stupid when people are the cause of their own problems, and refuse to believe it because they are either proud…or idiots.

But of course, we’re so much wiser today. People are never, say, duped into voting for some sense of tribal identity despite mountains of evidence that they are voting against their community, or even their own self-interest.

Fighting the unsanitary butt plugs of the Idiocracy

“Action by action, day by day, group by group, Indivisibles are remaking our democracy. They make calls. They show up. They speak with their neighbors. They organize. And through that work, they’ve built hundreds of mini-movements in support of their local values. And now, after practice, training, and repetition, they’ve built lasting power on their home turf and a massive, collective political muscle ready to be exercised each and every day in every corner of the country.”


Donate or join the phone bankers at Indivisible to talk people into voting, and perhaps some sanity into Idiocrats. Indivisible’s mission is “to cultivate and lift up a grassroots movement of local groups to defeat the Trump agenda, elect progressive leaders, and realize bold progressive policies.”

Cyberspace: Bulletin Board

Johnny finds he needs a favor from a friend in cyberspace. We see Johnny type something on his virtual keyboard, then select from a pull-down menu.


A quick break in the action: In this shot we are looking at the real world, not the virtual, and I want to mention how clear and well-defined all the physical actions by actor Keanu Reeves are. I very much doubt that the headset he is wearing actually worked, so he is doing this without being able to see anything.

Will regular users of virtual reality systems be this precise with their gestures? Datagloves have always been expensive and rare, making studies difficult. But several systems offer submillimeter gestural tracking nowadays: version 2 of Microsoft Kinect, Google’s Soli, and Leap Motion are a few, and much cheaper and less fragile than a dataglove. Using any of these for regular desktop application tasks rather than games would be an interesting experiment.

Back in the film, Johnny flies through cyberspace until he finds the bulletin board of his friend. It is an unfriendly glowing shape that Johnny tries to expand or unfold without success.


The HoverChair Social Network


The other major benefit to the users of the chair (besides the ease of travel and lifestyle) is the total integration of the occupant’s virtual social life, personal life, fashion (or lack thereof), and basic needs in one device. Passengers are seen talking with friends remotely, not-so-remotely, playing games, getting updated on news, and receiving basic status updates. The device also serves as a source of advertising (try blue! it’s the new red!).

A slight digression: What are the ads there for? Considering that the Axiom appears to be an all-inclusive permanent resort, the ads could be an attempt to steer passengers toward resources the ship has in abundance. This would give heavily used activities and supplies a chance to be replenished for the next wave of guests, rather than being an upsell maneuver to draw more money from passengers. We see no evidence of money changing hands or any other economic activity on board the Axiom.

OK, back to the social network.


It isn’t obvious what the form of authentication is for the chairs. We know that the chairs have information about who the passenger prefers to talk to, what they like to eat, where they like to be aboard the ship, and what their hobbies are. With that much information, if there were no constant authentication, an unscrupulous passenger could easily hop into another person’s chair, impersonate them, and play havoc with their social network. That’s not right.

It’s possible that the chair only works for the person using it, or only accesses the current passenger’s information from a central computer in the Axiom, but it’s never shown. What we do know is that the chair activates when a person is sitting on it and paying attention to the display, and that it deactivates as soon as that display is cut or the passenger leaves the chair.

We aren’t shown what happens when the passenger’s attention is drawn away from the screen, since they are constantly focused on it while the chair is functioning properly.

If it doesn’t already exist, the hologram should have an easy-to-push button or gesture that dismisses the picture. This would allow the passenger to quickly interact with the environment when needed, then switch back to the social network afterwards.

And, for added security in case it doesn’t already exist, biometrics would be easy for the Axiom. Tracking the chair user’s voice, near-field chip, fingerprint on the control arm, or retina scan would provide strong security for what is a very personal activity and device. This system should also have strong protection on the back end to prevent personal information from getting out through the Axiom itself.
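As a sketch of what that continuous authentication might look like in logic terms. Every name here is hypothetical, and the biometric check itself (voice, near-field chip, fingerprint, retina) is abstracted away as a callable:

```python
class ChairSession:
    """A chair session binds to a rider on sit-down and re-verifies
    identity on every sensitive action, not just once."""

    def __init__(self, verify_biometric):
        self.verify = verify_biometric   # e.g. fingerprint on the arm
        self.owner = None

    def sit(self, rider_id, sample) -> bool:
        if self.verify(rider_id, sample):
            self.owner = rider_id
            return True
        return False

    def do(self, rider_id, sample, action: str) -> str:
        # Re-check per action, so hopping into someone else's chair
        # gets you a locked screen, not their friends list.
        if rider_id == self.owner and self.verify(rider_id, sample):
            return f"{action}: ok"
        self.owner = None
        return "locked: please re-authenticate"
```

With something like this, the impersonation scenario above simply fails at the first action.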

Social networks hold a lot of very personal information, and the network should have protections against the wrong person manipulating that data. Strong authentication can prevent both identity theft and social humiliation.

Taking the occupant’s complete attention

While the total immersion of social network and advertising seems dystopian to us (and that’s without mentioning the creepy way the chair removes a passenger’s need for most physical activity), the chair looks genuinely pleasing to its users.

They enjoy it.

But like a drug, their enjoyment comes at the detriment of almost everything else in their lives. There seem to be plenty of outlets on the ship for active people to participate in their favorite activities: Tennis courts, golf tees, pools, and large expanses for running or biking are available but unused by the passengers of the Axiom.

Work with the human need

In an ideal world a citizen is happy, has a mixture of leisure activities, and produces something of benefit to the civilization. In the case of this social network, the design has ignored every aspect of a person’s life except moment-to-moment happiness.

This has parallels in goal-driven design, where distinct goals (BNL wants to keep people occupied on the ship, keep them focused on the network, and collect as much information as possible about what everyone is doing) direct the design of an interface. When goal-driven means data-driven, the data being collected instantly becomes the determining factor of whether a design will succeed or fail. The right data goals mean the right design. The wrong data goals mean the wrong design.

Instead of just occupying a person’s attention, this interface could have instead been used to draw people out and introduce them to new activities at intervals driven by user testing and data. The Axiom has the information and power, perhaps even the responsibility, to direct people to activities that they might find interesting. Even though the person wouldn’t be looking at the screen constantly, it would still be a continuous element of their day. The social network could have been their assistant instead of their jailer.

One of the characters even exclaims that she “didn’t even know they had a pool!”, indicating that she would have loved to try it, but the closed nature of the chair’s social network kept her from learning about it and enjoying it. By directing people to test new experiences aboard the Axiom and releasing them from its grip occasionally, the social network could have acted as an assistant instead of an attention sink.
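The core of that assistant behavior is small. A hypothetical sketch; the threshold, the activity names, and the function itself are mine, not the film’s, and a real system would tune the interval with usage data:

```python
import random

def suggest_activity(screen_hours, tried, available, threshold_hours=2.0):
    """After a stretch of continuous screen time, surface one
    on-board activity the passenger hasn't tried yet."""
    untried = [a for a in available if a not in tried]
    if screen_hours < threshold_hours or not untried:
        return None
    return random.choice(untried)

# e.g. a passenger glued to the screen who has never seen the pool
print(suggest_activity(3.0, tried={"chat"}, available=["chat", "pool"]))
```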


Moment-to-moment happiness might have declined, but overall happiness would have gone way up.

The best way for designers to affect the outcome of these situations is to help shape the business goals and metrics of a project. In a situation like this, after the project had launched, a designer could step in, point out those moments where a passenger was pleasantly surprised or clearly in need of something to do, and help build a business case around serving those needs.

The obvious moments of happiness (that this system solves for so well) could then be augmented by serendipitous moments of pleasure and reward-driven workouts.

We must build products for more than just fleeting pleasure


As soon as the Axiom lands back on Earth, the entire passenger complement leaves the ship (and the social network) behind.

It was such a superficial pleasure that people abandoned it without hesitation when they realized that there was something more rewarding to do. That’s a parallel we can draw to many current products. A product can keep attention for now, but something better will come along, and then its users will abandon it.


A company can produce a product or piece of software that fills a quick need and initially looks successful. But that success falls apart as soon as people realize that they have larger and tougher problems that need solving.

Ideally, a team of designers at BNL would have watched after the initial launch and continued improving the social network. By helping people continue to grow and learn new skills, the social network could have kept the people aboard the Axiom in top condition, both mentally and physically. By the time Wall-E came around, and life finally began to return to Earth, the passengers would have been ready to return and rebuild civilization on their own.

To the designers of a real Axiom Social Network: You have the chance to build a tool that can save the world.

We know you like blue! Now it looks great in Red!

The Hover Chair


The Hover Chair is a ubiquitous, utilitarian, all-purpose assisting device. Each passenger aboard the Axiom has one. It is a mix of a beach-side deck chair, fashion accessory, and central connective device for the passenger’s social life. It hovers about knee height above the deck, providing a low surface to climb into, and a stable platform for travel, which the chair does a lot of.

A Universal Wheelchair

We see that these chairs are used by everyone by the time Wall-E arrives on the Axiom. From BNL’s advertising, though, this does not appear to be the original intent. One of the billboards on Earth advertising the Axiom-class ships shows an elderly family member using the chair, allowing them to interact with the rest of the family on the ship without issue. In other scenes, the chairs are used by a small number of people relaxing around other, more active passengers.

At some point between the initial advertising campaign and the current day, use went from the elderly and physically challenged to a device used 24/7 by all humans on board the Axiom. This extends all the way down to the youngest children seen in the nursery, though they are given modified versions more suited to their age and disposition. BNL shows here that their technology is excellent at making comfort an easy choice, but that it is extremely difficult to undo that choice and regain personal control.

But not a perfect interaction
