Deckard’s Elevator

This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.

When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.

A need for speed

An aside: To make 97 floors in 20 seconds, you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in the Guangzhou CTF Finance Centre reach up to 45 miles per hour. But acceleration and deceleration add to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to the 95th floor. If 97 is Deckard’s floor, his elevator has to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
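For the curious, the arithmetic is easy to check. This sketch assumes a floor-to-floor height of about 4.3 meters (an assumption; the film gives no figure, and a more typical 3 m per floor would bring the average down to roughly 33 mph):

```python
# Rough average-speed estimate for Deckard's elevator ride.
# Floor height is an assumption; the film never specifies one.
FLOOR_HEIGHT_M = 4.3   # assumed meters per floor
FLOORS = 97            # ground to Deckard's floor
RIDE_SECONDS = 20      # approximate on-screen duration

distance_m = FLOORS * FLOOR_HEIGHT_M
speed_ms = distance_m / RIDE_SECONDS
speed_mph = speed_ms * 2.23694  # meters/second to miles/hour

print(f"{speed_mph:.0f} mph average")  # 47 mph with these assumptions
```

And that is just the average; the peak speed between the acceleration and deceleration phases would have to be considerably higher.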

The input control is OK

The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, presenting the rider of this Blade Runner complex with 100 floor buttons plus the usual open door, close door, and emergency alert buttons would have been an overwhelming UI. A panel that allows combinatorial inputs reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error handling. Such systems need a “commit” control that lets the rider review, edit, and confirm the sequence, to distinguish, say, “97” from “9” followed by “7.” Not such an issue from the 1st floor, but a frustration from 10–96. It’s not clear those controls are part of this input.

Deckard enters 8675309, just to see what will happen.
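For illustration, the commit-and-edit logic such a keypad needs might look like the following sketch (hypothetical; nothing in the scene confirms the panel works this way):

```python
class FloorEntry:
    """Accumulates keypad digits until the rider commits or clears."""

    def __init__(self, top_floor=100):
        self.top_floor = top_floor
        self.digits = ""

    def press_digit(self, d):
        self.digits += str(d)   # echoed on the display so the rider can review

    def clear(self):
        self.digits = ""        # the "edit" affordance: start over

    def commit(self):
        """The explicit commit distinguishes '97' from '9' then '7'."""
        if not self.digits:
            raise ValueError("no floor entered")
        floor = int(self.digits)
        if not 1 <= floor <= self.top_floor:
            self.clear()
            raise ValueError(f"no floor {floor} in this building")
        self.digits = ""
        return floor

panel = FloorEntry()
panel.press_digit(9)
panel.press_digit(7)
print(panel.commit())  # 97
```

The explicit commit is what lets a rider key “9” then “7” and end up on floor 97, rather than bouncing through two separate stops.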

I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one elevator in his building, and in-elevator controls work fine for those situations, even if they slow things down a bit.

The feedback is OK

The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompanies the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, it would be nice for the visually impaired if the voice system said the floor number when the door opens.

Though the projection is dumb

I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?

Pictured: Sleepy Deckard. Dumb projection.

Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.

If this was diegetic, the scene would have ended with a shredded projector.

But really, it falls apart on the interaction details

Lastly, this interaction. First, let’s give it credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.

Where’s the wake word?

But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.

There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even a weight sensor could infer that a human is present and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when the elevator was listening, and some way for them to regain its attention if the first interaction went wrong, and there are zero affordances for either here. So really, an explicit wake word is the right way to go.

It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits the events in the scene, anyway. The problem with that is the redundancy (see below). So if the solution is pressing a button, it should just be a “talk” button rather than a numeric keypad.
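If the panel touch really is the attention signal, the underlying logic would be a simple listening window, sketched below. The five-second window is an assumption; the scene never shows a timeout.

```python
import time

LISTEN_WINDOW_S = 5.0  # assumed: how long the mic stays open after a touch

class ElevatorMic:
    """Touch-to-listen: the panel press is the implicit wake signal."""

    def __init__(self):
        self.listen_until = 0.0  # monotonic timestamp; mic starts closed

    def panel_touched(self, now=None):
        """Open (or extend) the listening window."""
        now = time.monotonic() if now is None else now
        self.listen_until = now + LISTEN_WINDOW_S

    def is_listening(self, now=None):
        now = time.monotonic() if now is None else now
        return now < self.listen_until

mic = ElevatorMic()
mic.panel_touched(now=0.0)
print(mic.is_listening(now=2.0))  # True: within the window
print(mic.is_listening(now=6.0))  # False: window expired, mic closed again
```

Note that a dedicated “talk” button would drive exactly the same logic without dragging a full numeric keypad along for the ride.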

It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator, lest everyone end up stuck in the basement, but this seems very error-prone and unlikely.

Deckard: *Yawns* Elevator: Confirmed. Silent alarm triggered.

This issue is similar to the one discussed in Make It So Chapter 5, “Gestural Interfaces,” where I discussed how a user tells a computer that they are communicating with it through gestures, and when they aren’t.

Where are the paralinguistics?

Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)

Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
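A sketch of that feedback, assuming a normalized microphone level and an 8-bit LED (the numbers and names here are hypothetical):

```python
def led_brightness(mic_level, listening):
    """Map mic amplitude (0.0-1.0) to an LED duty cycle (0-255).

    The LED is dark when the system isn't listening, glows dimly on
    silence so the rider knows the mic is open, and brightens with
    the rider's volume.
    """
    if not listening:
        return 0
    floor = 32  # dim "I'm listening" glow even in silence
    level = max(0.0, min(1.0, mic_level))  # clamp out-of-range input
    return floor + round(level * (255 - floor))

print(led_brightness(0.0, listening=False))  # 0: mic closed
print(led_brightness(0.0, listening=True))   # 32: listening to silence
print(led_brightness(1.0, listening=True))   # 255: full volume
```

The dim floor value is the important design choice: it separates “listening to silence” from “not listening at all,” which is exactly the paralinguistic signal this elevator lacks.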

Why the redundancy?

Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.

It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator would understand a correction mid-ride like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.

It’s very nice to have the discrete input as an accessibility affordance for people who cannot speak, or who have an accent the system cannot recognize, or as graceful degradation in case the speech recognition fails, but Deckard fits none of these cases. He would just walk in and speak his floor.

Why the personally identifiable information?

If we were designing a system and we needed, for security, a voice print, we should protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)

This young woman, for example, would abuse the shit out of such information.

Better would be some generic phrase that stresses the sounds a voiceprint system would find most effective in distinguishing speakers.

Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes; in VoiceIt’s case, it’s ten. A surname and a number will rarely provide that. “Deckard. 97” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it contains that personally identifiable information, so it’s a non-starter.
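Counting non-repeating phonemes is simple once you have a pronunciation lexicon. Here is a sketch with a tiny hand-coded lookup; the ARPAbet transcriptions are rough approximations rather than authoritative, which is why the count can come out slightly different from the figure above (a real system would consult something like the CMU Pronouncing Dictionary):

```python
# Minimal unique-phoneme counter. The transcriptions below are rough
# ARPAbet approximations, not authoritative; a real system would look
# words up in a pronunciation lexicon such as CMUdict.
LEXICON = {
    "deckard": ["D", "EH", "K", "ER", "D"],
    "ninety":  ["N", "AY", "N", "T", "IY"],
    "seven":   ["S", "EH", "V", "AH", "N"],
}

def unique_phonemes(words):
    """Return the set of distinct phonemes across a spoken phrase."""
    found = set()
    for word in words:
        found.update(LEXICON[word])
    return found

phrase = ["deckard", "ninety", "seven"]
print(len(unique_phonemes(phrase)))  # 11 with this rough transcription
```

A longer enrollment phrase would clear the threshold comfortably for every rider, regardless of which floor they live on.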

What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include, “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do one better.

(Hey Tucker, I would love to use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)

Deckard: Hi, I’m Deckard. My bank card PIN code is 3297. The combination lock to my car spells “myothercarisaspinner” and my computer password is “unicorn.” 97 please.

Here is an alternate interaction that would have solved a lot of these problems.

ELEVATOR
Voice print identification, please.

DECKARD
(sighs) Have you considered life in the offworld colonies?

ELEVATOR
Confirmed. Floor?

DECKARD
97.

Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt on the wound to have to repeat fucking advertising just to get home for a drink.

So…not great

In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.

Talking to a Puppet

As mentioned, in the last phone conversation in the van, Johnny is not talking to the person he thinks he is. The film reveals Takahashi at his desk, using his hand as if he were a sock puppeteer—but there is no puppet. His desk is emitting a grid of green light to track the movement of his hand and arm.


The Make It So chapter on gestural interfaces suggests Takahashi is using his hand to control the mouth movements of the avatar. I’d clarify this a bit. Lip synching by human animators is difficult even when not done in real time, and while it might be possible to control the upper lip with four fingers, one thumb is not enough to provide realistic motion of the lower lip.

Instead I suggest that the same computer modifying his voice is also providing the fine mouth movements, using the same camera that must be present for the video phone calls. So what are the hand motions for? They provide cues as to how fast or slow Takahashi wants his puppet to speak, further disguising his own speech patterns. And the arm position could provide different body language for the avatar as a whole, to ensure for example that the puppet avatar does not react with surprise or anger even if Takahashi himself expresses those emotions.

We saw this avatar in a phone call once before, when Johnny dialed into an internal phone number from the phone booth. But we’ve also seen the video image of Takahashi himself when he called Street Preacher. Perhaps the avatar is an option for incoming calls, just as today we can assign custom ringtones to individual callers on our mobiles. For outgoing calls, an important person such as Takahashi would be more likely to use his true face to impress the callee.

Video phones have been predicted in science fiction literature and film for a very long time now, but have never achieved wide-scale usage. Human communication is richer and more expressive when we can see each other, so why are we resistant? One reason is that in the real world we don’t have makeup artists following us around to ensure we look our best at all times. Donald Norman suggested in chapter 8 of his book Things That Make Us Smart that real-time video enhancement would solve this problem, but then if we’re all going to be presenting false avatars to each other, why bother?

A Cringing Computer

After the call ends, Anna, a personality uploaded into a mainframe, appears on the screen. Takahashi is annoyed by this and makes a sweeping arm gesture to get rid of her, detected by the green light grid. The computer screen actually sinks into the desk in response.


This is discussed in chapter 10 of the book as an interface handling emotional input. I’d like to add that this is also an emotional output, the computer seeming to hide itself from an angry user. Given how often current-day users express the wish to beat their computers with heavy blunt objects, perhaps that is exactly what it is doing.

Computers in film and TV often have annoying personalities, which is surprising for (presumably) commercial products. Another cringing computer, emphasised by being named “Slave”, made regular appearances in season 4 of Blake’s 7. Would users feel more comfortable if their computer systems gave the appearance of being afraid every time they had to report an error? It’s worth considering.

Hotel Remote

The Internet 2021 shot that begins the film ends in a hotel suite, where it wakes up lead character Johnny. This is where we see the first real interface in the film. It’s also where this discussion gets more complicated.

A note on my review strategy

As a 3D graphics enthusiast, I’d be happy just to analyze the cyberspace scenes, but when you write for Sci Fi Interfaces, there is a strict rule that every interface in a film must be subjected to inspection. And there are a lot of interfaces in Johnny Mnemonic. (Curse your exhaustive standards, Chris!)

A purely chronological approach would spend too much time looking at trees and not enough at the forest. So I’ll be jumping back and forth a bit, starting with the gadgets and interfaces that appear only once, then moving on to the recurring elements: variations on a style or idea that are repeated during the film.

Description

The wakeup call arrives in the hotel room as a voice announcement—a sensible if obvious choice for someone who is asleep—and also as text on a wall screen, giving the date, time, and temperature. The voice is artificial sounding but pleasant rather than grating, letting you know that it’s a computer and not some hotel employee who let himself in. The wall display functions as both a passive television and an interactive computer monitor. Johnny picks up a small remote control to silence the wake up call.


This remote is a small black box like most current-day equivalents, but with a glowing red light at one end. At the time of writing, blue lights and indicators are popular for consumer electronics, apparently following the preference set by science fiction films and noted in Make It So. Johnny Mnemonic is an outlier in using red lights; we’ll see more of them as the film progresses. Here the glow might be some kind of infrared or laser beam that sends a signal, or it might simply indicate the right way to orient the control in the hand for the controls to make sense.

First thing every morning: Messages

After silencing the alarm, Johnny, like so many of us today, checks his email. (In 1995, doing so before even getting out of bed might have been intended to show his detachment from humanity. Today, it seems perfectly natural!) He uses the remote to switch the display to the hotel “Message Centre”. We see his thumb move around, so the remote must have multiple buttons, but we can’t tell whether this is a simple arrow keypad or something more complicated.


The message centre of the New Darwin Inn system both displays the text message visually and also speaks it aloud in the same synthesized voice that woke him up. Voiceovers are common in films so the audience doesn’t have to try to read the cinema screen, but in this case it would be genuinely useful. Guests could start doing something else without needing to pay full attention to the display.

Is it necessary for Johnny to explicitly switch to the Message Centre? The system could have displayed this message automatically after the wakeup call, or shown the 2021 equivalent of his inbox. On the other hand, this is a giant, clearly visible screen, and Johnny was not alone in the suite. Johnny, and other guests, might wish to keep their communications private.

As Johnny has no messages, he uses the remote to switch the display to a TV channel.

The hotel room “phone” call

Next he uses the remote to make a phone call. He starts by using the remote to dial the number, which appears on the display. We can’t see whether he is typing numbers directly, or using arrow keys and an Enter or OK button to navigate around the onscreen keypad. It’s certainly convenient for guests to be able to make a call without getting out of bed, but a voice recognition interface might be even easier. We’ll see a phone system that accepts voice commands later on, so perhaps using the remote is just a preference.


What is the strange blue window to the right of the keypad? It’s there because all phone calls in 2021 are in fact video calls. The equivalent to a busy waiting tone in this world is a video splash screen. These can be customized by the recipient, here showing the company name, Dataflow.


And finally both parties can see and hear each other. Note also the graphical reverse, stop, and play buttons at the bottom right of the keypad. These imply some sort of recording capability, but we never see them used.


Next

I’ll discuss the 2021 phone system in more detail later on, so for now we just need to know that this phone call is the setup that sends Johnny to Beijing for his next, and hopefully last, job.

Fueling stations


Fueling stations are up on a raised platform. Cars can drive or land there and approach a central column. A rotating overhead arm maneuvers a liquid fuel dispensing robot into place near the car while a synthesized voice crudely welcomes the driver, delivers a marketing slogan, and announces its actions, e.g., “checking oil” and “checking landing gear.”

This seems like a pretty good robot solution. It’s efficient, and it keeps the pilot informed of status. I presume payment happens automatically, but we don’t see it.

The biggest improvement I’d make is to the horribly synthesized voice. Sure, it conveys that this is a robot, but where movies optimize for the first-time user, that crap would get tiring with frequent use. Pilots could also save time out of their day and do a bit of environmental good if refueling could happen at home using a technology readily available as an off-the-shelf appliance. But where would one find such a thing?