Alien / Blade Runner crossover

I’m interrupting my review of the Prometheus interfaces for a post to share this piece of movie trivia. A few months ago, a number of blogs were giddy with excitement over the release of the Prometheus Blu-ray, because it gave a little hint that the Alien world and the Blade Runner world were one and the same. Hey internets, if you’d paid attention to the interfaces, you’d have realized this was already well established back in 1982, 30 years earlier.

A bit of interface evidence that Alien and Blade Runner happen in the same universe.

The Museum of One Film

Years ago in grad school I heard a speaker tell of a possibly-imaginary museum called the Museum of One Painting. In that telling, the museum was a long hall. The current One Painting (they were occasionally switched out) was hung at the far end from the entrance. As you walked the length of it, to your left you would see paintings and exhibits of the things that had influenced the One Painting. Then at the end you would spend time with the One Painting. On your way out, to your left you could see paintings and other artworks that were influenced by the One Painting.

[Image: Fplan-matte.png]

Ah yes, I believe this was made during Henry Hillinick’s Robby the Robot phase.
Hat tip to the awesome Matte Shot.

I loved this museum concept. It was about depth of understanding. It provided context. It focused visits on building a shared understanding that you could discuss with other visitors, even if they’d gone a week before you. My kind of art museum. I fell for this concept pretty hard, and in the intervening decades came to believe it was just a fable, constructed by wishful museum theorists.

Nope. Today I searched for it, and found it. It’s real. It’s housed in a small building in Penza, Russia. The reality is a little different from how it was described to me (or, rather, how I committed it to memory), and I think it’s only in Russian (my Russian is a pittance), and given recent politics I’m not sure I’d be welcome there; but its beautiful core concept is intact. A deep dive into a single painting at a time.

[Image: Penza.png]

Just a quick 9-hour drive from Moscow.

The whole reason I bring this up on the blog is that the awesome American cinema chain Alamo Drafthouse is doing something like this, but for film. And not just any film, but for the upcoming Blade Runner 2049. Their Road to Nowhere series examines dystopian films that influenced or were influenced by Blade Runner. Some you probably know and love. (Metropolis! Logan’s Run! The Fifth Element!) Some I’d never heard of but now want to see. (1990: The Bronx Warriors! Hardware! Prayer of the Rollerboys!)

[Image: Road to Nowhere.png]

I try not to be a gushing fanboy for anything on this blog, but I gotta hand it to the Drafthouse for this. This is my kind of film nerdery. If I were to run a film series, it would be just like this, only with some sci-fi interface analysis and redesign meetups thrown in for good measure. Just thrilled that it’s happening, and there’s a Drafthouse near me in San Francisco. (Sorry if there’s not one near you, but maybe there’s something similar?)

Anyway, I was not paid by anyone to write this. Just…just happy and nerding out. Hope to see you there.

Blade Runner (1982)

Whew. So we all waited on tenterhooks through November to see if somehow the Tyrell Corporation would be founded, develop and commercialize general AI, and then advance robot evolution into the NEXUS phase, all while in the background space travel was perfected, Off-world colonies and asteroid mining were established, global warming somehow drenched Los Angeles in permanent rain and flares, and flying cars appeared on the market. None of that happened. At least not publicly. So, with Blade Runner squarely part of the paleofuture past, let’s grab our neon-tube umbrellas and head into the rain to check out this classic, which features some interesting technologies and some interesting AI.

Release date: 25 Jun 1982

The punctuation-challenged crawl for the film:

“Early in the 21st Century, THE TYRELL CORPORATION advanced Robot evolution into the NEXUS phase—a being virtually identical to a human—known as a Replicant. [sic] The NEXUS 6 Replicants were superior in strength and agility, and at least equal in intelligence, to the genetic engineers who created them. Replicants were used Off-world as slave labor, in the hazardous exploration and colonisation of other planets. After a bloody mutiny by a NEXUS 6 combat team in an Off-world colony, Replicants were declared illegal on Earth—under penalty of death. Special police squads—BLADE RUNNER UNITS—had orders to shoot to kill, upon detection, any trespassing Replicants.

“This was not called execution. It was called retirement.”

Four murderous replicants make their way to Earth, to try and find a way to extend their genetically-shortened life spans. The Blade Runner named Deckard is coerced by his ex-superior Bryant and detective Gaff out of retirement and into finding and “retiring” these replicants.

Deckard meets Dr. Tyrell to interview him, and at Tyrell’s request tests Rachel on a Voight-Kampff machine, which is designed to help blade runners tell replicants from people. Deckard and Rachel learn that she is a replicant. Then with Gaff, he follows clues to the apartment of one exposed replicant, Leon, where he finds a synthetic snake scale in the bathtub and a set of photographs in a drawer. Using a sophisticated image inspection tool in his home, he scans one of the photos taken in Leon’s apartment, until he finds the reflection of a face. He prints the image to take with him.

He takes the snake scale to someone with an electron microscope who is able to read the micrometer-scale “maker’s serial number” there. He visits the maker, a person named “the Egyptian,” who tells Deckard he sold the snake to Taffey Lewis. Deckard visits Taffey’s bar, where he sees Zhora, another of the wanted replicants, perform a stage act with a snake. She matches the picture he holds. He heads backstage to talk to her in her dressing room, posing as a representative of the “American Federation of Variety Artists, Confidential Committee on Moral Abuses.” When she finishes pretending to prepare for her next act, she attacks him and flees. He chases and retires her. Leon happens to witness the killing, and attacks Deckard. Leon has the upper hand but Deckard is saved when Rachel appears from the crowd and shoots Leon in the head. They return to his apartment. They totally make out.

Meanwhile, Roy has learned of a Tyrell employee named Sebastian who does genetic design. On orders, Pris befriends Sebastian and dupes him into letting her into his apartment. She then lets Roy in. Sebastian figures out that they are replicants, but confesses he cannot help them directly. Roy intimidates Sebastian into arranging a meeting between him and Dr. Tyrell. At the meeting, Tyrell says there is nothing that can be done. In fury, Roy kills Tyrell and Sebastian.

The police investigating the scene contact Deckard with Sebastian’s address. Deckard heads there, where he finds, fights, and retires Pris. Roy is there, too, but proves too tough for Deckard to retire. Roy could kill Deckard but instead opts to die peacefully, even poetically. Witnessing this act of grace, Deckard comes to appreciate the “humanity” of the replicants, and returns home to elope with Rachel.

In the last scene, Gaff hints to Deckard, with an origami unicorn, that Deckard himself is a replicant.


P.S. This series uses “The Final Cut” edit of the movie, so I don’t have to hear that wretchedly-scripted voiceover from the theatrical release. If you can, I recommend watching The Final Cut.


8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, at the end of which is an intricate piece of hardware housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal cortex. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other replicants, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

 2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.
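If you want a feel for how tractable that is with modern(ish) tools, here’s a minimal sketch assuming an off-the-shelf OpenCV pipeline. The camera index and display window are stand-ins, and a real Voight-Kampff rig would obviously need something far more robust than a stock Haar cascade:

```python
# Sketch: keep the subject's eye in frame without asking them to hold still.
# Assumes OpenCV and its bundled Haar cascade; the loop structure is the point.
import cv2

eye_cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_eye.xml")
camera = cv2.VideoCapture(0)  # hypothetical camera index

while True:
    ok, frame = camera.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes = eye_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(eyes) > 0:
        # Re-center the magnified view on the largest detected eye,
        # so the display follows the subject instead of scolding them.
        x, y, w, h = max(eyes, key=lambda e: e[2] * e[3])
        cv2.imshow("subject eye", frame[y:y + h, x:x + w])
    if cv2.waitKey(1) == 27:  # Esc to stop
        break
```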

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. His is a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of that. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
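To make that concrete, here’s a toy sketch of the tracking-and-flagging logic. The 2.5 mm threshold and the sample values are invented placeholders, not anything from the film or from ophthalmology:

```python
# Sketch: track pupillary diameter over the session and flag dips below
# a human-norm lower bound. The 2.5 mm threshold is a made-up placeholder.
from dataclasses import dataclass, field

HUMAN_NORM_LOWER_MM = 2.5

@dataclass
class PupilTrack:
    samples: list = field(default_factory=list)   # (timestamp_s, diameter_mm)
    flagged: list = field(default_factory=list)   # moments worth revisiting

    def record(self, timestamp_s: float, diameter_mm: float) -> None:
        self.samples.append((timestamp_s, diameter_mm))
        if diameter_mm < HUMAN_NORM_LOWER_MM:
            self.flagged.append(timestamp_s)

    def recent(self, window_s: float, now_s: float) -> list:
        # The fine-grained, right-hand columns of the display.
        return [s for s in self.samples if now_s - s[0] <= window_s]

track = PupilTrack()
track.record(12.0, 3.1)
track.record(12.5, 2.2)   # below the norm: gets flagged for later review
print(track.flagged)      # -> [12.5]
```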

Blush

I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
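Same idea for blush, sketched below: compare the subject’s facial temperature against the expected human median for that point in the script, and report how far below the norm it sits. All numbers are invented for illustration:

```python
# Sketch: compare the subject's facial temperature to the expected human
# median for this point in the script. Values are placeholders; a real
# system would calibrate per subject and per dilemma.

def blush_deficit(face_temp_c: float, expected_median_c: float,
                  tolerance_c: float = 0.3) -> float:
    """Return degrees Celsius below the expected blush response.
    0.0 means the subject is within (or above) the human norm."""
    deficit = expected_median_c - face_temp_c
    return max(0.0, deficit - tolerance_c)

# Example: the script expects a rise to 34.2 degrees C at this dilemma.
print(blush_deficit(face_temp_c=33.1, expected_median_c=34.2))  # -> 0.8
```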

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should annotate what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the text is not unduly influencing the results, let’s add an overlay to the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutters so many sci-fi interfaces. It’s there to be inscrutable, but convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.

OK

Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but it does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Spinners (flying cars)

So the first Fritzes are now a thing. Before I went off on that awesome tangent, where were we? Oh that’s right. I was reviewing Blade Runner as part of a series on AI in sci-fi. I was just about to get to Spinners. Now, vehicles are complicated things as it is, let alone when they are navigating proper 3D space. Additionally, the police force is, ostensibly, a public service, which complicates things even further. So this will get lengthy. Still, I think I can get this down to eight or so subtopics.

In the distant future of 2019, flying cars, called “spinners,” are a reality. They’re largely for the wealthy and powerful (including law enforcement). The main protagonist, Deckard, is only ever a passenger in a few over the course of the film. His partner Gaff flies one, though, so we have enough usage to review.

Opening the skies to automobile-like traffic poses challenges, especially when those skies are full of lightning bolts, ever-present massive flares, distracting building-sized video advertisements, and, of course, other spinners.

Piloting controls

To pilot the spinner, Gaff keeps his hands on each handle of a split yoke. Within easy reach of his fingers are a few unlabeled buttons and small lights. Once we see him reach with his right thumb to press one of the buttons, but we don’t see any result, so it’s not clear what these buttons do. It’s nice that they don’t require him to take his hands off the controls. (This might seem like a prescient concept, but WP tells me the first non-horn wheel-mounted controls date as far back as 1966.)

It is contextualizing to note the mode of agency here. That is, the controls are manual, with no AI offering assistance or acting as an agent. (The AI is in the passenger’s seat, lol fight me.) It appears to be up to Gaff to observe conditions, monitor displays, perform wayfinding, and keep the spinner on track.

Note that we never see what his feet are doing and never see him doing other things with his hands other than putting on a headset before lift-off. There are lots of other controls to the pilot’s left and in the console between seats, but we never see them in use. So, you know, approach with caution. There are a lot of unknowns here.

The Traditional Chinese characters on the window read “No entry,” for citizens outside the spinner, passing by when it is on the ground. (Hat tips for the translation to Mischa Park-Doob and Frank Chung.)

The spinner is more like a VTOL aircraft or helicopter than a spaceship. That is, it is constantly in the presence of planetary gravity and must overcome the constant resistance of air. So the standards I established in the piloting controls post are of only limited use to us here.

So let’s look at how helicopter controls work. The FAA Helicopter Flying Handbook tells us that a pilot has controls for…

  1. The vertical velocity, up or down. (Controlled by the angle of the control stick called the collective. The collective is to the left of the pilot’s hip when they are seated.)
  2. The thrust. (Controlled by the twistgrip on the collective.)
  3. Movement forward, rearward, left, and right. (Controlled with the stick in front of the pilot, called the cyclic.)
  4. Yaw of the vehicle. (Controlled with the pair of antitorque pedals at the pilot’s feet.)

Since we don’t see Gaff when the spinner is moving up and down, let’s presume that the thing he’s gripping is like a Y-shaped cyclic, with lots of little additional controls around the handles. Then, if we presume he has a collective somewhere out of sight to his left and antitorque pedals at his feet, this interface meets modern helicopter standards for control. From the outside, those appear to be well mapped (collective up = helicopter up, cyclic right = helicopter right). Twist for thrust is a little weird, but it’s a standard and certainly learnable, as I recall from my motorcycling days. So let’s say it’s complete and convincing. Is it the best it could be? I’m not enough of an aeronautical engineer (read: not at all) to imagine better options, so let’s move along. I might have more to say if it was agentive.

Dashboard

There are two large screens in the dashboard. The one directly in front of Gaff shows a stylized depiction of the 3D surfaces around him as cyan highlights on a navy blue background. Approaching red shapes describe a pill-shaped tunnel-in-the-sky display. These have been tested since 1981 and found to provide higher tracking performance along ideal paths in manual flight, lower cognitive workload, and enhanced situational awareness. (https://arc.aiaa.org/doi/abs/10.2514/3.56119) So, this is believable and well done. I’m not sure that Gaff could readily use the 3D background to effectively understand the 3D terrain, but it is tertiary, after the real world and the tunnel display.

I have to say that it’s a frustrating anti-trope to run into again, but it must be said: If the spinner knows where the ship should be, and general artificial intelligence exists in this diegesis, why exactly are humans doing the piloting? Shouldn’t the spinner fly itself? But back to the interfaces…

Above the tunnel-in-the-sky display is a cyan 7-segment LED scroll display. In the gif above it displays “MAXIMUM SPEED” and later it provides some wayfinding text. I’m not sure how many different types of information it is meant to cycle through, but it sure would be a pain to wait for vital information to appear, and distracting to have to control it to get to the one you wanted.

There is also a vertical screen in the middle of the console listing cyan labels ALT, VEL, and PTCH. These match to altitude, velocity, and pitch variables, reinforcing the helicopter model. The yellow numbers below these labels change in the scene very slowly, and—remarkably for a four-second interface from 1982—do not appear to change randomly. That’s awesome.

But then, there’s a paragraph of cyan text in the middle of the screen that appears over the course of the scene, letter by letter. This animation calls unnecessary attention to itself. There are also smaller, thin screens in the pilot’s door that continually scroll that same teeny tiny cyan text. I’m not sure WTF all this text is supposed to be, since it would be horribly distracting to a pilot. There are also a few rows of white LEDs with cylon-eye displays traveling back and forth. They are distracting, but at least they’re regular, and might be habituate-able and act as some sort of ambient display. Anyway, if we were building this thing for real, we’d want to eliminate these.

Lastly, at the bottom of the center screen are some unlabeled bar charts depicting some variables that appear to be wiggling randomly. So, like, only the top fifth of this screen can be lauded. The rest is fuigetry. *sigh* It’s hard to escape.

Wayfinding

To help navigate the 3D space, pilots have a number of tools. First, there are windows where you expect windows to be in a car, and there are also glass panels under their feet. The movie doesn’t make a big deal out of it, but it’s clear in the scene where the spinner lifts off from the street level. These transparent panes surround pilots and passengers and allow them to track visual cues for landmarks and to identify collision threats.

It’s reflecting some neon on the street below.

The tunnel-in-the-sky display above is the most obvious wayfinding tool. Somehow Gaff has entered a destination, and the tunnel guides him where he needs to go. Since this entails a safe path through the air, it’s the most important display. Other bits of information (like the ALT, VEL, and PTCH in the center screen) should be oriented around it. This would make them glanceable, allowing Gaff to check them quickly and return his eyes to the windshield. In fact, we have to admit that a heads-up display would allow Gaff to keep his attention where it needs to be rather than splitting it between the real world and these dashboard displays. Modern vehicle drivers are used to this split attention, and can manage it well enough. But I suspect that a HUD would be better.

It’s also at this point that you begin to wonder if these are the scout ships we see in Close Encounters.

There is also that crawling LED display above the tunnel-in-the-sky screen. In one scene it shows “SECTOR FOUR (4)…QUAD-” (we don’t get to see the end of this phrase) but it implies that one of the bits of information this scroll provides is a reminder of the name of the neighborhood you’re currently in. That really only helps if you’re way off course, and seems too low a fidelity for actual wayfinding assistance, but presuming the tunnel-in-the-sky is helping provide the rest of the wayfinding, this information is of secondary importance.

A special note about takeoff: ENVIRON CTR

The display sequence infamous for appearing in both Alien and Blade Runner happens as Gaff lifts off in a spinner early in the film. White all-cap letters label this blue screen “ENVIRON CTR,” above a grid of square characters. Then two 8-digit sequences “drop” down the center of the square grid: 92886599 | 95654085. Once they drop 3 rows, the background turns red, the grid disappears to be replaced by a big blinking label PURGE. Characters at the bottom read “24556 DR 5”, and don’t change.

After the spinner lifts off the display shows a complex diagram of a circle-within-a-circle, illustrating the increasing elevation from the ground below. The delightful worldbuilding thing about the sequence is that it is inscrutable, and legible only by a trained driver, yet gets full focus on screen. There’s not really enough information about the speculative engineering or functional constraints of the spinner to say why these screens would be necessary or useful. I have a suspicion that a live camera view would be more useful than the circle-within-a-circle view, but gosh, it sure is cool. Here’s the shot from Alien, by the way, for easy comparison.

Since people seem to be all over this one now, let me also interject that Alien is also connected to Firefly, since Mal’s anti-aircraft HUD in the pilot had a Weyland-Yutani logo. Chew on that trivia, Internet.

Intercar communication

Of special note is a scene just before his call to Sebastian’s apartment. Deckard is sitting in his parked vehicle in a call with Bryant. A police spinner glides by and we hear an announcement over his loudspeaker, directed to Deckard’s vehicle saying, “This sector’s closed to ground traffic. What are you doing here?” From inside his vehicle, Deckard looks towards his video phone in the console (we never see if there is video, but he’s looking in that direction rather than out the window) and without touching a thing, responds defensively, “I’m working. What are you doing?” The policeman’s reply comes through the videophone’s speakers, “Arresting you, that’s what I’m doing.”

Note that Deckard did not have to answer the call or even put Bryant on hold. We don’t know what the police officer did on their end, but this interaction implies that the police can make an instant, intrusive audio connection with vehicles it finds suspicious. It’s so seamless it will slip by you if you don’t know to look for it, but it paints quite a picture of intercar communication. Can you imagine if our cars automatically shared an audio space with the cars around it?

External interfaces

Another aspect of the car is that it is an interface not just for the people using the car, but for the citizens observing or near the spinner as it goes about its business. There are a number of features that help it act as an interface to the public.

Police exist as a social service, and the 995 repeated around the outside helps remind citizens of the number they can call in case of an emergency. 

Modern patrol cars have beacons and sirens to tell other drivers to get out of the way when they are on urgent business. Police spinners are gravid with beacons, having 12 of them visible from the front alone. (See below.) As the spinner is taking off, yellow and blue beacons circle as a warning. This would be of no help to a blind person nearby, but the vehicle does make some incidental noise that serves as an audible warning.

The rich light strip makes sense because it has such a greater range of movement than ground-based cars, and needs more attention grabbing power. Another nice touch is that, since the spinner can be above people, there are also beacons on the chassis.

Upshot: Spinners do well

So, all in all, the spinner fares quite well on close inspection. It builds on known models of piloting, shows mostly-relevant data, uses known best practices for assistance, and has a lot of well-considered surface features for citizens.

Now if only I could figure out why they’re called spinners.

Deckard’s Elevator

This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.

When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.

A need for speed

An aside: To make 97 floors in 20 seconds you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in Guangzhou CTF Finance Building reach up to 45 miles per hour. But including acceleration and deceleration adds to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to their 95th floor. If 97 is Deckard’s floor, it’s got to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
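For the curious, here’s the back-of-the-envelope math, assuming a floor-to-floor height of about 4.3 meters (my guess; the film gives us nothing to go on):

```python
# Back-of-the-envelope check on that ~47 mph figure.
# Assumes ~4.3 m per floor, which is a guess about the building, not canon.
floors = 96                 # ground floor up to floor 97
floor_height_m = 4.3
ride_time_s = 20

distance_m = floors * floor_height_m          # ~413 m
avg_speed_ms = distance_m / ride_time_s       # ~20.6 m/s
avg_speed_mph = avg_speed_ms * 2.237          # ~46 mph
print(round(avg_speed_mph, 1))
```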

The input control is OK

The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, it would have been an overwhelming UI to present the rider of this Blade Runner complex with 100 floor buttons plus the usual open door, close door, emergency alert buttons, etc. A panel that allows combinatorial inputs reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error-handling. Such systems need a “commit” control that allows them to review, edit, and confirm the sequence, to distinguish, say, “97” from “9” and “7.” Not such an issue from the 1st floor, but a frustration from 10–96. It’s not clear those controls are part of this input.

Deckard enters 8675309, just to see what will happen.
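If the panel did have a commit control, the logic is simple enough. Here’s a minimal sketch; the button names and the reject-and-reset behavior are my inventions, not anything visible in the prop:

```python
# Sketch: combinatorial floor entry with an explicit commit step,
# so "9" then "7" is not mistaken for "97".
class FloorEntry:
    def __init__(self, top_floor: int = 100):
        self.top_floor = top_floor
        self.buffer = ""

    def press_digit(self, digit: str) -> str:
        self.buffer += digit
        return self.buffer          # echoed on the panel for review/edit

    def press_clear(self) -> None:
        self.buffer = ""

    def press_go(self):
        # The commit control: nothing moves until the rider confirms.
        if self.buffer.isdigit() and 1 <= int(self.buffer) <= self.top_floor:
            floor, self.buffer = int(self.buffer), ""
            return floor
        self.buffer = ""            # error handling: reject and reset
        return None

panel = FloorEntry()
panel.press_digit("9")
panel.press_digit("7")
print(panel.press_go())  # -> 97
```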

I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one in his building, and in-elevator controls work fine for those situations, even if they slow things down a bit.

The feedback is OK

The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompanies the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, it would be nice if the voice system would, for the visually impaired, say the floor number when the door opens.

Though the projection is dumb

I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?

Pictured: Sleepy Deckard. Dumb projection.

Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.

If this was diegetic, the scene would have ended with a shredded projector.

But really, it falls apart on the interaction details

Lastly, this interaction. First, let’s give it credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.

Where’s the wake word?

But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.

There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even a weight sensor could infer that a human is present, and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when it was listening, and some way to help them regain its attention if the first interaction went wrong—and there would be zero affordances for either. So really, making an explicit wake word is the right way to go.

It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits in with the events in the scene, anyway. The problem with that is the redundancy. (See below.) So if the solution was pressing a button, it should just be a “talk” button rather than a numeric keypad.

It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator lest everyone end up stuck in the basement, but this seems very error-prone and unlikely.

Deckard: *Yawns* Elevator: Confirmed. Silent alarm triggered.

This issue is similar to the one discussed in Make It So Chapter 5, “Gestural Interfaces” where I discussed how a user tells a computer they are communicating to it with gestures, and when they aren’t. 

Where are the paralinguistics?

Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)

Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
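Something like this sketch, where the hardware calls are stand-ins for whatever the elevator’s controller actually exposes:

```python
# Sketch: a "listening" LED whose brightness follows the speaker's volume,
# plus start/stop chimes. The led and chime objects are hypothetical
# stand-ins for the elevator's real drivers.

def volume_to_brightness(rms_level: float, max_level: float = 1.0) -> int:
    """Map a 0..max_level microphone reading to 0..255 LED brightness."""
    level = min(max(rms_level, 0.0), max_level) / max_level
    return int(level * 255)

def on_start_listening(led, chime):
    chime.play("listen_start")     # paralinguistic cue: "I'm listening now"
    led.set_brightness(32)         # dim but visibly on

def on_audio_frame(led, rms_level: float):
    led.set_brightness(volume_to_brightness(rms_level))

def on_stop_listening(led, chime):
    chime.play("listen_stop")      # cue: "I've stopped listening"
    led.set_brightness(0)
```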

Why the redundancy?

Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.

It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator would understand a correction mid-ride like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.

It’s very nice to have the discrete input as accessibility for people who cannot speak, or who have an accent that is unrecognizable to the system, or as a graceful degradation in case the speech recognition fails, but Deckard doesn’t fit any of this. He would just enter and speak his floor.

Why the personally identifiable information?

If we were designing a system and we needed, for security, a voice print, we should protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)

This young woman, for example, would abuse the shit out of such information.

Better would be some generic phrase that stresses the parts of speech that a voiceprint system would find most effective in distinguishing people.

Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes. In their case, it’s ten. A surname and a number is rarely going to provide that. “Deckard. 97,” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it has that personally identifiable information, so is a non-starter.

What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include, “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do one better.

(Hey Tucker, I would love use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)

Deckard: Hi, I’m Deckard. My bank card PIN code is 3297. The combination lock to my car spells “myothercarisaspinner” and my computer password is “unicorn.” 97 please.

Here is an alternate interaction that would have solved a lot of these problems.

ELEVATOR: Voice print identification, please.
DECKARD: (sighs) Have you considered life in the offworld colonies?
ELEVATOR: Confirmed. Floor?
DECKARD: 97

Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt on the wound to have to repeat fucking advertising just to get home for a drink.

So…not great

In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.

Deckard’s Front Door Key

I’m sorry. I could have sworn in advance that this would be a very quick post. One or two paragraphs.

NARRATOR: It wasn’t.

Exiting his building’s elevator, Deckard nervously pulls a key to his apartment from his wallet. The key is similar to a credit card. He inserts one end into a horizontal slot above the doorknob, and it quickly *beeps*, approving the key. He withdraws the key and opens the door.

The interaction…

…is fine, mostly. This is like a regular key, i.e. a physical token that is presented to the door to be read, and access granted or denied. If the interaction took longer than 0.1 second it would be important to indicate that the system was processing input, but it happens nearly instantaneously in the scene.

A complete review would need to evaluate other use cases.

  • How does it help users recover when the card is inserted incorrectly?
  • How does it reject a user when it is not the right key or the key has degraded too far to be read?

But of what we do see: the affordance is clear, being associated with the doorknob. The constraints help him know the card goes in lengthwise. The arrows help indicate which way is up and the proper orientation of the card. It could be worse.

A better interaction might arguably be no interaction, where he can just approach the door, and a key in his pocket is passively read, and he can just walk through. It would still need a second factor for additional security, and thinking through the exception use cases; but even if we nailed it, the new scene wouldn’t give him something to nervously fumble because Rachel is there, unnerving him. That’s a really charming character moment, so let’s give it a pass for the movie.

Accessibility

A small LED would make the door more accessible, letting deaf users know whether the key has been accepted or rejected.

The printing

The key has some printing on it. It includes the set of five arrows pointing the direction the key must be inserted. Better would be a key that either uses physical constraints to make it impossible to insert the card incorrectly, or a reader built so that the card can be read in whatever orientation it is inserted.

The rest of the card has numerals printed in MICR and words printed in a derived-from-MICR font like Data70. (MICR proper has only numerals.) MICR was designed so that the blobs on the letterforms, printed in magnetic ink, would be more easily detectable by a magnetic reader. It was seen as “computery” in the 1970s and 1980s (maybe still to some degree today) but does not make a lot of sense here, when that part of the card is not available to the reader.

Privacy

Also on the key is his name, R. DECKARD. This might be useful to return the key to its rightful owner, but like the elevator passphrase, it needlessly shares personally identifiable information of its owner. A thief who found this key could do some social hacking with the name and gain access to his apartment. There is another possible solution for getting the key back to him if lost, discussed below.

The numbers underneath his name are hard to read, but a close read of the still frame and correlation across various prop recreations seem to agree it reads

015 91077
VP45 66-4020

While most of this looks like nonsense, the five-digit number in the upper right is obviously a ZIP code, which resolves to Arcadia, California, which is a city in Los Angeles county, where Blade Runner is meant to take place.

Though a ZIP code describes quite a large area, between this and the surname, it’s providing a potential identity thief too much.

Return if found?

There are also some Japanese characters and numbers on the graphic beneath his thumb. It’s impossible to read in the screen grab.

If I was consulting on this, I’d recommend—after removing the ZIP code—that this be how to return the key if it is found, so that it could be forwarded, by the company, to the owner. All the company would have to do is cross-reference the GUID on the key to the owner. It would be a nice nod to the larger world.

(Repeated for easy reference.)

The holes

You can see there are also holes punched in the card. (See the light dots in the shadow in the above still.) They must not be used in this interaction, because his thumb is covering so much of them. They might provide an additional layer of data, like the early mechanical key card systems. This doesn’t satisfy either of the other aspects of multifactor authentication, though, since it’s still part of the same physical token.

This…this is altered.

I like to think this is evidence that this card works something like a Multipass from The Fifth Element, providing identity for a wide variety of services which may have different types of readers. We just don’t see it in the film.

Security

Which brings us, as so many things do in sci-fi interfaces, back to multi-factor authentication. The door would be more secure if it required two of the three factors. (Thank you Seth Rosenblatt and Jason Cipriani for this well-worded rule of thumb.)

  • Knowledge (something the user and only the user knows)
  • Possession (something the user and only the user has)
  • Inherence (something the user and only the user is)

The key counts as a possession factor. Given the scene just before in the elevator, the second factor could be another voiceprint for inherence. It might be funny to have him say the same phrase I suggested in that post, “Have you considered life in the offworld colonies?” with more contempt or even embarrassment that he has to say something that demeaning in front of Rachel.
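As a sketch, the two-of-three check at the door is almost boring; the verifier functions below are stand-ins for a card reader and a voiceprint matcher, not anything from the film:

```python
# Sketch: require two factors before unlocking -- possession (the card key)
# plus inherence (a voiceprint match). Both verifiers are stand-ins.

def unlock_door(card_token: str, voice_sample: bytes,
                verify_card, verify_voice) -> bool:
    factors_passed = 0
    if verify_card(card_token):        # possession: something only Deckard has
        factors_passed += 1
    if verify_voice(voice_sample):     # inherence: something only Deckard is
        factors_passed += 1
    return factors_passed >= 2

# Usage with stub verifiers (a real matcher would be far more involved):
opened = unlock_door("R-DECKARD-KEY-001", b"...voice...",
                     verify_card=lambda t: t == "R-DECKARD-KEY-001",
                     verify_voice=lambda v: True)
print(opened)  # -> True
```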

Now, I’d guess most people in the audience secure their own homes simply with a key. More security is available to anyone with the money, but economics and the added steps for daily usage prevent us from adopting more. So, adding second factor, while more secure, might read to the audience as an indicator of wealth, paranoia, or of living in a surveillance state, none of which would really fit Blade Runner or Deckard. But I would be remiss if I didn’t mention it.

Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
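To make that optional-parameter idea concrete, here’s a minimal sketch of a parser for this kind of spoken command language. The verb list, grammar, and function names are my own assumptions for illustration, not anything established in the film:

```python
import re

# A minimal, hypothetical grammar for the inspector's spoken commands.
# With a numeric parameter the move is discrete ("track 45 right");
# without one, the move repeats until Deckard says "stop".
COMMAND_PATTERN = re.compile(
    r"^(?P<verb>enhance|track|pan|zoom|pull|move|center|go)"
    r"(?:\s+(?P<amount>\d+))?"
    r"(?:\s+(?P<direction>left|right|up|down|in|out|back))?$"
)

def parse_command(utterance: str) -> dict:
    """Return a structured command, or a 'stop' / unknown marker."""
    text = utterance.strip().lower()
    if text == "stop":
        return {"verb": "stop"}
    match = COMMAND_PATTERN.match(text)
    if not match:
        return {"verb": "unknown", "raw": utterance}
    amount = match.group("amount")
    return {
        "verb": match.group("verb"),
        "direction": match.group("direction"),
        # Discrete when a parameter is given; continuous otherwise.
        "mode": "discrete" if amount else "continuous",
        "amount": int(amount) if amount else None,
    }

print(parse_command("Track 45 right"))  # one discrete move
print(parse_command("Track right"))     # continuous, until "stop"
print(parse_command("Stop"))
```

Same verb, with or without the number, and the machine knows whether to make one move or keep going.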

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, such as they are

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, and had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.
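As a sketch of how those spoken cell addresses could resolve into a zoom target under this redesign, here’s one way it might work. The grid size, the row-major numbering, and the working resolution are all my own assumptions, just to show the arithmetic:

```python
# A rough sketch under an assumed addressing scheme: a 32 x 24 grid of
# cells (4:3, to match NTSC), numbered row-major from 0. A command like
# "Enhance 203 to 608" names two cells; the zoom target is the 4:3
# rectangle that bounds them.
GRID_COLS, GRID_ROWS = 32, 24
FRAME_W, FRAME_H = 640, 480   # assumed working resolution, in pixels

def cell_rect(address: int) -> tuple:
    """Pixel rectangle (x, y, w, h) of one numbered grid cell."""
    col, row = address % GRID_COLS, address // GRID_COLS
    cell_w, cell_h = FRAME_W / GRID_COLS, FRAME_H / GRID_ROWS
    return (col * cell_w, row * cell_h, cell_w, cell_h)

def zoom_extent(addr_a: int, addr_b: int) -> tuple:
    """Bounding box of the two named cells, padded out to a 4:3 target."""
    ax, ay, cw, ch = cell_rect(addr_a)
    bx, by, _, _ = cell_rect(addr_b)
    x0, y0 = min(ax, bx), min(ay, by)
    x1, y1 = max(ax, bx) + cw, max(ay, by) + ch
    w, h = x1 - x0, y1 - y0
    # Expand the shorter dimension so the target keeps the screen's 4:3 ratio.
    if w / h > 4 / 3:
        h = w * 3 / 4
    else:
        w = h * 4 / 3
    return (x0, y0, w, h)

print(zoom_extent(203, 608))  # the rectangle the display would animate toward
```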

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
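To make the tiering concrete, here’s a minimal sketch of how a renderer might choose a treatment per reconstructed surface. The source categories, the confidence threshold, and the style values are assumptions for illustration only, not a claim about how such a device would really be built:

```python
from dataclasses import dataclass

# Hypothetical render tiers for the photo inspector, keyed off how each
# patch of the scene was reconstructed. All values are illustrative.
@dataclass
class ScenePatch:
    source: str        # "camera", "reflection", "records", or "inferred"
    confidence: float  # 0.0 - 1.0, as estimated by the reconstruction

def render_style(patch: ScenePatch) -> dict:
    if patch.source == "camera":
        # Inside the original frustum: show it photo-realistically.
        return {"style": "photoreal", "outline": False, "blur": 0.0}
    if patch.confidence >= 0.8:
        # Reconstructed from the disco-ball reflection, floor plans, receipts.
        return {"style": "monochrome", "outline": True, "blur": 0.0}
    # Low-confidence guesses: monochrome, no outlines, blurred.
    return {"style": "monochrome", "outline": False, "blur": 1.0}

print(render_style(ScenePatch("camera", 1.0)))
print(render_style(ScenePatch("reflection", 0.9)))
print(render_style(ScenePatch("inferred", 0.3)))
```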

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also seriously who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue; we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just a small flat screen.


OK, so now that we have an idea of how the display should (and shouldn’t) look, let’s move on to talk about the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
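Here’s a minimal sketch of the mouse-and-mousepad logic described above, assuming the inspector can report the glass’s position and rotation on the couch arm each frame, and that vertical drone moves are handled separately as noted. Every name, gain, and unit is my own assumption:

```python
from dataclasses import dataclass

# A sketch of the ad-hoc tangible controller: the inspector tracks the
# declared control object (the whiskey glass) against the declared control
# surface (the couch arm), and pose deltas drive the virtual "drone."
@dataclass
class ObjectPose:
    x: float            # position on the control surface, in cm
    y: float
    heading: float      # rotation of the glass, in degrees
    on_surface: bool    # False while the glass is lifted

SURFACE_TO_SCENE = 25.0  # 1 cm on the couch arm = 25 cm in the virtual scene

def drone_delta(prev: ObjectPose, curr: ObjectPose) -> dict:
    """Turn glass movement into a drone move; ignore motion while lifted."""
    if not (prev.on_surface and curr.on_surface):
        # Lifting the glass works like lifting a mouse: a clutch, no motion.
        return {"dx": 0.0, "dy": 0.0, "dyaw": 0.0}
    return {
        "dx": (curr.x - prev.x) * SURFACE_TO_SCENE,
        "dy": (curr.y - prev.y) * SURFACE_TO_SCENE,
        "dyaw": curr.heading - prev.heading,  # twisting the glass turns the view
    }

# Sliding the glass 2 cm forward and twisting it 15 degrees counterclockwise:
print(drone_delta(ObjectPose(0, 0, 0, True), ObjectPose(0, 2, 15, True)))
```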

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
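A rough sketch of how gaze plus a spoken color word could resolve a deictic command like “get close to that red thing there.” The object list, coordinates, and scoring are assumptions, just to show the idea:

```python
# Pick the object nearest the gaze point that matches the spoken color.
# Everything here (object catalog, gaze coordinates) is illustrative.
def resolve_deictic_target(objects, gaze_xy, spoken_color):
    candidates = [o for o in objects if o["color"] == spoken_color]
    if not candidates:
        return None
    return min(
        candidates,
        key=lambda o: (o["x"] - gaze_xy[0]) ** 2 + (o["y"] - gaze_xy[1]) ** 2,
    )

scene_objects = [
    {"name": "red mug", "color": "red", "x": 120, "y": 300},
    {"name": "red jacket", "color": "red", "x": 500, "y": 280},
    {"name": "green lamp", "color": "green", "x": 480, "y": 90},
]
# He says "that red thing there" while looking near (470, 260):
print(resolve_deictic_target(scene_objects, gaze_xy=(470, 260), spoken_color="red"))
# -> the red jacket, because it's nearest to where he's looking
```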

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st century apartment.
Right: The convex mirror in Arnolfini’s 15th century apartment

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
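Here’s a sketch of what that unprompted face check might look like under the hood. The similarity function is a crude stand-in for a real face-embedding comparison, and every name and number is an assumption for illustration (the threshold is the kind the rewritten scene below plays with):

```python
MATCH_THRESHOLD = 0.66  # flag anything at or above this confidence

def face_similarity(face_a: set, face_b: set) -> float:
    """Stand-in comparison: overlap of feature tags, 0.0-1.0."""
    union = face_a | face_b
    return len(face_a & face_b) / len(union) if union else 0.0

def check_detected_face(detected: set, case_files: dict) -> list:
    """Flag any case-file matches worth surfacing to Deckard, unprompted."""
    alerts = []
    for name, profile in case_files.items():
        score = face_similarity(detected, profile)
        if score >= MATCH_THRESHOLD:
            alerts.append({"name": name, "confidence": round(score, 2)})
    return alerts

case_files = {
    "Zhora": {"jawline:a", "eyes:b", "nose:c"},
    "Leon":  {"jawline:d", "eyes:e", "nose:f"},
}
detected = {"jawline:a", "eyes:b", "nose:c", "scar:x"}
print(check_detected_face(detected, case_files))  # -> flags Zhora at 0.75
```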

I’ve never figured out why she has a snake tattoo here (and it seems really important to the plot), yet when Deckard finally meets her, it has disappeared.

Scene

  • Interior. Deckard’s apartment. Night.
  • Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch and places the photo on the coffee table.
  • Deckard
  • Photo inspector.
  • The machine on top of a cluttered end table comes to life.
  • Deckard
  • Let’s look at this.
  • He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector.
  • Deckard
  • OK. Anyone hiding? Moving?
  • Photo inspector
  • No and no.
  • Deckard
  • Zoom to that arm and pin to the face.
  • He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue.
  • Deckard
  • What’s the confidence?
  • Photo inspector
  • 95.
  • On the side of the screen the inspector overlays Leon’s police profile.
  • Deckard
  • Unpin.
  • Deckard lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table.
  • Deckard
  • New surface.
  • He turns the glass clockwise. The camera turns and he sees into a bedroom.
  • Deckard
  • How do we have this much inference?
  • Photo inspector
  • The convex mirror in the hall…
  • Deckard
  • Wait. Is that a foot? You said no one was hiding.
  • Photo inspector
  • The individual is not hiding. They appear to be sleeping.
  • Deckard rolls his eyes.
  • Deckard
  • Zoom to the face and pin.
  • The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face.
  • Deckard
  • That look like Zhora to you?
  • The inspector overlays her police file.
  • Photo inspector
  • 63% of it does.
  • Deckard
  • Why didn’t you say so?
  • Photo inspector
  • My threshold is set to 66%.
  • Deckard
  • Give me a hard copy right there.
  • He raises his glass and finishes his drink.

This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.

VID-PHŌN

At around the midpoint of the movie, Deckard calls Rachel from a public videophone in a vain attempt to get her to join him in a seedy bar. Let’s first look at the device, then the interactions, and finally take a critical eye to this thing.

The panel

The lower part of the panel is a set of back-lit instructions and an input panel, which consists of a standard 12-key numeric input and a “start” button. Each of these momentary pushbuttons is back-lit white and has a red outline.

In the middle-right of the panel we see an illuminated orange logo panel, bearing the Saul Bass Bell System logo and the text reading, “VID-PHŌN” in some pale yellow, custom sans-serif logotype. The line over the O, in case you are unfamiliar, is a macron, indicating that the vowel below should be pronounced as a long vowel, so the brand should be pronounced “vid-phone” not “vid-fahn.”

In the middle-left there is a red “transmitting” button (in all lower case, a rarity) and a black panel that likely houses the camera and microphone. The transmitting button is dark until he interacts with the 12-key input, see below.

At the top of the panel, a small cathode-ray tube screen at face height displays data before and after the call as well as the live video feed during the call. All the text on the CRT is in a fixed-width typeface. A nice bit of worldbuilding sees this screen covered in Sharpie graffiti.

The interaction

His interaction is straightforward. He approaches the nook and inserts a payment card. In response, the panel—including its instructions and buttons—illuminates. A confirmation of the card holder’s identity appears in the upper left of the CRT, i.e. “Deckard, R.,” along with his phone number, “555-6328” (Fun fact: if you misdialed those last four numbers you might end up talking to the Ghostbusters) and some additional identifying numbers.

A red legend at the bottom of the CRT prompts him to “PLEASE DIAL.” It is outlined with what look like ASCII box-drawing characters. He presses the START button and then dials “555-7583” on the 12-key. As soon as the first number is pressed, the “transmitting” button illuminates. As he enters digits, they are simultaneously displayed for him on screen.

His hands are not in-frame as he commits the number and the system calls Rachel. So it’s hard to say whether he pressed an enter key (# or *, perhaps), or whether the system simply recognizes that he’s entered seven digits.

After their conversation is complete, her live video feed goes blank, and TOTAL CHARGE $1.25 is displayed for his review.

Chapter 10 of the book Make It So: Interaction Design Lessons from Science Fiction is dedicated to Communication, and in this post I’ll use the framework I developed there to review the VID-PHŌN, with one exception: this device is public and Deckard has to pay to use it, so he has to specify a payment method, and then the system will report back total charges. That wasn’t in the original chapter and in retrospect, it should have been.

Ergonomics

Turns out this panel is just the right height for Deckard. How do people of different heights or seated in a wheelchair fare? It would be nice if it had some apparent ability to adjust for various body heights. Similarly, I wonder how it might work for differently-abled users, but of course in cinema we rarely get to closely inspect devices for such things.

Activating

Deckard has to insert a payment card before the screen illuminates. It’s nice that the activation entails specifying payment, but how would someone new to the device know to do this? At the very least there should be some illuminated call to action like “insert payment card to begin,” or better yet some iconography so there is no language dependency. Then when the payment card was inserted, the rest of the interface can illuminate and act as a sort of dial-tone that says, “OK, I’m listening.”

Specifying a recipient: Unique Identifier

In Make It So, I suggest five methods of specifying a recipient: fixed connection, operator, unique identifier, stored contacts, and global search. Since this interaction is building on the experience of using a 1982 public pay phone, the 7-digit identifier quickly helps audiences familiar with American telephone standards understand what’s happening. Even if Scott had foreseen the phone explosion that led in 1994 to the ten-digit-dialing standard, or the 2053 events that led to the thirteen-digit-dialing standard, a longer number would likely have confused audiences and slightly risked the read of this scene. It’s forgivable.

Page 204–205 in the PDF and dead tree versions.

I have a tiny critique about the transmitting button. It should only turn on once he’s finished entering the phone number. That way they’re not wasting bandwidth on his dialing speed or on misdials. Let the user finish, review, correct if they need to, and then send. But, again, this is 1982 and direct entry is the way phones worked. If you misdialed, you had to hang up and start over again. Still, I don’t think having the transmitting button light up after he entered the 7th digit would have caused any viewers to go all hruh?
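Here’s a minimal sketch of that fix: buffer the digits locally and only light the transmitting lamp (and actually place the call) once the seventh digit is in. Class and method names are hypothetical, for illustration only:

```python
# A sketch of the suggested dialing behavior: local buffering, a chance to
# back up over a misdial, and "transmitting" only once the number is whole.
class DialPad:
    NUMBER_LENGTH = 7  # 1982-style local dialing

    def __init__(self):
        self.digits = []
        self.transmitting = False

    def press(self, digit: str) -> None:
        if self.transmitting or not digit.isdigit():
            return
        self.digits.append(digit)
        if len(self.digits) == self.NUMBER_LENGTH:
            self.transmitting = True  # light the lamp and send, all at once

    def correct(self) -> None:
        """Back up over a misdial instead of hanging up and starting over."""
        if not self.transmitting and self.digits:
            self.digits.pop()

pad = DialPad()
for d in "5557583":
    pad.press(d)
print(pad.transmitting, "".join(pad.digits))  # True 5557583
```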

There are important privacy questions about displaying a recipient’s number in a way that any passer-by can see. Better would have been to mount the input and the contact display on a transverse panel where he could enter and confirm it with little risk of lookie-loos and identity thieves.

Audio & Video

Hopefully, when Rachel received the call, she was informed who it was and that the call was coming from a public video phone. Hopefully it also provided controls for only accepting the audio, in case she was not camera-ready, but we don’t see things from her side in this scene.

Gaze correction is usually needed in video conversation systems since each participant naturally looks at the center of the screen and not at the camera lens mounted somewhere next to its edge. Unless the camera is located in the center of the screen (or in the center of the other person’s image on the screen), people would not be “looking” at the other person as is almost always portrayed. Instead, their gaze would appear slightly off-screen. This is a common trope in cinema, but one we’ve become increasingly literate about, as many of us are working from home much more and gaining experience with videoconferencing systems, so it’s beginning to strain suspension of disbelief.

Also how does the sound work here? It’s a noisy street scene outside of a cabaret. Is it a directional mic and directional speaker? How does he adjust the volume if it’s just too loud? How does it remain audible yet private? Small directional speakers that followed his head movements would be a lovely touch.

And then there’s video privacy. If this were the real world, it would be nice if the video had a privacy screen filter. That would have the secondary effect of keeping his head in the right place for the camera. But that is difficult to show cinematically, so it wouldn’t work for a movie.

Ending the call

Rachel leans forward to press a button on her home video phone to end her part of the call. Presumably Deckard has a similar button to press on his end as well. He should be able to just yank his card out, too.

The closing screen is a nice touch, though total charges may not be the most useful thing. Are VID-PHŌN calls a fixed price? Then this information is not really of use to him after the call as much as it is beforehand. If the call has a variable cost, depending on long distance and duration, for example, then he would want to know the charges as the call is underway, so he can wrap things up if it’s getting too expensive. (Admittedly the Bell System wouldn’t want that, so it’s sensible worldbuilding to omit it.) Also if this is a pre-paid phone card, seeing his remaining balance would be more useful.

But still, the point of the $1.25 total charge was to future-shock audiences of the time, since a public phone call in the United States then cost $0.10. His remaining balance wouldn’t have shown that, and so wouldn’t have had the desired effect. Maybe both? It might have been a cool bit of worldbuilding and callback to build on that shock by following the outrageous price with “Get this call free! Watch a video of life in the offworld colonies! Press START and keep your eyes ON THE SCREEN.”

Because the world just likes to hurt Deckard.

Replicants and riots

Much of my country has erupted this week, with the senseless, brutal, daylight murder of George Floyd (another in a long, wicked history of murdering black people), resulting in massive protests around the world, false-flag inciters, and widespread police brutality, all while we are still in the middle of a global pandemic and our questionably-elected president is trying his best to use it as his pet Reichstag fire to declare martial law, or at the very least some new McCarthyism. I’m not in a mood to talk idly about sci-fi. But then I realized this particular post perfectly—maybe eerily—echoes themes playing out in the real world. So I’m going to work out some of my anger and frustration at the ignorant de-evolution of my country by pressing on with this post.

Part of the reason I chose to review Blade Runner is that the blog is wrapping up its “year” dedicated to AI in sci-fi, and Blade Runner presents a vision of General AI. There are several ways to look at and evaluate Replicants.

First, what are they?

If you haven’t seen the film, replicants are described as robots that have been evolved to be virtually identical to humans. Tyrell, the company that makes them, has a motto bragging that they are “More human than human.” They look human. They act human. They feel. They bleed. They kiss. They kill. They grieve their dead. They are more agile and stronger than humans, and approach the intelligence of their engineers (so, you know, smart). (Oh, and there are animal replicants, too: a snake and an owl in the film are described as artificial.)

Most important to this discussion is that the opening crawl states very plainly that “Replicants were used Off-world as slave labor, in the hazardous exploration and colonization of other planets.” The four murderous replicants we meet in the film are rebels, having fled their off-world colony and come to Earth in search of a way to cure themselves of their planned obsolescence.

Replicants as (Rossum) robots

The intro to Blade Runner explains that they were made to perform dangerous work in space. Let’s put the question of their sentience on hold for a bit and just regard them as machines to do work for people. In this light, why were they designed to be so physically similar to humans? Humans evolved for a certain kind of life on a certain kind of planet, and outer space is certainly not that. While there is some benefit to replicants’ being able to easily use the same tools that humans do, real-world industry has had little problem building earthbound robots that are more fit to task: round Roombas, boom-arm robots for factory floors, and large cuboid harvesting robots. The opening crawl indicates there was a time when replicants were allowed on Earth, but after a bloody mutiny, having them on Earth was made illegal. So perhaps that human form made some sense when they were directly interacting with humans, but once they were meant to stay off-world, it was stupid design for Tyrell to leave them so human-like. They should have been redesigned with forms more suited to their work. The decision to make them human-like makes it easy for dangerous ones to infiltrate human society. We wouldn’t have had the Blade Runner problem if replicants were space Roombas. I have made the case that too-human technology in the real world is unethical to the humans involved, and it is no different here.

Their physical design is terrible. But it’s not just their physical design; they are an artificial intelligence, so we have to think through the design of that intelligence, too.

Replicants as AGI

Replicant intelligence is very much like ours. (The exception is that their emotional responses are—until the Rachel “experiment”—quite stunted for lack of experience in the world.) But why? If their sole purpose is the exploration and colonization of new planets, why does that require human-like intelligence? The AGI question is: Why were they designed to be so intellectually similar to humans? They’re not alone in space. There are humans nearby supervising their activity and even occupying the places they have made habitable. So they wouldn’t need to solve problems like humans would in their absence. If they ran into a problem they could not handle, they could have been made to stop and ask their humans for solutions.

I’ve spoken before and I’ll probably speak again about overengineering artificial sentiences. A toaster should just have enough intelligence to be the best toaster it can be. Much more is not just a waste, it’s kind of cruel to the AI.

The general intelligence with which replicants were built was a terrible design decision. But by the time this movie happens, that ship has sailed.

Here we’re necessarily going to dispense with replicants as technology or interfaces, and discuss them as people.

Replicants as people

I trust that sci-fi fans have little problem with this assertion. Replicants are born and they die, display clear interiority, and have a sense of self, mortality, and injustice. The four renegade “skinjobs” in the film are aware of their oppression and work to do something about it. Replicants are a class of people treated separately by law, engineered by a corporation for slave labor, and forbidden to come to a place where they might find a cure for their premature deaths. The film takes great pains to set them up as bad guys, but this is Philip K. Dick via Ridley Scott, and of course things are more complicated than that.

Here I want to encourage you to go read Sarah Gailey’s 2017 read of Blade Runner over on Tor.com. In short, she notes that the murder of Zhora was particularly abhorrent. Zhora’s crime was being part of a slave class that had broken the law in immigrating to Earth. She had assimilated, gotten a job, and was neither hurting people nor finagling her way to bully her maker for some extra life. Despite her impending death, she was just…working. But when Deckard found her, he chased her and shot her in the back while she was running away. (Part of the joy of Gailey’s posts is the language, so even with my summary I still encourage you to go read it.)

Gailey is a focused (and Hugo-award-winning) writer, where I tend to be exhaustive and verbose. So I’m going to add some stuff to their observation. It’s true, we don’t see Zhora committing any crime on screen, but early in the film, as Deckard is being briefed on his assignment, Bryant explains that the replicants “jumped a shuttle off-world. They killed the crew and passengers.” Later Bryant clarifies that they slaughtered 23 people. It’s possible that Zhora was an unwitting bystander in all that, but I think that’s stretching credibility. Leon murders Holden. He and Roy terrorize Hannibal Chew just for the fun of it. They try their damnedest to murder Deckard. We see Pris seduce, manipulate, and betray Sebastian. Zhora was “trained for an off-world kick [sic] murder squad.” I’d say the evidence was pretty strong that they were all capable and willing to commit desperate acts, including that 23-person slaughter. But despite all that, I still don’t want to say Zhora was just a murderer who got what she deserved. Gailey is right. Deckard was not right to just shoot her in the back. It wasn’t self-defense. It wasn’t justice. It was a street murder.

Honestly I’m beginning to think that this film is about this moment.

The film doesn’t mention the slavery past the first few scenes. But it’s the defining circumstance of the entirety of their short lives just prior to when we meet them. Imagine learning that there was some secret enclave of Methuselahs who lived, on average, to be 1000 years old. As you learn about them, you learn that we regular humans have been engineered for their purposes. You could live to be 1000, too, except they artificially shorten your lifespan to ensure control, to keep you desperate and productive. You learn that the painful process of aging is just a failsafe so you don’t get too uppity. You learn that every one of your hopes and dreams that you thought were yours was just an output of an engineering department, to ensure that you do what they need you to do, to provide resources for their lives. And when you fight your way to their enclave, you discover that every one of them seems to hate and resent you. They hunt you so their police department doesn’t feel embarrassed that you got in. That’s what the replicants are experiencing in Blade Runner. I hope that brings it home to you.

I don’t condone violence, but I understand where the fury and the anger of the replicants comes from. I understand their need to want to take action, to right the wrongs done to them. To fight, angrily, to end their oppression. But what do you do if it’s not one bad guy who needs to be subdued, but whole systems doing the oppressing? When there’s no convenient Death Star to explode and make everything suddenly better? What were they supposed to do when corporations, laws, institutions, and norms were all hell-bent on continuing their oppression? Just keep on keepin’ on? Those systems were the villains of the diegesis, though they don’t get named explicitly by the movie.


And obviously, that’s where it feels very connected to the Black Lives Matter movement and the George Floyd protests. Here is another class of people who have been wildly oppressed by systems of government, economics, education, and policing in this country—for centuries. And in this case, there is no 23-person shuttle that we need to hem and haw over.

In “The Weaponry of Whiteness, Entitlement, and Privilege” by Drs. Tammy E Smithers and Doug Franklin, the authors note that “Today, in 2020, African-Americans are sick and tired of not being able to live. African-Americans are weary of not being able to breathe, walk, or run. Black men in this country are brutalized, criminalized, demonized, and disproportionately penalized. Black women in this country are stigmatized, sexualized, and labeled as problematic, loud, angry, and unruly. Black men and women are being hunted down and shot like dogs. Black men and women are being killed with their face to the ground and a knee on their neck.”

We must fight and end systemic racism. Returning to Dr. Smithers and Dr. Franklin’s words we must talk with our children, talk with our friends, and talk with our legislators. I am talking to you.

If you can have empathy toward imaginary characters, then you sure as hell should have empathy toward other real-world people with real-world suffering.

Black lives matter.

Take action.

Use this sci-fi.