At around the midpoint of the movie, Deckard calls Rachel from a public videophone in a vain attempt to get her to join him in a seedy bar. Let’s first look at the device, then the interactions, and finally take a critical eye to this thing.
The lower part of the panel is a set of back-lit instructions and an input panel, which consists of a standard 12-key numeric input and a “start” button. Each of these momentary pushbuttons is back-lit white and has a red outline.
In the middle-right of the panel we see an illuminated orange logo panel, bearing the Saul Bass Bell System logo and the text reading, “VID-PHŌN” in some pale yellow, custom sans-serif logotype. The line over the O, in case you are unfamiliar, is a macron, indicating that the vowel below should be pronounced as a long vowel, so the brand should be pronounced “vid-phone” not “vid-fahn.”
In the middle-left there is a red “transmitting” button (in all lower case, a rarity) and a black panel that likely houses the camera and microphone. The transmitting button is dark until he interacts with the 12-key input, see below.
At the top of the panel, a small cathode-ray tube screen at face height displays data before and after the call as well as the live video feed during the call. All the text on the CRT is in a fixed-width typeface. A nice bit of worldbuilding sees this screen covered in Sharpie graffiti.
His interaction is straightforward. He approaches the nook and inserts a payment card. In response, the panel—including its instructions and buttons—illuminates. A confirmation of the card holder’s identity appears in the upper left of the CRT, i.e. “Deckard, R.,” along with his phone number, “555-6328” (Fun fact: if you misdialed those last four numbers you might end up talking to the Ghostbusters) and some additional identifying numbers.
A red legend at the bottom of the CRT prompts him to “PLEASE DIAL.” It is outlined with what look like ASCII box-drawing characters. He presses the START button and then dials “555-7583” on the 12-key. As soon as the first number is pressed, the “transmitting” button illuminates. As he enters digits, they are simultaneously displayed for him on screen.
His hands are not in-frame as he commits the number and the system calls Rachel. So whether he pressed an enter key, #, or *; or the system just recognizes he’s entered seven digits is hard to say.
After their conversation is complete, her live video feed goes blank, and “TOTAL CHARGE $1.25” is displayed for his review.
Chapter 10 of the book Make It So: Interaction Design Lessons from Science Fiction is dedicated to Communication, and in this post I’ll use the framework I developed there to review the VID-PHŌN, with one exception: this device is public and Deckard has to pay to use it, so he has to specify a payment method, and then the system will report back total charges. That wasn’t in the original chapter and in retrospect, it should have been.
Turns out this panel is just the right height for Deckard. How do people of different heights or seated in a wheelchair fare? It would be nice if it had some apparent ability to adjust for various body heights. Similarly, I wonder how it might work for differently-abled users, but of course in cinema we rarely get to closely inspect devices for such things.
Deckard has to insert a payment card before the screen illuminates. It’s nice that the activation entails specifying payment, but how would someone new to the device know to do this? At the very least there should be some illuminated call to action like “insert payment card to begin,” or better yet some iconography so there is no language dependency. Then, when the payment card is inserted, the rest of the interface can illuminate and act as a sort of dial tone that says, “OK, I’m listening.”
Specifying a recipient: Unique Identifier
In Make It So, I suggest five methods of specifying a recipient: fixed connection, operator, unique identifier, stored contacts, and global search. Since this interaction is building on the experience of using a 1982 public pay phone, the 7-digit identifier quickly helps audiences familiar with American telephone standards understand what’s happening. So even if Scott had foreseen the phone explosion that led in 1994 to the ten-digit-dialing standard, or the 2053 events that led to the thirteen-digit-dialing standard, using those would likely have confused audiences and risked the read of this scene. It’s forgivable.
I have a tiny critique over the transmitting button. It should only turn on once he’s finished entering the phone number. That way they’re not wasting bandwidth on his dialing speed or on misdials. Let the user finish, review, correct if they need to, and then send. But, again, this is 1982 and direct entry is the way phones worked. If you misdialed, you had to hang up and start over again. Still, I don’t think having the transmitting button light up after he entered the 7th digit would have caused any viewers to go all hruh?
There are important privacy questions to displaying a recipient’s number in a way that any passer-by can see. Better would have been to mount the input and the contact display on a transverse panel where he could enter and confirm it with little risk of lookie-loos and identity thieves.
Audio & Video
Hopefully, when Rachel received the call, she was informed who it was and that the call was coming from a public video phone. Hopefully it also provided controls for only accepting the audio, in case she was not camera-ready, but we don’t see things from her side in this scene.
Gaze correction is usually needed in video conversation systems since each participant naturally looks at the center of the screen and not at the camera lens mounted somewhere next to its edge. Unless the camera is located in the center of the screen (or the other person’s image on the screen), people would not be “looking” at the other person as is almost always portrayed. Instead, their gaze would appear slightly off-screen. This is a common trope in cinema, but one in which we’ve become increasingly literate, as many of us are working from home much more and gaining experience with videoconferencing systems, so it’s beginning to strain suspension of disbelief.
Also how does the sound work here? It’s a noisy street scene outside of a cabaret. Is it a directional mic and directional speaker? How does he adjust the volume if it’s just too loud? How does it remain audible yet private? Small directional speakers that followed his head movements would be a lovely touch.
And then there’s video privacy. If this were the real world, it would be nice if the video had a privacy screen filter. That would have the secondary effect of keeping his head in the right place for the camera. But that is difficult to show cinematically, so it wouldn’t work for a movie.
Ending the call
Rachel leans forward to press a button on her home video phone to end her part of the call. Presumably Deckard has a similar button to press on his end as well. He should be able to just yank his card out, too.
The closing screen is a nice touch, though total charges may not be the most useful thing. Are VID-PHŌN calls a fixed price? Then this information is not really of use to him after the call as much as it is beforehand. If the call has a variable cost, depending on long distance and duration, for example, then he would want to know the charges as the call is underway, so he can wrap things up if it’s getting too expensive. (Admittedly the Bell System wouldn’t want that, so it’s sensible worldbuilding to omit it.) Also if this is a pre-paid phone card, seeing his remaining balance would be more useful.
But still, the point was that the total charge of $1.25 was meant to future-shock audiences of the time, since public phone calls in the United States then cost $0.10. A remaining balance wouldn’t have shown that, and so wouldn’t have had the desired effect. Maybe both? It might have been a cool bit of worldbuilding and a callback to build on that shock by following that outrageous price with “Get this call free! Watch a video of life in the offworld colonies! Press START and keep your eyes ON THE SCREEN.”
Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.
Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home with him. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it into a black device he keeps in his living room.
Note: I’ll try and describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.
He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems useless and an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.
After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”
In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.
A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.
Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”
Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”
Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.
I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
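This with-or-without-a-parameter grammar is simple enough to sketch. Here is a minimal, hypothetical parser (the film never shows the inspector’s internals; the function and token names are my own) that distinguishes discrete commands like “Track 45 right” from continuous ones like “Track right”:

```python
# Hypothetical sketch of the inspector's command grammar. A command with a
# numeric parameter ("Track 45 right") is a discrete, bounded move; one
# without a parameter ("Track right") runs continuously until "stop".

def parse_command(utterance):
    """Return (verb, amount, direction); amount is None for continuous mode."""
    tokens = utterance.lower().split()
    verb = tokens[0]
    amount = None
    direction = None
    for tok in tokens[1:]:
        if tok.isdigit():
            amount = int(tok)   # a spoken number becomes the parameter
        else:
            direction = tok
    return verb, amount, direction

def is_continuous(parsed):
    """Commands without a parameter keep running until told to stop."""
    verb, amount, _ = parsed
    return amount is None and verb != "stop"
```

With a parameter, the camera would execute one bounded move; without one, the controller would loop the motion each frame until it hears “stop.”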
But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…
Some critiques, as it is
Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
It sure would be nice if any of the numbers on screen made sense, and had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
And if he’s memorized it, why show the overlay at all?
Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
Why is the printed picture so unlike the still image where he asks for a hard copy?
Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
How might it be improved for 1982?
So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…
Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.
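One way to make those spoken cell addresses work is spreadsheet-style labeling: lettered columns, numbered rows, and a zoom defined by any two named cells. A small sketch of that scheme, where the grid dimensions and function names are entirely my own illustrative assumptions:

```python
# Hypothetical cell-addressing scheme for the proposed grid overlay.
# Columns are lettered A, B, C... and rows numbered 1, 2, 3..., so "B2"
# names the cell in the second column, second row of a 4:3 screen.

def cell_rect(name, cols=10, rows=8, width=640, height=480):
    """Return (x, y, w, h) of a named cell in screen coordinates."""
    col = ord(name[0].upper()) - ord("A")
    row = int(name[1:]) - 1
    cw, ch = width / cols, height / rows
    return (col * cw, row * ch, cw, ch)

def zoom_extent(a, b, **kwargs):
    """Bounding rectangle spanning two named cells, e.g. "A1" to "B2"."""
    ax, ay, cw, ch = cell_rect(a, **kwargs)
    bx, by, _, _ = cell_rect(b, **kwargs)
    x0, y0 = min(ax, bx), min(ay, by)
    return (x0, y0, max(ax, bx) + cw - x0, max(ay, by) + ch - y0)
```

A command like “Enhance A1 to B2” then resolves to a well-defined rectangle, no pixel-address memorization required.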
With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.
The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.
How might it be improved for 2020?
What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.
With that in mind, let’s talk about the display.
To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.
If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.
The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.
In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.
This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
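To make the tiers concrete, here is one hypothetical mapping from inference confidence to render treatment. The thresholds and style names are illustrative choices of mine, not anything shown in the film:

```python
# Sketch: map a confidence score to one of the tiered render treatments
# described above. Directly observed pixels always render photorealistically;
# everything outside the frustum degrades with the algorithm's confidence.

def render_style(confidence, in_frustum):
    if in_frustum:
        return "photorealistic"              # observed, not inferred
    if confidence >= 0.8:
        return "monochrome, solid outlines"  # high-confidence inference
    if confidence >= 0.4:
        return "monochrome, no outlines"     # weaker inference
    return "blurred"                         # low-confidence guess
```

The point is the shape of the mapping, not the specific cutoffs: several legible visual tiers, so Deckard can instantly calibrate his trust.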
Flat screen or volumetric projection?
Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.
…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue, we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, and so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just keep it a small flat screen.
OK, now that we have an idea about how the display should (and shouldn’t) look, let’s move on to talk about the inputs.
To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.
As a manual tool, this is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.
We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.
Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.
One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?
Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.
In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.
This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
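The mouse-like clutching behavior can be sketched as follows. The class and its tracking API are my own invention, standing in for whatever computer vision the inspector would actually use:

```python
# Sketch of the declared-control-object idea: track an everyday object
# (the whiskey glass) on a declared surface (the couch arm) and map its
# movement deltas to virtual-camera motion. Lifting the object "clutches,"
# like lifting a mouse: airborne motion is ignored, and setting it back
# down elsewhere causes no jump in the virtual camera.

class ControlObject:
    def __init__(self, scale=10.0):
        self.scale = scale        # surface-to-scene movement gain
        self.last = None          # last tracked (x, y); None while lifted
        self.camera = [0.0, 0.0]  # virtual camera position

    def update(self, position):
        """Feed one tracked position per frame; None means lifted."""
        if position is None or self.last is None:
            self.last = position  # (re)establish contact, no motion
            return self.camera
        self.camera[0] += (position[0] - self.last[0]) * self.scale
        self.camera[1] += (position[1] - self.last[1]) * self.scale
        self.last = position
        return self.camera
```

Because only on-surface deltas move the camera, the small couch arm can drive the much larger virtual space, exactly like a mouse and mousepad.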
Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.
Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.
Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.
There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.
Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.
Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”
All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.
To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.
It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
Interior. Deckard’s apartment. Night.
Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch, places the photo on the coffee table and says “Photo inspector?” The machine on top of a cluttered end table comes to life. Deckard continues, “Let’s look at this.” He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left.

Deckard pours a few fingers of whiskey into the glass. He takes a drink and says, “Controller,” before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector. He says, “OK. Anyone hiding? Moving?” The inspector replies, “No and no.”

Deckard looks at the screen and says, “Zoom to that arm and pin to the face.” He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue. He asks, “What’s the confidence?” The inspector replies, “95.” On the side of the screen the inspector overlays Leon’s police profile.

Deckard says, “unpin” and lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table. “New surface,” he says, and turns the glass clockwise. The camera turns and he sees into a bedroom. “How do we have this much inference?” he asks. The inspector replies, “The convex mirror in the hall…” Deckard interrupts, saying, “Wait. Is that a foot? You said no one was hiding.” The inspector replies, “The individual is not hiding. They appear to be sleeping.” Deckard rolls his eyes. He says, “Zoom to the face and pin.” The view zooms to the face, but the camera is level with her chin, making it hard to make out the face.
Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face. Deckard says, “That look like Zhora to you?” The inspector overlays her police file and replies, “63% of it does.” Deckard says, “Why didn’t you say so?” The inspector replies, “My threshold is set to 66%.” Deckard says, “Give me a hard copy right there.” He raises his glass and finishes his drink.
This scene keeps the texture and tone of the original, and camps on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.
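The threshold behavior that drives the scene’s punchline can be sketched in a few lines. The 66% figure comes from the dialogue above; the function and its name are my own framing:

```python
# Sketch: an agentive inspector volunteers a face match unprompted only
# when similarity clears its confidence threshold; below that, the match
# surfaces only when the user asks about it directly.

def volunteers_match(similarity, threshold=0.66):
    return similarity >= threshold

# Leon's 95% match is announced unprompted; Zhora's 63% stays silent
# until Deckard asks, "That look like Zhora to you?"
```

Tuning that one number is what keeps the Narrow AI helpful without stealing the detective’s job, which is the whole writing trick.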
Another incidental interface is the pregnancy test that Joe finds in the garbage. We don’t see how the test is taken, which would be critical when considering its design. But we do see the results display in the orange light of Joe and Beth’s kitchen. It’s a cartoon baby with a rattle, swaying back and forth.
Sure it’s cute, but let’s note that the news of a pregnancy is not always good news. If the pregnancy is not welcome, the “Lucky you!” graphic is just going to rip her heart out. Much better is an unambiguous but neutral signal.
That said, Black Mirror is all about ripping our hearts out, so the cuteness of this interface is quite fitting to the world in which this appears. Narratively, it’s instantly recognizable as a pregnancy test, even to audience members who are unfamiliar with such products. It also sets up the following scene where Joe is super happy at the news, but Beth is upset that he’s seen it. So, while it’s awful for the real world, for the show, this is perfect.
After Joe confronts Beth and she calls for help, Joe is taken to a police station where in addition to the block, he now has a GPS-informed restraining order against him.
To confirm the order, Joe has to sign his name to a paper and then press his thumbprints into rectangles along the bottom. The design of the form is well done, with a clearly indicated spot for his signature, and large touch areas in which he might place his thumbs for his thumbprints to be read.
A scary thing in the interface is that the text of what he’s signing is still appearing while he’s providing his thumbprints. Of course the page could be on a loop that erases and redisplays the text repeatedly for emphasis. But, if it was really downloading and displaying it for the first time to draw his attention, then he has provided his signature and thumbprints too early. He doesn’t yet know what he’s signing.
Government agencies work like this all the time and citizens comply because they have no choice. But ideally, if he tried to sign or place his thumbprints before seeing all the text of what he’s signing, it would be better for the interface to reject his signature with a note that he needs to finish reading the text before he can confirm he has read and understands it. Otherwise, if the data shows that he authenticated it before the text appeared, I’d say he had a pretty good case to challenge the order in court.
Virtual Greta has a console to perform her slavery duties. Matt explains what this means right after she wakes up by asking her how she likes her toast. She answers, “Slightly underdone.”
He puts slices of bread in a toaster and instructs her, “Think about how you like it, and just press the button.”
She asks, incredulously, “Which one?” and he explains, “It doesn’t matter. You already know you’re making toast. The buttons are symbolic mostly, anyway.”
She cautiously approaches the console and touches a button in the lower left corner. In response, the toaster drops the carriage lever and begins toasting.
“See?” he asks, “This is your job now. You’re in charge of everything here. The temperature. The lighting. The time the alarm clock goes off in the morning. If there’s no food in the refrigerator, you’re in charge of ordering it.”
EYE-LINK is an interface used between a person at a desktop who uses support tools to help another person who is live “in the field” using Zed-Eyes. The working relationship between the two is very like Vika and Jack in Oblivion, or like the A.I. in Sight.
In this scene, we see EYE-LINK used by a pick-up artist, Matt, who acts as a remote “wingman” for pick-up student Harry. Matt has a group video chat interface open with paying customers eager to lurk, comment, and learn from the master.
Harry wears a hidden camera and microphone. This is the only tech he seems to have on him, only hearing his wingman’s voice, and only able to communicate back to his wingman by talking generally, talking about something he’s looking at, or using pre-arranged signals.
Tap your beer twice if this is more than a little creepy.
On his desktop, Matt has a smaller transparent information panel for automated analysis, research, and advice, plus an extra, laptop-like screen where he leads a group video chat with a paying audience, who watch and snarkily comment on the wingman scenario. It seems likely that this last screen is not an official part of the EYE-LINK software.
In the prior three posts, I’ve discussed the goods-and-bads of the Eye of Agamotto in the Tibet mode. (I thought I could squeeze the Hong Kong and the Dark Dimension modes into one post, but it turns out this one was just too long. Keep reading; you’ll see.) In this post we examine a mode that looks like the Tibet mode, but is actually quite different.
Hong Kong mode
Near the film’s climax, Strange uses the Eye to reverse Kaecilius’ destruction of the Hong Kong Sanctum Sanctorum (and much of the surrounding cityscape). In this scene, Kaecilius leaps at Strange, and Strange “freezes” Kaecilius in midair with the saucer. It’s done more quickly, but similarly to how he “freezes” the apple into a controlled-time mode in Tibet.
But then we see something different, and it complicates everything. As Strange twists the saucer counterclockwise, the cityscape around him—not just Kaecilius—begins to reverse slowly. (And unlike in Tibet, the saucer keeps spinning clockwise underneath his hand.) Then the rate of reversal accelerates, and even continues in its reversal after Strange drops his gesture and engages in a fight with Kaecilius, who somehow escapes the reversing time stream to join Strange and Mordo in the “observer” time stream.
So in this mode, the saucer is working much more like a shuttle wheel with no snap-back feature.
A shuttle wheel, as you’ll recall from the first post, doesn’t specify an absolute value along a range like a jog dial does. A shuttle wheel indicates a direction and rate of change. A little to the left is slow reverse. Far to the left is fast reverse. Nearly all of the shuttle wheels we use in the real world have snap-back features, because if you were just going to leave it reversing and pay attention to something else, you might as well use another control, like a jog dial, to get to the absolute beginning. But since Strange is scrubbing an endless “video stream” (that is, time), and he can pull people and things out of the manipulated stream and into the observer stream and do stuff, not having a snap-back makes sense.
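To make the contrast concrete, here’s a minimal sketch (with invented names and units) of the two control semantics: a jog dial maps its angle directly to an absolute position, while a shuttle wheel sets a rate that keeps integrating until it is changed, which is exactly the no-snap-back behavior described above.

```python
class JogDial:
    """Maps dial angle directly to an absolute position along a range."""
    def __init__(self, length):
        self.length = length

    def position_for(self, angle_fraction):
        # angle_fraction in [0.0, 1.0] maps straight onto the range.
        return angle_fraction * self.length


class ShuttleWheel:
    """Maps deflection to a *rate* of change; position integrates over time."""
    def __init__(self, position=0.0):
        self.position = position
        self.rate = 0.0  # units per second; negative = reverse

    def set_deflection(self, deflection):
        # Deflection in [-1.0, 1.0]; farther from center = faster.
        # No snap-back: the rate persists until explicitly changed.
        self.rate = deflection * 10.0

    def tick(self, dt):
        # Position keeps drifting even with no further input.
        self.position += self.rate * dt
        return self.position
```

With no snap-back, releasing the wheel leaves the stream reversing at the last-set rate, which is what we see Strange do while he fights.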
For the Tibet mode I argued for a chapter ring to provide some context and information about the range of values he’s scrubbing. So for shuttling along the past in the Hong Kong mode, I don’t think a chapter ring or content overview makes sense, but it would help to know the following.
- The rate of change
- The direction of change
- The timedate difference from when he started
In the scene, that information is fairly obvious from the environment, so I can see the argument for not having it. But if he were in some largely unchanging environment, like a panic room, an underground cave, or a Sanctum Sanctorum, knowing that information would save him from letting the shuttle run too far and finding himself in the Ordovician. A “home” button might also help him quickly recover from mistakes. Adding these signals would also help distinguish the two modes. They work differently, so they should look different. As it stands, they look identical.
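As a sketch of what such a readout might track, here’s a hypothetical model of the three signals plus a “home” control. All names, units, and the formatting are assumptions, not anything shown on screen.

```python
from datetime import timedelta

class TimeScrubber:
    """Tracks rate, direction, and net timedate offset for a shuttle-style
    time scrub, with a 'home' control to recover from overshooting."""
    def __init__(self):
        self.rate = 0.0    # diegetic seconds per observer second; negative = reverse
        self.offset = 0.0  # net seconds away from the starting moment

    def tick(self, dt):
        # Integrate the current rate over observer time.
        self.offset += self.rate * dt

    def readout(self):
        # The three signals argued for above, in one line.
        direction = "REVERSE" if self.rate < 0 else "FORWARD"
        return f"{direction} x{abs(self.rate):g} | offset {timedelta(seconds=self.offset)}"

    def home(self):
        # Snap straight back to the starting moment.
        self.rate = 0.0
        self.offset = 0.0
```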
He still (probably) needs future branches
Can Strange scrub the future this way? We don’t see it in the movie. But if so, we have many of the same questions as the Tibet mode future scrubber: Which timeline are we viewing & how probable is it? What other probabilities exist and how does he compare them? This argues for the addition of the future branches from that design.
Selecting the mode
So how does Strange specify the jog dial or shuttle wheel mode?
One cop-out answer is a mental command from Strange. It’s a cop-out because if the Eye responds to mental commands, this whole design exercise is moot, and we’re here to critique, practice, and learn. Not only that, but physical interfaces are more cinegenic, so it’s better to make a concrete interaction for the film.
You might think we could modify the opening finger-tut (see the animated gif, below). But it turns out we need that for another reason: specifying the center and radius-of-effect.
Center and radius-of-effect
In Tibet, the Eye appears to affect just an apple and a tome. But since we see it affecting a whole area in Hong Kong, let’s presume the Eye affects time in a sphere. For the apple and tome, it was affecting a small sphere that included the table, too; it’s just that the table didn’t change in the spans of time we see. So if it works in spheres, how are the center and the radius of the sphere set?
Let’s say the Eye does some simple gaze monitoring to find the salient object at his locus of attention. Then it can center the effect on that thing and automatically set the radius of effect to the thing’s size across likely-to-be-scrubbed extents. In Tibet, it’s easy. Apple? Check. Tome? Check. In Hong Kong, he’s focusing on the Sanctum, and its image recognition is smart enough to understand the concept of “this building.”
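The auto-targeting idea is just geometry once the gazed-at object is identified: center the sphere on the object’s bounding box and size the radius to contain it, with a little margin. This sketch is an illustrative guess; the margin and the bounding-box inputs are invented.

```python
import math

def sphere_for(bounds, margin=1.2):
    """bounds: ((x0, y0, z0), (x1, y1, z1)), an axis-aligned bounding
    box in meters around the salient object.
    Returns (center, radius) of the smallest padded sphere containing it."""
    (x0, y0, z0), (x1, y1, z1) = bounds
    center = ((x0 + x1) / 2, (y0 + y1) / 2, (z0 + z1) / 2)
    # Half the box diagonal is the tightest enclosing radius.
    half_diagonal = math.dist((x0, y0, z0), (x1, y1, z1)) / 2
    return center, half_diagonal * margin
```

The same function covers an apple (a box a few centimeters on a side) and a building (tens of meters); the trouble, as noted below, starts when the target has no crisp boundary at all.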
But the Hong Kong radius stretches out beyond his line of sight, affecting something with a very vague visual and even conceptual definition, that is, “the wrecked neighborhood.” So auto-setting these variables wouldn’t work without reconceiving the Eye as a general artificial intelligence. That would have some massive repercussions throughout the diegesis, so let’s avoid that.
If it’s a manual control, how does he do it? Watch the animated gif above carefully and you’ll see he’s got two steps to the “turn Eye on” tut: opening the eye by making an eye shape, and after the aperture opens, spreading his hands apart, kind of expanding the Eye. In Tibet that spreading motion is slow and close. In Hong Kong it’s faster and farther. That’s enough evidence to say the spread × speed determines the radius. We run into the same apple-versus-neighborhood scale problem we had in determining the time extents, but make the mapping logarithmic and add some visual feedback, and he should be able to pick arbitrary sizes with precision.
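Here’s a hypothetical sketch of that logarithmic mapping: the product of hand spread and spread speed is normalized and then interpolated in log space, so one gesture can span anything from apple-sized to neighborhood-sized. Every constant here is an assumption.

```python
import math

def radius_from_gesture(spread_m, speed_m_per_s, r_min=0.05, r_max=500.0):
    """Map spread (meters) times spread speed (m/s) onto a radius of
    effect, logarithmically, between an assumed minimum and maximum."""
    # Normalize spread*speed into [0, 1] against an assumed gesture ceiling.
    signal = max(0.0, min(spread_m * speed_m_per_s / 2.0, 1.0))
    # Interpolate in log space so small gestures give fine control at
    # apple scale while big, fast gestures reach neighborhood scale.
    log_r = math.log(r_min) + signal * (math.log(r_max) - math.log(r_min))
    return math.exp(log_r)
```

A slow, close spread (Tibet) lands near the bottom of the range; a fast, wide spread (Hong Kong) lands near the top, matching what the two scenes show.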
So…back to mode selection
So if we’re committing the “turn on” gesture to specifying the center-and-radius, the only other gesture left is the saucer creation. For a quick reminder, here’s how it works in Tibet.
Since the circle works pretty well for a jog dial, let’s leave this for Tibet mode. A contrasting but related gesture would be to have Strange hold his right hand flat, in a sagittal plane, with the palm facing to his left. (See an illustration, below.) Then he can tilt his hand inside the saucer to reverse or fast forward time, and withdraw his hand from the saucer graphic to leave time moving at the adjusted rate. Let the speed of the saucer indicate speed of change. To map to a clock, tilting to the left would reverse time, and tilting to the right would advance it.
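The proposed tilt gesture can be sketched with invented numbers: tilt angle maps linearly to a signed rate, and withdrawing the hand simply stops updating the rate, so time keeps moving at the last-set speed.

```python
def rate_from_tilt(tilt_degrees, max_rate=100.0):
    """Tilt left (negative degrees) reverses time, tilt right advances it;
    the amount of tilt sets the speed. Constants are assumptions."""
    tilt = max(-90.0, min(90.0, tilt_degrees))
    return (tilt / 90.0) * max_rate


class Saucer:
    def __init__(self):
        self.rate = 0.0

    def hand_tilt(self, degrees):
        self.rate = rate_from_tilt(degrees)

    def hand_withdrawn(self):
        pass  # no snap-back: the rate persists at its last value
```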
The yank out
There’s one more function we see twice in the Hong Kong scene. Strange is able to pull Mordo and Wong from the reversing time stream by thrusting the saucer toward them. This is a goofy choice of a gesture that makes no semantic sense. It would make much more sense for Strange to keep his saucer hand extended, and use his left hand to pull them from the reversing stream.
So one of the nice things about this movie interface is that while it doesn’t hold up under the close scrutiny of this blog, the interface to the Eye of Agamotto works while watching the film. The audience sees the apple happen, and gets that gestures + glowing green circle = adjusting time. For that, it works.
That said, we can see improvements that would not affect the script, would not require much more of the actors, and would not add too much to post-production. It could be more consistent and believable.
But we’re not done yet. There’s one other function shown by the Eye of Agamotto when Strange takes it into the Dark Dimension, which is the final mode of the Eye, up next.
Dr. Strange uses the Crimson Bands of Cyttorak to immobilize Kaecilius while they are fighting in the New York Sanctum.
The bands are a flexible, torso-shaped device that looks like a bunch of metal ribs attached to a spine. We do not actually know whether this relic has “chosen” Strange or if it simply functions for anyone who wields it correctly. But given its immense power, it definitely qualifies as a relic, and opens up the conversation about whether some relics are simply masterless.
On the name
Discussing the bands is semantically difficult for two reasons. The first is that “they” are multiple bands joined together by a single “spine” and handled in combat like a single thing. So it needn’t be the plural “Bands.” That’s like calling a shoe the Running Laces of Reebok. It is an it, not a they. The second is that it is not crimson (even in the comic books, most folks would call them pink). The bands are not actually named in the film, but authoritative source material indicates that this is their name. So forgive the weirdness, but this post will discuss the bands as a single thing. An it.
So where did it get its plural name? Comic book fans have already noted: In the books, the Crimson Bands of Cyttorak are actually a spell for binding. They are—no surprise—glowing crimson bands of energy, and used by many spellcasters, not just Strange. Here they are in The Uncanny X-Men, cast by the Scarlet Witch and subsequently smashed by Magik.
Mordo wears the Vaulting Boots of Valtor throughout the movie and first demonstrates their use to Dr. Strange when they are sparring. The Boots allow the user to walk, run, or jump on air as if it were solid ground.
When activated, the sole of each boot creates a circular field of force in anticipation of a footfall in midair, as if creating free-floating stepping stones.
How might this work as tech?
The main interaction design challenge is how the wearer indicates where he wants a stepping-stone to appear. The best solution is to let Mordo’s footfall location and motion inform the boots when and where he expects there to be a solid surface. (Anyone who has stumbled while misjudging the height or location of a step on a stairway knows how differently you treat a step where you expect there to be solid footing.)
If this were a technological device, sensors within the boots would retain a detailed history of the wearer’s stride for all possible speeds and distances of movement. The boots would detect muscle tension and flexion combined with the owner’s direction and velocity to accurately predict the placement of each step and then insert an appropriately elevated and angled stepping stone. The boots would know the difference between each of these styles of movement, walking, running, and sprinting and behave accordingly.
As a result, Mordo could always remain upright and stable regardless of his intended direction or how high he had climbed. And while Mordo may be a sorcerer with exceptional physical training, he isn’t superhuman. With the power of the boots he is only able to run and step as high as he could normally if, for example, he were taking a set of stairs two or three at a time.
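A toy model of the footfall-prediction idea might look like the following. The stride lengths, gait categories, and climb limit are all invented for illustration; the point is that the next stone’s placement falls out of the wearer’s last footfall, heading, and gait, with the climb capped at a stairs-like step height.

```python
# Assumed per-gait stride lengths, in meters.
STRIDE_LENGTH = {"walking": 0.75, "running": 1.5, "sprinting": 2.2}

def next_stone(foot_pos, heading, gait, climb_per_step=0.0):
    """foot_pos: (x, y, z) of the last footfall; heading: (dx, dy) unit
    vector of travel. Returns the (x, y, z) where the boot should
    project the next force-field stepping-stone."""
    stride = STRIDE_LENGTH[gait]
    x, y, z = foot_pos
    dx, dy = heading
    # Cap each stone's rise at a few stair-steps' worth of height,
    # matching the limit that Mordo is no more superhuman than on stairs.
    climb = max(-0.6, min(0.6, climb_per_step))
    return (x + dx * stride, y + dy * stride, z + climb)
```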
As a magical device, the intelligence imbued in the boots is limited to an awareness of the sorcerer’s intent, knowing where to place each force-field stepping-stone.
This staff appears to be made of wood and is approximately a meter long in its normal form. When activated by Mordo it has several powers. With a strong pull on both ends, the staff expands into a jointed energy nunchaku. It can also extend to an even greater length, like a bullwhip. When it impacts a solid object such as a floor, it seems to release a loud crack of energy. Too bad we only ever see it in demo mode.
How might this work as technology?
The staff is composed of concentric rings within rings of material similar to a collapsing travel cup. This allows the device to expand and contract in length. The handle would likely contain the artificial intelligence and a power source that activates when Mordo gives it a gestural command, or if we’re thinking far future, a mental one. There might also be an additional control for energy discharge.
In the movie, sadly, Mordo does not use the Staff to its best effect, especially when Kaecilius returns to the New York sanctum. Mordo could easily have disrupted the spell being cast by the disciples by using the staff like a whip, but instead he leaps off the balcony to physically attack them. Dude, you’re the franchise’s next Big Bad? But let’s set aside the character’s missteps to look at the interface.
Mode switching and inline meta-signals
Any time you design a thing with modes, you have to design the state changes between those modes. Let’s look at how Mordo moves between staff, nunchaku, and whip in this short demonstration scene.
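One way to frame the design problem is as a small state machine: the three modes, plus the gestures that move between them. The transition gestures here are guesses reconstructed from the demo scene, not anything the film spells out.

```python
# (current_mode, gesture) -> next_mode. Gesture names are invented.
TRANSITIONS = {
    ("staff", "pull_both_ends"): "nunchaku",
    ("nunchaku", "extend"): "whip",
    ("whip", "retract"): "nunchaku",
    ("nunchaku", "push_together"): "staff",
}

class Staff:
    def __init__(self):
        self.mode = "staff"

    def gesture(self, name):
        # Gestures that don't apply in the current mode are ignored, so
        # a mid-combat fumble can't leave the weapon in a surprise state.
        self.mode = TRANSITIONS.get((self.mode, name), self.mode)
        return self.mode
```

A table like this makes the designer answer the key mode-switching questions explicitly: which transitions exist, which are impossible, and what happens on an unrecognized input.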