Deckard’s Elevator

This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.

When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.

A need for speed

An aside: To make 97 floors in 20 seconds you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in Guangzhou CTF Finance Building reach up to 45 miles per hour. But including acceleration and deceleration adds to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to their 95th floor. If 97 is Deckard’s floor, it’s got to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
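(For the curious, the back-of-the-envelope math works like this. The floor height is my assumption; call it 4.3 meters, which Blade Runner’s cavernous interiors make plausible, over 96 floors of travel.)

```latex
d \approx 96~\text{floors} \times 4.3~\mathrm{m/floor} \approx 413~\mathrm{m}
\qquad
\bar{v} = \frac{d}{t} \approx \frac{413~\mathrm{m}}{20~\mathrm{s}} \approx 20.6~\mathrm{m/s} \approx 46~\mathrm{mph}
```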

The input control is OK

The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, it would have been an overwhelming UI to present the rider of this Blade Runner complex with 100 floor buttons plus the usual open door, close door, emergency alert buttons, etc. A panel that allows combinatorial inputs reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error handling. Such systems need a “commit” control that allows the rider to review, edit, and confirm the sequence, to distinguish, say, “97” from “9” and “7.” Not such an issue from the 1st floor, but a frustration from 10–96. It’s not clear those controls are part of this input.
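To make the ambiguity concrete, here’s a minimal sketch (in Python, with invented names; none of this is from the film) of how a combinatorial panel might buffer digits until an explicit commit:

```python
class FloorKeypad:
    """Buffers digit presses until the rider explicitly commits.

    Without the commit step, the panel cannot tell "9, 7" from "97".
    All names here are hypothetical.
    """

    def __init__(self, top_floor=100):
        self.top_floor = top_floor
        self.buffer = ""

    def press_digit(self, digit):
        self.buffer += str(digit)   # the rider reviews this on the display

    def press_clear(self):
        self.buffer = ""            # the "edit" part of review/edit/confirm

    def press_go(self):
        """The commit control: only now does the elevator act."""
        if not self.buffer:
            return None
        floor = int(self.buffer)
        self.buffer = ""
        if not 1 <= floor <= self.top_floor:
            raise ValueError(f"No floor {floor} in this building")  # error handling
        return floor

panel = FloorKeypad()
panel.press_digit(9)
panel.press_digit(7)
print(panel.press_go())  # 97, not a stop at 9 followed by a stop at 7
```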

Deckard enters 8675309, just to see what will happen.

I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one elevator in his building, and in-elevator controls work fine for those situations, even if they slow things down a bit.

The feedback is OK

The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompanies the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, it would be nice if the voice system would, for the visually impaired, say the floor number when the door opens.

Though the projection is dumb

I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?

Pictured: Sleepy Deckard. Dumb projection.

Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.

If this was diegetic, the scene would have ended with a shredded projector.

But really, it falls apart on the interaction details

Finally, the interaction itself. First, let’s give credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.

Where’s the wake word?

But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.

There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even a weight sensor could infer that a human is present, and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when the elevator was listening, and some way for them to regain its attention if the first interaction went wrong. This elevator has zero affordances for either. So really, making an explicit wake word is the right way to go.

It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits in with the events in the scene, anyway. The problem with that is the redundancy. (See below.) So if the solution was pressing a button, it should just be a “talk” button rather than a numeric keypad.
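If touch-to-talk were the design, the logic would amount to a simple listening window. A sketch, with invented names and my guess at the timeout:

```python
import time

LISTEN_WINDOW_S = 4  # my guess at "a few seconds"

class ElevatorVoiceInput:
    def __init__(self):
        self.listening_until = 0.0

    def on_panel_touch(self):
        """The touch acts as an implicit wake word: open a listening window."""
        self.listening_until = time.monotonic() + LISTEN_WINDOW_S
        print("(chime) listening...")  # the feedback the movie's elevator lacks

    def on_utterance(self, text):
        if time.monotonic() > self.listening_until:
            return  # not listening: idle chatter gets ignored
        self.listening_until = 0.0
        print(f"(chime) heard: {text!r}")

panel = ElevatorVoiceInput()
panel.on_panel_touch()
panel.on_utterance("Deckard. 97.")
```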

It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator, lest everyone end up stuck in the basement. But this seems very error-prone and unlikely.

Deckard: *Yawns* Elevator: Confirmed. Silent alarm triggered.

This issue is similar to the one discussed in Make It So, Chapter 5, “Gestural Interfaces,” where I discussed how a user tells a computer they are communicating with it via gestures, and when they aren’t.

Where are the paralinguistics?

Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)

Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
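As a sketch of that fix (every detail here is my invention, not the film’s): map microphone amplitude to LED brightness while the listening window is open, and bracket the window with chimes.

```python
def play_chime(kind):
    print(f"(chime: {kind})")

class Led:
    def set_brightness(self, level):
        print(f"LED brightness: {level:.2f}")

def listening_feedback(mic_levels, led):
    """Drive a 'listening' LED from microphone amplitude (0.0-1.0)."""
    play_chime("start")      # the starting-to-listen moment
    led.set_brightness(0.2)  # dim: listening to silence
    for level in mic_levels:
        # Brighten with the speaker's volume: implicit "I can hear you."
        led.set_brightness(0.2 + 0.8 * min(level, 1.0))
    led.set_brightness(0.0)
    play_chime("stop")       # the no-longer-listening moment

listening_feedback([0.1, 0.6, 0.8, 0.3], Led())
```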

Why the redundancy?

Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.

It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator understanding a correction mid-ride, like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.

It’s very nice to have the discrete input as an accessibility measure for people who cannot speak or whose accent the system cannot recognize, or as graceful degradation in case the speech recognition fails, but Deckard doesn’t fit any of these cases. He would just enter and speak his floor.

Why the personally identifiable information?

If we were designing a system and we needed, for security, a voice print, we should protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)

This young woman, for example, would abuse the shit out of such information.

Better would be some generic phrase that stresses the parts of speech that a voiceprint system would find most effective in distinguishing people.

Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes. In their case, it’s ten. A surname and a number is rarely going to provide that. “Deckard. 97” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it has that personally identifiable information, so it’s a non-starter.
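Saxon’s rule is easy to check mechanically. A sketch, using my own rough ARPAbet transcriptions (which won’t exactly match VoiceIt’s tally; the point is the mechanism, not the precise count):

```python
# Hand-transcribed ARPAbet phonemes -- my rough transcriptions, not VoiceIt's data.
PHONEMES = {
    "deckard": ["D", "EH", "K", "ER", "D"],
    "ninety":  ["N", "AY", "N", "T", "IY"],
    "seven":   ["S", "EH", "V", "AH", "N"],
    "second":  ["S", "EH", "K", "AH", "N", "D"],
}

def unique_phonemes(words):
    """Count the distinct phonemes across a spoken phrase."""
    found = set()
    for word in words:
        found.update(PHONEMES[word])
    return found

# Transcription choices shift the tally a bit, but the pattern holds:
# short phrases come up short.
print(len(unique_phonemes(["deckard", "ninety", "seven"])))  # "Deckard. 97."
print(len(unique_phonemes(["deckard", "second"])))           # 2nd floor: fewer
```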

What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include, “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do one better.

(Hey Tucker, I would love to use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)

Deckard: Hi, I’m Deckard. My bank card PIN code is 3297. The combination lock to my car spells “myothercarisaspinner” and my computer password is “unicorn.” 97 please.

Here is an alternate interaction that would have solved a lot of these problems.

Elevator: Voice print identification, please.
Deckard: (sighs) Have you considered life in the offworld colonies?
Elevator: Confirmed. Floor?
Deckard: 97.

Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt in the wound to have to repeat fucking advertising just to get home for a drink.

So…not great

In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.

6-Screen TV

BttF_109

When Marty Jr. gets home, he approaches the large video display in the living room, which is displaying a cropped image of “The Gold of Their Bodies” (“Et l’or de leurs corps”) by Paul Gauguin. He speaks to the screen, saying “Art off.” After a bit of static, the screen goes black. He then says, “OK, I want channels 18, 24, 63, 109, 87, and the Weather Channel.” As he says each, a sixth of the screen displays the live feed. The number for the channel appears in the upper left corner for a short while before fading. Marty Jr. then sits down to watch the six channels simultaneously.

Voice control. Perfect recognition. No modality. Spot on. It might dynamically update the screen in case he only wanted to watch 2 or 3 channels, but perhaps it is a cheaper system, apropos of the McFly household.

Iron Man HUD: A Breakdown

So this is going to take a few posts. You see, the next interface that appears in The Avengers is a video conference between Tony Stark in his Iron Man supersuit and his partner in romance and business, Pepper Potts, about switching Stark Tower from the electrical grid to their independent power source. Here’s what a still from the scene looks like.

Avengers-Iron-Man-Videoconferencing01

So on the surface of this scene, it’s a communications interface.

But that chat exists inside of an interface with a conceptual and interaction framework that has been laid down since the original Iron Man movie in 2008, and built upon with each sequel, one in 2010 and one in 2013. (With rumors aplenty for a fourth one…sometime.)

So to review the video chat, I first have to talk about the whole interface, and that has about 6 hours of prologue occurring across 4 years of cinema informing it. So let’s start, as I do with almost every interface, simply by describing it and its components.

The Drone

image01

Each drone is a semi-autonomous flying robot armed with large cannons, heavy armor, and a wide array of sensor systems. When in flight mode, the weapon arms retract. The arms extend when the drone senses a threat.

image02

Each drone is identical in make and temperament, distinguishable only by large white numbers on its “face”. The armored shell is about a meter in diameter (just smaller than Jack). Internal power is supplied by a small battery-like device that contains enough energy to start a nuclear explosion inside of a skyscraper-sized hydrogen distiller. It is not obvious whether the weapons are energy- or projectile-based.

The HUD

The Drone Interface is a HUD that shows the drone’s vision and secondary information about its decision making process. The HUD appears on all video from the Drone’s primary camera. Labels appear in legible human English.

Video feeds from the drone can be in one of several modes that vary according to what kind of searching the drone is doing. We never see the drone use more than one mode at once. These modes include visual spectrum, thermal imaging, and a special ‘tracking’ mode used to follow Jack’s bio signature.

Occasionally, we also see the Drone’s primary objective on the HUD. These include an overlay on the main view that says “TERMINATE” or “CLEAR”.

image00

Her interface components (2/8)

Depending on how you slice things, the OS1 interface consists of five components and three (and a half) capabilities.

Her-earpiece

1. An Earpiece

The earpiece is small and wireless, just large enough to fit snugly in the ear and provide an easy handle for pulling it out again. It has two modes. When the earpiece is in Theodore’s ear, it’s in private mode, audible only to him. When the earpiece is out, the speaker is as loud as a human speaking at room volume. It can produce both voice and other sounds, offering a few beeps and boops to signal that it needs attention or that modes have changed.

Her-cameo

2. Cameo phone

I think I have to make up a name for this device, and “cameo phone” seems to fit. This small, hand-sized, bi-fold device has one camera on the outside and one on the inside of the recto, and a display screen on the inside of the verso. It folds along its long edge, unlike the old clamshell phones. It has smartphone capabilities and wirelessly communicates with the internet. Theodore occasionally slides his finger left to right across the wood, so it has some touch-gesture sensitivity. A stripe around the outside edge of the cameo can glow red to act as a visual signal to get its user’s attention. This is quite useful when the cameo is folded up and sitting on a nightstand, for instance.

The answer does not program

LogansRun224

Logan’s life is changed when he surrenders an ankh found on a particular runner. Instead of asking him to identify, the central computer merely stays quiet a long while as it scans the object. Then its lights shut off, and Logan has a discussion with the computer unlike any he has had before.

The computer asks him to “approach and identify.” The computer gives him, by name, explicit instructions to sit facing the screen. Lights below the seat illuminate. He identifies in this chair by positioning his lifeclock in a recess in the chair’s arm, and a light above him illuminates. Then a conversation ensues between Logan and the computer.

LogansRun113

The computer communicates through a combination of voice and screen, on which it shows blue text and occasional illustrative shapes. The computer’s voice is emotionless and soothing. For the most part it speaks in complete sentences. In contrast, Logan’s responses are stilted and constrained, saying “negative” instead of “no,” and prefacing all questions with the word, “Question,” as in, “Question: What is it?”

On the one hand it’s linguistically sophisticated

Speech recognition and generation would not appear in a commercially released product until four years after the release of Logan’s Run, but there is an odd inconsistency here even for those unfamiliar with the actual constraints of the technology. The computer is sophisticated enough to generate speech with demonstrative pronouns, referring to the picture of the ankh as “this object” and the label as “that is the name of the object.” It can even communicate with pragmatic meaning. When Logan says,

“Question: Nobody reached renewal,”

…and receives nothing but silence, the computer doesn’t object to the fact that his question is not a question. It infers the most reasonable interpretation, as we see when Logan is cut off during his following objection by the computer’s saying,…

“The question has been answered.”

Despite these linguistic sophistications, it cannot parse anything but the most awkwardly structured inputs? Sadly, this is just an introduction to the silliness that is this interface.

Logan undergoes procedure “033-03,” in which his lifeclock is artificially set to blinking. He is then instructed to become a runner himself and discover where “Sanctuary” is. After his adventure outside performing the assignment he was forced to accept, he is brought in as a prisoner. The computer traps him in a ring of bars, demanding to know the location of Sanctuary. Logan reports (correctly) that Sanctuary doesn’t exist.

LogansRun206
LogansRun205
hologram

On the other hand, it explodes

This freaks the computer out. Seriously. Now, the crazy thing is that the computer actually understands Logan’s answer, because it comments on it. It says, “Unacceptable. The answer does not program [sic].” That means that it’s not a data-type error, as if it got the wrong kind of input. No, the thing heard what Logan was saying. It’s just unsatisfied, and the programmer decided that the best response to dissatisfaction was to engage the heretofore unused red and green pixels in the display, randomly delete letters from the text—and explode. That’s right. He decided that in addition to the Dissatisfaction() subroutine calling the FreakOut(Seriously) subroutine, the FreakOut(Seriously) subroutine in its turn calls Explode(Yourself), Release(The Prisoner), and the WhileYoureAtItRuinAllStructuralIntegrityOfTheSurroundingArchitecture() subroutines.

LogansRun221

Frankly, if this is the kind of coding that this entire society was built upon, this whole social collapse thing was less deep social commentary and really just a matter of technical debt.

LogansRun223
LogansRun225
LogansRun226
LogansRun227

Technical. Debt.

Siege Support

GitS-Aramaki-03

When Section 9 launches an assault on the Puppet Master’s headquarters, Department Chief Aramaki watches via a portable computer. It looks and behaves much like a modern laptop, with a heavy base that connects via a hinge to a thin screen. This shows him a live video feed.

The scan lines on the feed tell us that the cameras are diegetic, something Aramaki is watching, rather than the "camera" of the movie we as the audience are watching. These cameras must be placed in many places around the compound: behind the helicopter, following the Puppet Master, to the far right of the Puppet Master, and even floating far overhead. That seems a bit far-fetched until you remember that there are agents all around the compound, and Section 9 has the resources to outfit all of them with small cameras. Even the overhead camera could be an unnoticed helicopter equipped with a high-powered telephoto lens. So it's stretching believability, but not beyond the bounds of possibility. My main question is, given these cameras, who is doing the live editing? Aramaki's view switches dramatically between these views as he's watching, with no apparent interaction.

A clue comes from his singular interaction with the system. When a helicopter lands in the lawn of the building, Aramaki says, "Begin recording," and a blinking REC overlay appears in the upper left and a timecode overlay appears in the lower right. If you look at the first shot in the scene, there is a soldier next to him hunched over a different terminal, so we can presume that he’s the hands-on guy, executing orders that Aramaki calls out. That same tech can be doing the live camera switching and editing to show Aramaki the feed that’s most important and relevant.

GitS-Aramaki-11

That idea makes even more sense knowing that Aramaki is a chief, and his station warrants spending money on an ever-present human technician.

Sometimes, as in this case, the human is the best interface.

Section No9’s crappy security

GitS-Sec9_security-01

The heavily-mulleted Togusa is heading to a company car when he sees two suspicious cars in the parking basement. After sizing them up for a moment, he gets into his car and without doing anything else, says,

"Security, whose official vehicles are parked in the basement garage?"

It seems the cabin of the car is equipped to continuously monitor for sound, and either an agent from security is always waiting, listening at the other end, or by addressing a particular department by name, a voice recognition system instantly routes him to an operator in that department, who is able to immediately respond:

"They belong to Chief Nakamura of the treaties bureau and a Dr. Willis."

"Give me the video record of their entering the building."

In response, a panel automatically flips out of the dashboard to reveal a monitor, where he can watch the security footage. He watches it, and says,

"Replay, infrared view"

After watching the replay, he says,

"Send me the pressure sensor records for basement garage spaces B-7 and 8."

The screen then does several things at once. It shows a login screen, for which his username is already supplied. He mentally supplies his password. Next a menu appears on a green background with five options: NET-WORK [sic], OPTICAL, PRESSURE, THERMO, and SOUND. "PRESSURE" highlights twice with two beeps. Then after a screen-green 3D rendering of Section 9 headquarters builds, the camera zooms around the building and through floorplans to the parking lot to focus on the spaces, labeled appropriately. Togusa watches as pea green bars on radial dials bounce clockwise, twice, with a few seconds between.

The login

Sci-fi logins often fail to include basic multifactor authentication, and at first it appears that this screen only has two parts: a username and password. But given that Togusa connects to the system first vocally and then mentally, it's likely that one of those other channels supplies a third level of authentication. Also, it seems odd to have him supply a set of characters as the mental input. Requiring Togusa to think a certain concept might make more sense, like a mental captcha.
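A sketch of what a three-channel login might amount to (the film shows only the screen, so the factor names here are my assumptions):

```python
# Hypothetical three-factor version of Togusa's login. The film shows only
# the screen; the channels and names here are my assumptions.
KNOWN_USERS = {"togusa"}

def authenticate(username, voiceprint_ok, mental_response, expected_concept):
    """Grant access only when all channels agree.

    voiceprint_ok:   something you are (the vocal channel)
    mental_response: something you know (the cyberbrain channel; better a
                     thought concept, a "mental captcha", than typed characters)
    """
    return (
        username in KNOWN_USERS      # identification, not really a factor
        and voiceprint_ok
        and mental_response == expected_concept
    )

print(authenticate("togusa", True, "revolver", "revolver"))  # True
```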

The zoom

Given that seconds can make a life-or-death difference and that the stakes at Section 9 are so high, the time that the system spends zooming a camera around the building all the way to the locations is a waste. It should be faster. It does provide context to the information, but it doesn’t have to be distributed in time. Remove the meaningless and unlabeled dial in the lower right to gain real estate, and replace it with a small version of the map that highlights the area of detail. Since Togusa requested this information, the system should jump here immediately and let him zoom out for more detail only if he wants it or if the system wants him to see suspect information.

The radial graphs

The radial graphs imply some maximum to the data, and that Nakamura’s contingent hits some 75% of it. What happens if the pressure exceeds 37 ticks? Does the floor break? (If so, it should have sent off structural warning alarms at the gate independently of the security question.) But presumably Section 9 is made of stronger stuff than this, and so a different style of diagram is called for. Perhaps remove the dial entirely and just leave the parking spot labels and the weight. Admittedly, the radial dial is unusual and might be there for consistency with other, unseen parts of the system.

Moreover, Togusa is interested in several things: how the data has changed over time, when it surpassed an expected maximum, and by how much. This diagram only addresses one of them, and requires Togusa to notice and remember it himself. A better diagram would trace this pressure reading across time, highlighting the moments when it passed a threshold. (This parallels the issues of medical monitoring highlighted in the book, Chapter 12, Medicine.)
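The redesign is simple to state in code: keep the series over time and flag the crossings, so the system does the noticing. A sketch with made-up numbers:

```python
EXPECTED_MAX_KG = 200  # my made-up per-space ceiling: one heavy human plus margin

def pressure_alerts(readings):
    """Scan a time series of (timestamp, kg) and flag threshold crossings --
    the moments Togusa would otherwise have to notice and remember himself."""
    alerts = []
    above = False
    for t, kg in readings:
        if kg > EXPECTED_MAX_KG and not above:
            alerts.append((t, kg))  # highlight the moment it passed the threshold
        above = kg > EXPECTED_MAX_KG
    return alerts

space_b7 = [("09:12", 80), ("09:13", 310), ("09:14", 305), ("09:20", 0)]
print(pressure_alerts(space_b7))  # [('09:13', 310)] -- a 310 kg "person"
```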

SECURITY_redo

Even better would be to show this data over time alongside or overlaid with any of the other feeds, like a video feed, such that Togusa doesn’t have to make correlations between different feeds in his head. (I’d have added it to the comp but didn’t have source video from the movie.)

The ultimately crappy Section No9 security system

Aside from all these details of the interface and interaction design, I have to marvel at the broader failings of the system. This is meant to be the same bleeding-edge bureau that creates cyborgs and transfers consciousnesses between them? If the security system is recording all of this information, why is it not being analyzed continuously, automatically? We can presume that object recognition is common in the world from a later scene in which a spider tank is able to track Kusanagi. So as the security system was humming along, recording everything, it should have also been analyzing that data, noting the discrepancies among the number of people it counted in any of the video feeds, the number of people it counted passing through the door, and the unusual weight of these "two" people. It should have sent a warning to security at the gate of the garage, not relied on the happenstance of Togusa's hunch and good timing.
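That cross-check is cheap to express. A sketch (the sensor names and thresholds are mine, not the film's):

```python
def security_discrepancies(video_count, door_count, total_weight_kg,
                           avg_person_kg=80, tolerance_kg=40):
    """Cross-check feeds the system is already recording: heads on video,
    bodies through the door, and what the floor says they weigh."""
    problems = []
    if video_count != door_count:
        problems.append(f"video sees {video_count}, door counted {door_count}")
    expected = door_count * avg_person_kg
    if abs(total_weight_kg - expected) > tolerance_kg:
        problems.append(
            f"{door_count} visitors should weigh ~{expected} kg, "
            f"floor reads {total_weight_kg} kg"
        )
    return problems

# Two visitors sign in, but they weigh far more than two humans should:
print(security_discrepancies(video_count=2, door_count=2, total_weight_kg=620))
```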

This points to a larger problem that Hollywood has with technology being part of its stories. It needs heroes to be smart and heroic, and having them simply respond to warnings passed along by a smart system can seem pointedly unheroic. But as technology gets smarter and more agentive, these kinds of discrepancies are going to break believability and get embarrassing.

Alphy

Barbarella-014

Barbarella’s onboard conversational computer is named Alphy. He speaks with a polite male voice with a British accent and a slight lisp. The voice seems to be omnidirectional, but confined to the cockpit of the space rocket.

Goals

Alphy’s primary duties are threefold: first, to obey Barbarella’s commands, such as waking her up before their approach to Tau Ceti; second, to navigate on autopilot; and third, to report statuses, such as describing the chances of safe landing or the atmospheric analysis that assures Barbarella she will be able to breathe.

Display

Alphy

Whenever Alphy is speaking, a display panel at the back of the cockpit moves. The panel stretches from the floor to the ceiling and is about a meter wide. The front of the panel consists of a large array of small rectangular sheets of metal, each of which is attached on one side to one of the horizontal bars that stretch across the panel. As Alphy talks, individual rectangles lift and fall in a stochastic pattern, adding a small metallic clacking to the voice output. A flat yellow light fills the space behind the panel, and the randomly rising and falling rectangles reveal it in mesmerizing patterns.

The light behind Alphy’s panel can change. As Barbarella is voicing her grave concerns to Dianthus, Alphy turns red. He also flashes red and green during the magnetic disturbances that crash her ship on Tau Ceti. We also see him turn a number of colors after the crash on Tau Ceti, indicating the damage that has been done to him.

In the case of the conversation with Dianthus, there is no real alert state to speak of, so it is conceivable that these colors act something like a mood ring, reflecting Barbarella’’s affective state.

Language

Like many language-capable sci-fi computer systems of the era, Alphy speaks in a stilted fashion. He is given to “computery” turns of phrases, brusque imperatives, and odd, unsocialized responses. For example, when Barbarella wishes Alphy a good night before she goes to sleep, he replies, “Confirmed.”

Barbarella even speaks this way when addressing Alphy sometimes, such as when they risk crashing into Tau Ceti and she must activate the terrascrew and travel underground. As she is piloting manually, she says things like, “Full operational power on all subterranean systems,” “45 degree ascent,” and “Quarter to half for surfacing.”

Barbarella-Alphy-color-04

Nonetheless, Alphy understands Barbarella completely whenever she speaks to him, so the stilted language seems much more like a convention than a limitation.

Anthropomorphism

Despite his lack of linguistic sophistication, he shows a surprising bit of audio anthropomorphism. When he suffers through the magnetic disturbances, his voice gets distressed. Alphy’s tone also gets audibly stressed when he reveals that the Catchman has performed repairs “in reverse,” in each case underscoring the seriousness of the situation. When the space rocket crashes on Tau Ceti, Alphy asks groggily, “Where are we?” We know this is only affectation because within a few seconds, he is back up to full functioning, reporting happily that they have landed: “Planet 16 in the system Tau Ceti. Air density oh-point-oh-51. Cool weather with the possibility of stormy precipitations.” Alphy does not otherwise exhibit emotion. He doesn’t speak of his emotions or use emotional language. This convention, too, is to match Barbarella’s mood and make her more comfortable.

Agency

Alphy’s sensors seem to be for time, communication technology, self-diagnostics, and for analyzing the immediate environment around the ship. He has actuators to speak, change his display, supply nutrition to Barbarella, and focus power to different systems around the ship, including the emergency systems. He can detect problems, such as the “magnetic disturbance”, and can respond, but has no authority to initiate action. He can only obey Barbarella, as we hear in the following exchange.

Barbarella: What’s happening?
Alphy: Magnetic disturbances.
Barbarella: Magnetic disturbances?…Emergency systems!
Alphy: All emergency systems will now operate.

His real function?

All told, Alphy is very limited in what he can do. His primary functions are reading aloud data that could be dials on a dashboard and flipping switches so Barbarella won’t have to take her hands off of…well, switches…in emergency situations. The bits of anthropomorphic cues he provides to her through the display and language confirm that his primary goal is social, to make Barbarella’s adventurous trips through space not feel so lonely.

MedPod

Early in the film, when Shaw sees the MedPod for the first time, she comments to Vickers that, “They only made a dozen of these.” As she caresses its interface in awe, a panel extends as the pod instructs her to “Please verbally state the nature of your injury.”

Prometheus-087

The MedPod is a device for automated, generalized surgical procedures, operable by the patient him- (or her-, kinda, see below) self.

When in the film Shaw realizes that she’s carrying an alien organism in her womb, she breaks free from crewmembers who want to contain her, and makes a staggering beeline for the MedPod.

Once there, she reaches for the extended touchscreen and presses the red EMERGENCY button. Audio output from the pod confirms her selection, “Emergency procedure initiated. Please verbally state the nature of your injury.” Shaw shouts, “I need cesarean!” The machine informs her verbally that, “Error. This MedPod is calibrated for male patients only. It does not offer the procedure you have requested. Please seek medical assistance else–”

Prometheus-237

I’ll pause the action here to address this. What sensors and actuators are this gender-specific? Why can’t it offer gender-neutral alternatives? Sure, some procedures might need anatomical knowledge of particularly gendered organs (say…emergency circumcision?), but given…

  • the massive amounts of biological similarity between the sexes
  • the needs for any medical device to deal with a high degree of biological variability in its subjects anyway
  • the fact that most procedures are gender-neutral

…this is a ridiculous interface plot device. If Dr. Shaw can issue a few simple system commands that work around this limitation (as she does in this very scene), then the machine could have just done without the stupid error message. (Yes, we get that it’s a mystery why Vickers would have her MedPod calibrated to a man, but really, that’s a throwaway clue.) Gender-specific procedures can’t take up so much room in memory that it was simpler to cut the potential lives it could save in half. You know, rather than outfit it with another hard drive.

Aside from the pointless “tension-building” wrong-gender plot point, there are still interface issues with this step. Why does she need to press the emergency button in the first place? The pod has a voice interface. Why can’t she just shout “Emergency!” or even better, “Help me!” Isn’t that more suited to an emergency situation? Why is a menu of procedures the default main screen? Shouldn’t it be a prompt to speak, and have the menu there for mute people or if silence is called for? And shouldn’t it provide a type-ahead control rather than a multi-facet selection list?
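A type-ahead over the procedure list would be easy, and kinder in an emergency. A minimal sketch (the six menu labels are from the scene; the matching logic is my invention):

```python
# The six top-level labels are from the scene; the matching logic is mine.
PROCEDURES = ["DIAGNOS", "THERAP", "SURGICAL", "MED REC", "SYS/MECH", "EMERGENCY"]

def type_ahead(prefix, options=PROCEDURES):
    """Narrow the menu with every character typed, instead of making a
    panicked patient hunt through a multi-facet selection list."""
    prefix = prefix.upper()
    return [opt for opt in options if opt.startswith(prefix)]

print(type_ahead("S"))    # ['SURGICAL', 'SYS/MECH']
print(type_ahead("SU"))   # ['SURGICAL']
```

OK, back to the action.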

Desperate, Shaw presses a button that grants her manual control. She states, “Surgery abdominal, penetrating injuries. Foreign body. Initiate.” The screen confirms these selections among the options on screen. (They read “DIAGNOS, THERAP, SURGICAL, MED REC, SYS/MECH, and EMERGENCY.”)

The pod then swings open saying, “Surgical procedure begins,” and tilting itself for easy access. Shaw injects herself with anesthetic and steps into the pod, which seals around her and returns to a horizontal position.

Why does Shaw need to speak in this stilted speech? In a panicked or medical emergency situation, proper computer syntax should be the last thing on a user’s mind. Let the patient shout the information however they need to, like “I’ve got an alien in my abdomen! I need it to be surgically removed now!” We know from the Sonic chapter that the use of natural language triggers an anthropomorphic sense in the user, which imposes some other design constraints to convey the system’s limitations, but in this case, the emergency trumps the need for affordance subtleties.

Once she is inside the pod, a transparent display states, “EMERGENCY PROC INITIATED.” Shaw makes some touch selections, which run a diagnostic scan along the length of her body. The terrifying results display for her to see, with the alien body differentiated in magenta to contrast with her own tissue, displayed in cyan.

Prometheus-254

Prometheus-260

Shaw shouts, “Get it out!!” It says, “Initiating anesthetics” before spraying her abdomen with a bile-yellow local anesthetic. It then says, “Commence surgical procedure.” (A note for the grammar nerds here: Wouldn’t you expect a machine to maintain a single part of speech for consistency? The first, “Initiating…,” is a participle, while the second, “Commence,” is an imperative.) Then, using lasers, the MedPod cuts through tissue until it reaches the foreign body. Given that the lasers can cut organic matter, and that the xenomorph has acid for blood, you have to hand it to the precision of this device. One slip could have burned a hole right through her spine. Fortunately it has a feather-light touch. Reaching in with a speculum-like device, it removes the squid-like alien in its amniotic sac.

OK. Here I have to return to the whole “ManPod” thing. Wouldn’t a scan have shown that this was, in fact, a woman? Why wouldn’t it stop the procedure if it really couldn’t handle working on the fairer sex? Should it have paused to have her sign away insurance rights? Could it really mistake her womb for a stomach? Wouldn’t it, believing her to be a man, presume the whole womb to be a foreign body and try to perform a hysterectomy rather than a delicate caesarian? ManPod, indeed.

Prometheus-265

After removing the alien, it waits around 10 seconds, showing it to her and letting her yank its umbilical cord, before she presses a few controls. The MedPod seals her up again with staples and opens the cover to let her sit up.

She gets off the table, rushes to the side of the MedPod, and places all five fingertips of her right hand on it, quickly twisting her hand clockwise. The interface changes to a red warning screen labeled “DECONTAMINATE.” She taps this to confirm and shouts, “Come on!” (Her vocal instruction does not feel like a formal part of the procedure and the machine does not respond differently.) To decontaminate, the pod seals up and a white mist fills the space.

OK. Since this is a MedPod, and it has something called a decontamination procedure, shouldn’t it actually test to see whether the decontamination worked? The user here has enacted emergency decontamination procedures, so it’s safe to say that this is a plague-level contagion. That doesn’t say to me: Spray it with a can of Raid and hope for the best. It says, “Kill it with fire.” We just saw, 10 seconds ago, that the MedPod can do a detailed, alien-detecting scan of its contents, so why on LV-223 would it not check to see if the kill-it-now-for-God’s-sake procedure had actually worked, and warn everyone within earshot that it hadn’t? Because someone needs to take additional measures to protect the ship, and take them, stat. But no, MedPod tucks the contamination under a white misty blanket, smiles, waves, and says, “OK, that’s taken care of! Thank you! Good day! Move along!”
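The fix is a verification loop: decontaminate, rescan, and escalate if anything is still alive. A sketch (the pod’s methods are stand-ins for whatever the hardware actually does):

```python
class FakePod:
    """Stand-in for the MedPod's actuators; all methods hypothetical."""
    def __init__(self, sprays_needed=2):
        self.remaining = sprays_needed
    def spray(self):
        self.remaining -= 1
    def scan_for_biomatter(self):
        return self.remaining > 0
    def broadcast_alarm(self, msg):
        print(msg)

MAX_CYCLES = 3  # my number; after this, stop pretending and wake the ship

def decontaminate_and_verify(pod):
    """Spray, then rescan with the same scanner that just found a
    squid-sized alien, and warn everyone if it's still there."""
    for _ in range(MAX_CYCLES):
        pod.spray()                       # the white mist
        if not pod.scan_for_biomatter():  # verify; don't assume
            return "clear"
        pod.broadcast_alarm("DECONTAMINATION INCOMPLETE")
    return "escalate: kill it with fire"

print(decontaminate_and_verify(FakePod()))
```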

For all of the goofiness that is this device, I’ll commend it for two things. The first is for pushing forward the notion of automated medicine. Yes, in this day and age, it’s kind of terrifying to imagine devices handling something as vital as life-saving surgery, but people in the future will likely find it terrifying that today we’d rather trust an error-prone, bull-in-a-china-shop human with the task. And, after all, the characters have entrusted their lives to an android while they were in hypersleep for two years, so clearly that’s a thing they do.

Second, the gestural control to access the decontamination is well considered. It is a large gesture, requiring no great finesse on the part of the operator to find and press a sequence of keys, and one that is easy to execute quickly and in a panic. I’m absolutely not sure what percentage of procedures need the back-up safety of a kill-everything-inside mode, but presuming one is ever needed, this is a fine gesture to initiate that procedure. In fact, it could have been used in other interfaces around the ship, as we’ll see later with the escape pod interface.

I have the sense that in the original script, Shaw had to do what only a few very bad-ass people have been willing to do: perform life-saving surgery on themselves in the direst circumstances. Yes, it’s a bit of a stretch since she’s primarily an anthropologist and astronomer in the story, but give a girl a scalpel, hardcore anesthetics, and an alien embryo, and I’m sure she’ll figure out what to do. But pushing this bad-assery off to an automated device, loaded with constraints, ruins the moment and changes the scene from potentially awesome to just awful.

Given the inexplicable man-only settings, requiring a desperate patient to recall FORTRAN-esque syntax for spoken instructions, and the failure to provide any feedback about the destruction of an extinction-level pathogen, we must admit that the MedPod belongs squarely in the realm of goofy narrative technology and nowhere near the real world as a model of good interaction design.