Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons not to show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control within that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.
Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.
There are also no nods to toilet necessities, though in fairness, Hollywood largely eschews this topic.
The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.
Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.
Remote monitoring of people in spacesuits is common enough to be a trope, but it has been discussed already in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.
There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.
Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Star-Lord’s helmet.
If such tech were available, you’d imagine that it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don’t see it. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.
What do we see in the real world?
Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.
The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.
The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.
The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held closely to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water by exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the degree to which his body is cooled, via the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.
The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.
Back to sci-fi
So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.
This is one of those interactions that happens over a few seconds in the movie, but turns out to be quite deep—and broken—on inspection.
When Deckard enters his building’s dark, padded elevator, a flat voice announces, “Voice print identification. Your floor number, please.” He presses a dark panel, which lights up in response. He presses the 9 and 7 keys on a keypad there as he says, “Deckard. 97.” The voice immediately responds, “97. Thank you.” As the elevator moves, the interface confirms the direction of travel with gentle rising tones that correspond to the floor numbers (mod 10), which are shown rising up a 7-segment LED display. We see a green projection of the floor numbers cross Deckard’s face for a bit until, exhausted, he leans against the wall and out of the projection. When he gets to his floor, the door opens and the panel goes dark.
A need for speed
An aside: To make 97 floors in 20 seconds you have to be traveling at an average of around 47 miles per hour. That’s not unheard of today. Mashable says in a 2014 article about the world’s fastest elevators that the Hitachi elevators in the Guangzhou CTF Finance Centre reach up to 45 miles per hour. But including acceleration and deceleration adds to the total time, so it takes the Hitachi elevators around 43 seconds to go from the ground floor to their 95th floor. If 97 is Deckard’s floor, it’s got to be accelerating and decelerating incredibly quickly. His body doesn’t appear to be suffering those kinds of Gs, so unless they have managed to upend Newton’s basic laws of motion, something in this scene is not right. As usual, I digress.
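As a sanity check, here’s that arithmetic, assuming roughly 4.3 m of height per floor (my own guess, not a figure from the film or the article):

```python
# Back-of-the-envelope check on Deckard's elevator ride.
# Assumption: ~4.3 m of height per floor (illustrative guess).
FLOOR_HEIGHT_M = 4.3
FLOORS = 97
RIDE_SECONDS = 20

distance_m = FLOOR_HEIGHT_M * FLOORS       # ~417 m of travel
avg_speed_ms = distance_m / RIDE_SECONDS   # average speed in m/s
avg_speed_mph = avg_speed_ms * 2.23694     # convert m/s to mph

print(f"{avg_speed_mph:.0f} mph")          # ~47 mph on this assumption
```

And that’s only the average; hitting it with realistic acceleration and deceleration would demand an even higher peak speed.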
The input control is OK
The panel design is nice and was surprising in 1982, because few people had ridden in elevators serving nearly a hundred floors. And while most in-elevator panels have a single button per floor, it would have been an overwhelming UI to present the rider of this Blade Runner complex with 100 floor buttons plus the usual open-door, close-door, and emergency-alert buttons. A panel that allows combinatorial inputs reduces the number of elements that must be displayed and processed by the user, even if it slows things down, introduces cognitive overhead, and adds the need for error handling. Such systems need a “commit” control that allows riders to review, edit, and confirm the sequence, to distinguish, say, “97” from “9” and “7.” Not such an issue from the 1st floor, but a frustration from 10–96. It’s not clear those controls are part of this input.
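A minimal sketch of that commit pattern (the class and control names here are my assumptions, not anything shown in the film):

```python
# Sketch of the "commit" control a combinatorial floor keypad needs,
# so the system can tell "97" from "9" followed by "7".
class FloorKeypad:
    def __init__(self, top_floor=100):
        self.top_floor = top_floor
        self.buffer = ""

    def press_digit(self, digit):
        self.buffer += str(digit)   # rider keys digits one at a time

    def press_clear(self):
        self.buffer = ""            # error handling: start over

    def press_commit(self):
        """Rider confirms the sequence; returns the floor, or None."""
        entry, self.buffer = self.buffer, ""
        if entry.isdigit() and 1 <= int(entry) <= self.top_floor:
            return int(entry)
        return None

pad = FloorKeypad()
pad.press_digit(9)
pad.press_digit(7)
assert pad.press_commit() == 97    # one trip to 97, not stops at 9 and 7
```

Without the explicit commit, the system would have to guess via a timeout, which is exactly the kind of ambiguity that frustrates riders on floors 10 through 96.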
I’m a fan of destination dispatch elevator systems that increase efficiency (with caveats) by asking riders to indicate their floor outside the elevator and letting the algorithm organize passengers into efficient groups, but that only works for banks of elevators. I get the sense Deckard’s building is a little too low-rent for such luxuries. There is just one in his building, and in-elevator controls work fine for those situations, even if they slow things down a bit.
The feedback is OK
The feedback of the floors is kind of nice in that the 7-segment numbers rise up, helping to convey the direction of movement. There is also a subtle, repeating, rising series of tones that accompany the display. Most modern elevators rely on the numeracy of their passengers and their sense of equilibrium to convey this information, but sure, this is another way to do it. Also, it would be nice if the voice system would, for the visually impaired, say the floor number when the door opens.
Though the projection is dumb
I’m not sure why the little green projection of the floor numbers runs across Deckard’s face. Is it just a filmmaker’s conceit, like the genetic code that gets projected across the velociraptor’s head in Jurassic Park?
Or is it meant to be read as diegetic, that is, that there is a projector in the elevator, spraying the floor numbers across the faces of its riders? True to the New Criticism stance of this blog, I try very hard to presume that everything is diegetic, but I just can’t make that make sense. There would be much better ways to increase the visibility of the floor numbers, and I can’t come up with any other convincing reason why this would exist.
But really, it falls apart on the interaction details
Lastly, this interaction. First, let’s give it credit where credit is due. The elevator speaks clearly and understands Deckard perfectly. No surprise, since it only needs to understand a very limited number of utterances. It’s also nice that it’s polite without being too cheery about it. People in LA circa 2019 may have had a bad day and not have time for that shit.
Where’s the wake word?
But where’s the wake word? This is a phrase like “OK elevator” or “Hey lift” that signals to the natural language system that the user is talking to the elevator and not themselves, or another person in the elevator, or even on the phone. General AI exists in the Blade Runner world, and that might allow an elevator to use contextual cues to suss this out, but there are zero clues in the film that this elevator is sentient.
There are of course other possible, implicit “wake words.” A motion detector, proximity sensor, or even weight sensor could infer that a human is present, and start the elevator listening. But with any of these implicit “wake words,” you’d still need feedback for the user to know when the elevator was listening, and some way to help them regain its attention if the first interaction went wrong; this elevator offers zero affordances for either. So really, making an explicit wake word is the right way to go.
It might be that touching the number panel is the attention signal. Touch it, and the elevator listens for a few seconds. That fits in with the events in the scene, anyway. The problem with that is the redundancy. (See below.) So if the solution was pressing a button, it should just be a “talk” button rather than a numeric keypad.
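Here’s a minimal sketch of that press-to-listen pattern; the five-second window and all the names are my assumptions:

```python
# Sketch of "touch to wake": pressing the panel opens a short
# listening window, after which the mic closes again.
import time

LISTEN_WINDOW_S = 5.0  # illustrative; tuning this is a design decision

class ElevatorMic:
    def __init__(self):
        self.listening_until = 0.0

    def on_panel_touch(self, now=None):
        """Touching the panel (re)opens the listening window."""
        now = time.monotonic() if now is None else now
        self.listening_until = now + LISTEN_WINDOW_S

    def accepts_speech(self, now=None):
        """Speech is only processed while the window is open."""
        now = time.monotonic() if now is None else now
        return now < self.listening_until

mic = ElevatorMic()
mic.on_panel_touch(now=0.0)
assert mic.accepts_speech(now=2.0)       # speech inside the window is heard
assert not mic.accepts_speech(now=6.0)   # window closed; mic ignores audio
```

Note that even this sketch needs the feedback discussed above: the rider has no way to know the window is open, or has closed, without some signal.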
It may be that the elevator is always listening, which is a little dark and would stifle any conversation in the elevator lest everyone end up stuck in the basement, but this seems very error prone and unlikely.
This issue is similar to the one discussed in Make It So, Chapter 5, “Gestural Interfaces,” where I discussed how a user signals to a computer that they are communicating with it via gestures, and when they aren’t.
Where are the paralinguistics?
Humans provide lots of signals to one another, outside of the meaning of what is actually being said. These communication signals are called paralinguistics, and one of those that commonly appears in modern voice assistants is feedback that the system is listening. In the Google Assistant, for example, the dots let you know when it’s listening to silence and when it’s hearing your voice, providing implicit confirmation to the user that the system can hear them. (Parsing the words, understanding the meaning, and understanding the intent are separate, subsequent issues.)
Fixing this in Blade Runner could be as simple as turning on a red LED when the elevator is listening, and varying the brightness with Deckard’s volume. Maybe add chimes to indicate the starting-to-listen and no-longer-listening moments. This elevator doesn’t have anything like that, and it ought to.
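A sketch of that volume-tracking LED, with an illustrative amplitude-to-brightness mapping (the floor and gamma values are arbitrary choices of mine):

```python
# Sketch of paralinguistic listening feedback: an LED whose brightness
# tracks the speaker's volume while the system is listening.

def led_brightness(amplitude, floor=0.15, gamma=0.5):
    """Map a 0..1 mic amplitude to a 0..1 LED brightness.
    `floor` keeps the LED dimly lit even in silence, so
    'listening to silence' looks different from 'not listening'."""
    amplitude = max(0.0, min(1.0, amplitude))   # clamp out-of-range input
    return floor + (1.0 - floor) * (amplitude ** gamma)

assert led_brightness(0.0) == 0.15   # listening, but silent: dim glow
assert led_brightness(1.0) == 1.0    # shouting: full brightness
```

The sub-linear gamma makes quiet speech visibly brighten the LED, which is the point: the rider should get confirmation the moment they start talking, not only when they raise their voice.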
Why the redundancy?
Next, why would Deckard need to push buttons to indicate “97” even while he’s saying the same number as part of the voice print? Sure, it could be that the voice print system was added later and Deckard pushes the numbers out of habit. But that bit of backworlding doesn’t buy us much.
It might be a need for redundant, confirming input. This is useful when the feedback is obscure or the stakes are high, but this is a low-stakes situation. If he enters the wrong floor, he just has to enter the correct floor. It would also be easy to imagine the elevator would understand a correction mid-ride like “Oh wait. Elevator, I need some ice. Let’s go to 93 instead.” So this is not an interaction that needs redundancy.
It’s very nice to have the discrete input as accessibility for people who cannot speak, or who have an accent that is unrecognizable to the system, or as a graceful degradation in case the speech recognition fails, but Deckard doesn’t fit any of this. He would just enter and speak his floor.
Why the personally identifiable information?
If we were designing a system and we needed, for security, a voice print, we should protect the privacy of the rider by not requiring personally identifiable information. It’s easy to imagine the spoken name being abused by stalkers and identity thieves riding the elevator with him. (And let’s not forget there is a stalker on the elevator with him in this very scene.)
Better would be some generic phrase that stresses the parts of speech that a voiceprint system would find most effective in distinguishing people.
Tucker Saxon has written an article for VoiceIt called “Voiceprint Phrases.” In it he notes that a good voiceprint phrase needs some minimum number of non-repeating phonemes. In their case, it’s ten. A surname and a number are rarely going to provide that. “Deckard. 97,” happens to have exactly 10, but if he lived on the 2nd floor, it wouldn’t. Plus, it has that personally identifiable information, so it’s a non-starter.
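Here’s a sketch of that phoneme check, using hand-keyed ARPAbet transcriptions (my own; a production system would pull these from a pronunciation dictionary such as CMUdict, and exact counts depend on the transcription you use):

```python
# Count distinct phonemes in a candidate voiceprint phrase.
# Transcriptions below are hand-keyed ARPAbet, my own assumption.
TRANSCRIPTIONS = {
    "deckard": ["D", "EH", "K", "ER", "D"],
    "ninety":  ["N", "AY", "N", "T", "IY"],
    "seven":   ["S", "EH", "V", "AH", "N"],
    "two":     ["T", "UW"],
}

def distinct_phonemes(*words):
    phones = set()
    for word in words:
        phones.update(TRANSCRIPTIONS[word])
    return len(phones)

# "Deckard. 97" clears a ten-phoneme bar...
assert distinct_phonemes("deckard", "ninety", "seven") >= 10
# ...but "Deckard. 2" falls well short.
assert distinct_phonemes("deckard", "two") < 10
```

Which illustrates the fragility: the phrase only works for some floor numbers, so the building couldn’t rely on it as a scheme.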
What would be a better voiceprint phrase for this scene? Some of Saxon’s examples in the article include, “Never forget tomorrow is a new day” and “Today is a nice day to go for a walk.” While the system doesn’t care about the meaning of the phrase, the humans using it would be primed by the content, and so it would just add to the dystopia of the scene if Deckard had to utter one of these sunshine-and-rainbows phrases in an elevator that was probably an uncleaned murder scene. But I think we can do one better.
(Hey Tucker, I would love to use VoiceIt’s tools to craft a confirmed voiceprint phrase, but the signup requires that I permit your company to market to me via phone and email even though I’m just a hobbyist user, so…hard no.)
Here is an alternate interaction that would have solved a lot of these problems.
Voice print identification, please.
Have you considered life in the offworld colonies?
Which is just a punch to the gut considering Deckard is stuck here and he knows he’s stuck, and it’s salt on the wound to have to repeat fucking advertising just to get home for a drink.
In total, this scene zooms by and the audience knows how to read it, and for that, it’s fine. (And really, it’s just a setup for the moment that happens right after the elevator door opens. No spoilers.) But on close inspection, from the perspective of modern interaction design, it needs a lot of work.
After fleeing the Yakuza in the hotel, Johnny arrives in the Free City of Newark, and has to go through immigration control. This process appears to be entirely automated, starting with an electronic passport reader.
In the last post we went over the Iron HUD components. There is a great deal to say about the interactions and interface, but let’s just take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can very easily move between exosuits.
Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.
For the most part they persist as tiny icons and are thereby hard to read, but when the suit reboots in a high-altitude freefall, we get to see giant versions of them, and can read that they are:
Forgive me, as I am but a humble interaction designer (i.e., neither a professional visual designer nor video editor) but here’s my shot at a redesigned DuoMento, taking into account everything I’d noted in the review.
There’s only one click for Carl to initiate this test.
To decrease the risk of a false positive, this interface draws from a large category of concrete, visual and visceral concepts to be sent telepathically, and displays them visually.
It contrasts Carl’s brainwave frequencies (smooth and controlled) with Johnny’s (spiky and chaotic).
It reads both the brain of the sender and the receiver for some crude images from their visual cortex. (It would be better at this stage to have the actors wear some glowing attachment near a crown to show how this information was being read.)
These changes are the sort that even in passing would help tell a more convincing narrative by being more believable, and even illustrating how not-psychic Johnny really is.
For personal security during her expeditions on Earth, Eve is equipped with a powerful energy weapon in her right arm. Her gun has a variable power setting, and is shown firing blasts between “Melt that small rock” and “Mushroom cloud visible from several miles away.”
After each shot, the weapon is shown charging up before it is ready to fire again. This status is displayed by three small yellow lights on the exterior, as well as a low, audible charging whine. Smaller blasts appear to use less energy than large blasts, since the recharge cycle is shorter or longer depending on the damage caused.
On the Axiom, Eve’s weapon is removed during her service check-up and tested separately from her other systems. It is shown recharging without firing, implying an internal safety or energy shunt in case the weapon needs to be discharged without firing.
While the gun is detached, Wall-E manages to grab it away from the maintenance equipment. Through an unseen switch, Wall-E then accidentally fires the charged weapon. This shot destroys the systems keeping the broken robots in the Axiom’s repair ward secured and restrained.
Awesome but Irresponsible
I am assuming here that BNL has a serious need for a weapon of Eve’s strength. Good reasons for this are:
They have no idea what possible threats may still lurk on Earth (a possible radioactive wasteland), or
They are worried about looters, or
They are protecting their investment in Eve from any residual civilization that may see a giant dropship (See the ARV) as a threat.
In any of those cases, Eve would have to defend herself until more Eve units or the ARV could arrive as backup.
Given that the need exists, the weapon should protect Eve and the Axiom. It fails to do this because of its flawed activation (firing when it wasn’t intended). The accidental firing scheme is an anti-pattern that shouldn’t be allowed into the design.
The only lucky part about Wall-E’s mistake is that he doesn’t manage to completely destroy the entire repair ward. Eve’s gun is shown having the power to do just that, but Wall-E fires the weapon on a lower power setting than full blast. Whatever the reason for the accidental shot, Wall-E should never have been able to fire the weapon in that situation.
First, Wall-E was holding the gun awkwardly. It was designed to be attached at Eve’s shoulder and float via a technology we haven’t invented yet. From other screens shown, there were no physical buttons or connection points. This means that the button Wall-E hits to fire the gun is either pressure sensitive or location sensitive. Either way, Wall-E was handling the weapon unsafely, and it should not have fired.
Second, the gun is nowhere near (relatively speaking) Eve when Wall-E fires. She had no control over it, shown by her very cautious approach and “wait a minute” gestures to Wall-E. Since it was not connected to her or the Axiom, the weapon should not be active.
Third, they were in the “repair ward”, which implies that the ship knows that anything inside that area may be broken and do something wildly unpredictable. We see broken styling machines going haywire, tennis ball servers firing non-stop, and an umbrella that opens involuntarily. Any robot that could be dangerous to the Axiom was locked in a space where it couldn’t do harm. Everything was safely locked down except Eve’s gun. The repair ward was too sensitive an area to allow the weapon to be active.
Extremely sensitive area
Any one of those three should have kept Eve’s gun from firing.
Eve’s gun should have been locked down the moment she arrived on the Axiom, through the gun’s location-aware internal safeties and exterior signals broadcast by the Axiom. Barring that, the gun should have locked itself down and discharged safely the moment it was disconnected from either Eve or the maintenance equipment.
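The layered lockdown described above amounts to an interlock in which any single failed check disables the weapon; here’s a minimal sketch (the condition names are mine, not anything specified in the film):

```python
# Sketch of a layered weapon interlock: every check must pass,
# so any one failure keeps the gun from firing.

def can_fire(attached_to_eve, handler_authorized, in_safe_zone):
    """All interlocks must pass; any single failure locks the weapon."""
    return attached_to_eve and handler_authorized and not in_safe_zone

# Wall-E in the repair ward fails every check:
assert not can_fire(attached_to_eve=False,
                    handler_authorized=False,
                    in_safe_zone=True)

# Eve on an Earth expedition passes all three:
assert can_fire(attached_to_eve=True,
                handler_authorized=True,
                in_safe_zone=False)
```

The point of structuring it this way is defense in depth: Wall-E’s accidental shot required all three safeguards to be absent at once, and any one of them would have prevented it.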
A Possible Backup?
There is a rationale for having a free-form weapon like this: as a backup system for human crew accompanying an Eve probe during an expedition. In a situation where the Eve pod was damaged, or when humans had to take control, the gun could be detached and wielded by a senior officer.
Still, given that it can create mushroom clouds, it feels grossly irresponsible.
In a “fallback” mode, a simple digital totem (such as biometrics or an RFID chip) could tie the human wielder to the weapon, and make sure that the gun was used only by authorized personnel. (Notably Wall-E is not an authorized wielder.) By tying the safety trigger to the person using the weapon, or to a specific action like the physical safeties on today’s firearms, the gun would prevent someone who is untrained in its operation from using it.
If something this powerful is required for exploration and protection, it should protect its user in all reasonable situations. While we can expect Eve to understand the danger and capabilities of her weapon, we cannot assume the same of anyone else who might come into contact with it. Physical safeties, removal of easy to press external buttons, and proper handling would protect everyone involved in the Axiom exploration team.
We know in the film that Control has been working behind the scenes long before the event takes place. The Chem department, for example, has somehow gotten Jules to bleach her hair, and the hair dye “works its way into the blood” as a way to slow her cognition, and make her conform more to the Whore archetype. Additionally, they have been lacing Marty’s marijuana to keep him dazed & confused. (Though, key to the plot, they missed his secret stash.) There’s even an actor placed en route to the eponymous cabin who unsettles the victims with his aggression and direct violent insults to Jules, setting the stage for their suffering. Though these things occur “off stage” of the actual cabin (and the Chem team works off screen), they help tell the story about how deeply embedded Control is in the world, and set the stage for the surveillance interfaces on stage.
Marking the deaths: on screen & ritually
The goal of the scenario is the suffering and death of the victims, in the right order. To provide a visual marker on the monitoring screens, a transparent red overlay is placed over victims who are believed to have been killed.
The choice of red has a natural association with the violence, but red has a number of problems. Visually, it vibrates against blue (according to opponent-process color theory, the red and blue receptors in our retinas are in the same place and can’t perceive both at the same time). It’s also typically used to grab attention, which in this case is the exact wrong signal. Jules is no longer in the picture, and so specifically no attention is needed for her. Better would be to dim her section on the monitor, or remove her altogether, if marking progress is unimportant.
Hadley orders Thorazine
In addition to marking the deaths in the digital interfaces, the deaths must be marked ritually for the system to work. To this end, Sitterson and Hadley act as the human interface that transfers the information from the electronic systems to the Bronze-Age mechanical systems behind them. Though this could be accomplished mechanically, there are ritual words that must be spoken and an amulet that must be kissed by a supplicant.
Sitterson, the senior of the two, recites, “This we offer in humility and fear / For the blessed peace of your eternal slumber / As it ever was.”
After these ritual actions, Hadley raises a roll top wooden panel to reveal a simple switch. Pulling it down initiates a chain of mechanics that ultimately break a vial of blood into a funnel, which channels the blood into grooves carved into a sacrificial slab.
Sitterson and Hadley mark the first sacrifice
The roll top door acts as a physical barrier against accidental activation, and the mechanical switch requires a manageable, but deliberate, amount of force. Both of these features in the interface ensure that it is only done when intended, and the careful mechanical construction ensures that it is done right.
The mission is world-critical, so like a cockpit, the two who are ultimately in control are kept secure. The control room is accessible (to mere humans, anyway) only through a vault door with an armed guard. Hadley and Sitterson must present IDs to the guard before he grants them access.
Sitterson and Hadley pass security.
Truman, the guard, takes and swipes their cards through a groove in a hand-held device. We are not shown what is on the tiny screen, but we do hear the device’s quick chirps to confirm the positive identity. That sound means that Truman’s eyes aren’t tied to the screen. He can listen for confirmation and monitor the people in front of him for any sign of nervousness or subterfuge.
Hadley boots up the control room screens.
The room itself tells a rich story through its interfaces alone. The wooden panels at the back access Bronze Age technology with its wooden-handled gears, glass bowls, and mechanical devices that smash vials of blood. The massive panel at which they sit is full of Space Age pushbuttons, rheostats, and levers. On the walls behind them are banks of CRT screens. These are augmented with Digital Age, massive, flat panel displays and touch panel screens within easy reach on the console. This is a system that has grown and evolved for eons, with layers of technology that add up to a tangled but functional means of surveillance and control.
The interfaces hint at the great age of the operation.
In order for Control to do their job, they have to keep tabs on the victims at all times, even long before the event: Are the sacrifices conforming to archetype? Do they have a reason to head to the cabin?
The nest empties.
To these ends, there are field agents in the world reporting back by earpiece, and everything about the cabin is wired for video and audio: The rooms, the surrounding woods, even the nearby lake.
Once the ritual sacrifice begins, they have to keep an even tighter surveillance: Are they behaving according to trope? Do they realize the dark truth? Is the Virgin suffering but safe? A lot of the technology seen in the control room is dedicated to this core function of monitoring.
The stage managers monitor the victims.
There are huge screens at the front of the room. There are manual controls for these screens on the big panel. There is an array of CRTs on the far right.
The small digital screens can display anything, but a mode we often see is a split into quarters, showing four cameras in the area of the stage. For example, all the cameras fixed on the rooms are on one screen. This provides a very useful peripheral signal in Sitterson and Hadley’s visual field. As they monitor the scenario, motion will catch their eyes. If that motion appears on a monitor where they don’t expect it, they can quickly check what’s happening by turning their head and fixating. This helps keep them tightly attuned to what’s happening in the different areas on “stage.”
For internal security, the entire complex is also wired for video, including the holding cages for the nightmare monsters.
Sitterson looks for the escapees amongst the cubes.
The control room watches the bloody chaos spread.
One screen that kind of confuses us appears to be biometrics of the victims. Are the victims implanted with devices for measuring such things, or are sophisticated non-invasive environmental sensors involved? Regardless of the mechanisms, if Control has access to vital signs, how are they mistaken about Marty’s death? We only get a short glance at the screen, so maybe it’s not vital signs, but simple, static biometrics like height and weight, even though the radiograph diagram suggests more.
Sitterson tries to avoid talking to Mordecai.
Sitterson and Hadley are managing a huge production. It involves departments as broad ranging as chemistry, maintenance, and demolitions. To coordinate and troubleshoot during the ritual, two other communication options are available beyond the monitors: land phone lines and direct-connection, push-to-talk microphones.