Sci-fi Spacesuits: Interface Locations

A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?

Chest panels

Chest panels are those that require only that the wearer look down to manipulate them. They sit within easy range of motion for the wearer’s hands. The main problem with this location is the hard trade-off between visibility and bulkiness.

Arm panels

Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. This is a greater technological challenge than a chest panel to keep components small and thin enough to be unobtrusive. It also provides some interface challenges to squeeze information and controls into a very small, horizontal format. The survey shows only three arm panels.

The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.

2001: A Space Odyssey (1968)

The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.

The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full color and nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.

Side note: One nifty analog interface is the forearm mirror. This isn’t an invention of sci-fi; it actually appears on real-world EVA suits. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what’s behind them and the interface on the chest. So spacesuits have mirrors on the forearm to enable a quick view with just arm movement. This was showcased twice in the movie Mission to Mars.


The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example is Sunshine. This HUD provides a real-time view of each other individual to whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.

These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn’t look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.

Star Trek: Discovery’s pilot episode included a sequence that found Michael Burnham flying 2000 meters away from the U.S.S. Discovery to investigate a mysterious Macguffin. The HUD helped her with wayfinding, navigating, tracking time before lethal radiation exposure (a biological concern, see the prior post), and even doing a scan of things in her surroundings, most notably a Klingon warrior who appears wearing unfamiliar armor. Reference information sits on the periphery of Michael’s vision, but the augmentations appear mapped to her view. (This raises the same issues of binocular parallax seen in the Iron HUD.)

Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.

Avengers: Infinity War (2018)

Aside from HUDs, what we see in the survey is similar to what exists in real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.

Inputs illustrate paradigms

Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon to the simple and restrained numerical button panel of 2001, to strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), and the ham-handed touch screens of Mission to Mars.

Destination Moon (1950)
2001: A Space Odyssey (1968)

As the pictures above reveal, the input panels reflect the familiar technology of the time of the creation of the movie or television show. The 1950s were still rooted in mechanistic paradigms, the late 1960s interfaces were electronic pushbutton, the 2000s had touch screens and miniaturized displays.

Real world interfaces

For comparison and reference, NASA’s EMU has a control panel on the front, called the Display and Control Module (DCM), where most of the controls for the EMU sit.

The image shows that these inputs are very different from what we see as inputs in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to allow for a minimum of failure points and in-field debugging and maintenance, and well-protected from accidental actuation with guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.
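The mirror-reading convention is easy to sketch. Assuming a display that renders plain text, reversing each line’s character order approximates how a label would be laid out so that the wrist mirror shows it reading left to right (real mirror-printing also flips the glyphs themselves; this sketch ignores that):

```python
def mirror_text(lines):
    """Reverse each line so its character order reads correctly
    in a wrist mirror. (Real mirror-printing also flips the glyphs
    themselves; this sketch only reverses character order.)"""
    return [line[::-1] for line in lines]

# A hypothetical suit-status label, reversed for the mirror.
print(mirror_text(["O2 TIME 042"]))  # → ['240 EMIT 2O']
```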

The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.

The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.
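The Mode Selector’s cycling behavior amounts to a ring over the value list. A minimal sketch, with invented labels and values standing in for the real DCM’s sixteen:

```python
class DCMDisplay:
    """Hypothetical sketch of the DCM's 12-character LCD cycling
    through its value list via the Mode Selector Switch."""

    def __init__(self, values):
        self.values = values  # list of (label, value) pairs
        self.index = 0

    def mode_forward(self):
        # Pressing the switch forward advances to the next value.
        self.index = (self.index + 1) % len(self.values)

    def mode_backward(self):
        # Pressing it backward steps to the previous value.
        self.index = (self.index - 1) % len(self.values)

    def readout(self):
        label, value = self.values[self.index]
        return f"{label} {value}"[:12]  # truncate to the 12-char LCD

dcm = DCMDisplay([("O2 TIME", "042"), ("O2 VOL", "87%"), ("PRESS", "4.3")])
dcm.mode_forward()
print(dcm.readout())  # → O2 VOL 87%
```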

A NASA image tweeted in 2019.

The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid in this examination, the blade runner uses a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflect the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal cortex. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. It’s a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
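The lower-boundary highlight is straightforward to express: given the sampled diameters and a human-norm threshold, find the spans where the measurement dips below it. A minimal sketch, with invented sample data and threshold:

```python
def dips_below(samples, threshold):
    """Return (start, end) index spans where samples fall below threshold."""
    spans, start = [], None
    for i, value in enumerate(samples):
        if value < threshold and start is None:
            start = i  # a dip begins
        elif value >= threshold and start is not None:
            spans.append((start, i))  # the dip ends
            start = None
    if start is not None:
        spans.append((start, len(samples)))  # dip runs to end of data
    return spans

# Pupillary diameter in mm, sampled over the interview (invented data).
diameters = [4.1, 4.0, 3.2, 3.1, 3.9, 4.2, 2.8]
print(dips_below(diameters, threshold=3.5))  # → [(2, 4), (6, 7)]
```

The spans are exactly what the interface would shade on the timeline to draw the blade runner’s eye.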


I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
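A sketch of how that below-norm comparison might work, flagging per-thumbnail frames whose skin temperature sits too far under the expected blush median; every number here is invented for illustration:

```python
def blush_deficit(face_temps, norm_median, tolerance=0.3):
    """Flag frames where facial temperature (degrees C) falls below
    the expected blush norm by more than the tolerance. All values
    here are invented for illustration."""
    return [temp < norm_median - tolerance for temp in face_temps]

temps = [34.9, 35.1, 34.2, 34.1]  # per-thumbnail skin temperature
flags = blush_deficit(temps, norm_median=35.0)
print(flags)  # → [False, False, True, True]
```

Each True frame is one where the thumbnail would read as “too pale” against its norm-colored background.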

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the text is not unduly influencing the results, let’s add an overlay to the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.

For this comp, I added a gray overlay to the columns where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
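That consolidated highlight is just a logical AND across the per-column flags from the two measurements. A sketch of finding the columns to overlay (the flag lists are invented):

```python
def trouble_columns(pupil_flags, blush_flags):
    """Columns where both pupil and blush responses indicate trouble."""
    return [i for i, (p, b) in enumerate(zip(pupil_flags, blush_flags))
            if p and b]

print(trouble_columns([True, True, False, True],
                      [False, True, False, True]))  # → [1, 3]
```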

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutter so many sci-fi interfaces. It’s there to be inscrutable, but to convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.


Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Brain Upload

Once Johnny has installed his motion detector on the door, the brain upload can begin.

3. Building it

Johnny starts by opening his briefcase and removing various components, which he connects together into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless looking pieces.


It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one-piece computing devices with integrated functionality, and keyboards, mice, cameras, printers, and headphones that connect wirelessly.

Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick – and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces.

Hotel Remote

The Internet 2021 shot that begins the film ends in a hotel suite, where it wakes up lead character Johnny. This is where we see the first real interface in the film. It’s also where this discussion gets more complicated.

A note on my review strategy

As a 3D graphics enthusiast, I’d be happy just to analyze the cyberspace scenes, but when you write for Sci Fi Interfaces, there is a strict rule that every interface in a film must be subjected to inspection. And there are a lot of interfaces in Johnny Mnemonic. (Curse your exhaustive standards, Chris!)

A purely chronological approach would spend too much time looking at trees and not enough at the forest. So I’ll be jumping back and forth a bit, starting with the gadgets and interfaces that appear only once, then moving on to the recurring elements, variations on a style or idea that are repeated during the film.


The wakeup call arrives in the hotel room as a voice announcement—a sensible if obvious choice for someone who is asleep—and also as text on a wall screen, giving the date, time, and temperature. The voice is artificial sounding but pleasant rather than grating, letting you know that it’s a computer and not some hotel employee who let himself in. The wall display functions as both a passive television and an interactive computer monitor. Johnny picks up a small remote control to silence the wake up call.


This remote is a small black box like most current-day equivalents, but with a glowing red light at one end. At the time of writing blue lights and indicators are popular for consumer electronics, apparently following the preference set by science fiction films and noted in Make It So. Johnny Mnemonic is an outlier in using red lights; we’ll see more of these as the film progresses. Here the glow might be some kind of infrared or laser beam that sends a signal, or it might simply indicate the right way to orient the control in the hand for the controls to make sense.

Jefferson Projection


When Imperial troopers intrude to search the house, one of the bullying officers takes interest in a device sitting on the dining table. It’s the size of a sewing machine, with a long handle along the top. It has a set of thumb toggles along the top, like old cassette tape recorder buttons.

Saun convinces the officer to sit down, stretches the thin script with a bunch of pointless fiddling of a volume slider and pantomimed delays, and at last fumbles the front of the device open. Hinged at the bottom like a drawbridge, it exposes a small black velvet display space. Understandably exasperated, the officer stands up to shout, “Will you get on with it?” Saun presses a button on the opened panel, and the searing chord of an electric guitar can be heard at once.


Inside the drawbridge-space a spot of pink light begins to glow, mesmerizing the officer who, moments ago, was bent on brute intimidation, but who now spends the next five minutes and 23 seconds grinning dopily at the volumetric performance by Jefferson Starship.

During the performance, 6 lights blink in a pattern in the upper right-hand corner of the display. When the song finishes, the device goes silent. No other interactions are seen with it.


Many questions. Why is there a whole set of buttons to open the thing? Is this the only thing it can play? If not, how do you select another performance? Is that what those unused buttons on the top are for? Why are the buttons unlabeled? Is Jefferson Starship immortal? How is it that they have only aged in the long, long time since this was recorded? Or was this volumetric recording somehow sent back in time? Where is the button that Saun pressed to start the playback? If there was no button, and it was the entire front panel, why doesn’t it turn on and off while the officer taps (see above)? What do the little lights do other than distract? Why is the glow pink rather than Star-Wars-standard blue? Since volumetric projections are most often free-floating, why does this appear in a lunchbox? Since there already exist ubiquitous display screens, why would anyone haul this thing around? How does this officer keep his job?

Perhaps it’s best that these questions remain unanswered. For if anything were substantially different, we would risk losing this image, of the silhouette of the lead singer and his microphone. Humanity would be the poorer for it.


The Groomer

The groomer is a device for sale at the Wookiee Planet Trading Post C by local proprietor Saun Dann. It looks like a dust brush with an OXO-designed, black, easy-grip handle, with a handful of small silver pushbuttons on one side (maybe…three?), and a handful of black buttons on the other (again, maybe three). It’s kind of hard to call it exactly, since this is lower-res than a recompressed I Can Haz Cheezburger jpg.


Let’s hear Saun describe it to the vaguely menacing Imperial shopper in his store.

Besides shaving and hair trimming, it’s guaranteed to lift stains off clothing, faces, and hands. Cleans teeth, fingers and toenails, washes eyes, pierces ears, calculates, modulates, syncopates life rhythms, and can repeat the Imperial Penal Code—all 17 volumes— in half the time of the old XP-21. Just the thing to keep you squeaky clean.

There are so many, many problems with this thing. On every level it’s wretched.

Containment unit

With a ghost ensconced in a trap, the next step in ghostbusting is to transfer the trap to a containment unit.  Let’s look at the interaction.

The containment unit is a large device built into a wall of the old firehouse that serves as the Ghostbusters headquarters. It’s painted a fire-truck red and has two colored bulbs above it. As they approach, the green bulb is lit. It’s got a number of buttons, levers, and cables extending into it. Fortunately for purposes of discussion, Stantz has to explain it to their new employee Winston Zeddmore, and I can just quote him.


“This is where we store all the vapors, entities, and slimers that we trap. Very simple, really. Loaded trap here. Unlock the system…” He grabs the red door lever and cranks it counterclockwise 90 degrees and lowers the door to reveal a slot for the trap.

“Insert the trap,” he continues, and a sucking sound is heard and the green lightbulb goes off and the red lightbulb turns on.

Then Stantz pulls the trap out of the slot and is able to, as he explains, “Release. Close. Lock the system.” (Which he does with the lever handle.)


Next, he presses the buttons on the front of the device, starting with the top red one and continuing with the yellow one below it, explaining, “Set your entry grid. Neutronize your field…” Then he grabs the red lever on the right-hand side and pushes it down. In response, the lowest push button lights up green, the red bulb above turns off, and the green bulb illuminates once again.

Stantz concludes, “When the light is green, the trap is clean. The ghost is incarcerated here in our custom-made storage facility.”


The interaction here is all based on the unknown ghostbusting technology, but it certainly feels very 1.0, very made-by-engineers, which is completely appropriate to the film. There’s also that nice rhyming mnemonic for the meaning of the colored bulbs, which helps Zeddmore remember it immediately. And of course with the red paint and thick plates, it feels really secure and conveys a sense of pith and importance. Still, if they had a designer consulting, that designer would most likely talk with them about a few aspects of the workflow.


First, why, if there’s no breakpoint between the entry grid and the field neutronizer, can’t those two be consolidated into a single button? A gridtronizer? While we’re on the buttons, why does that third one look like a button but act just like a light? If it’s not meant to be pressed, let’s make it an indicator light, like we see on the trap.

Similarly, why do they have to press that last lever and wait for the green light? I get that a variety of controls feels better to convey a complicated technology that’s been hacked together, and that would be appropriate for a user to understand as well, but it seems error-prone and unnecessary. Better would be another pushbutton that would stay depressed until the unit was doing whatever it was doing behind the scenes, and then release when it was done. It could even be consolidated with the gridtronizer.


But while we’re including automation in the process, why would the ghostbuster have to press anything at all? If the unit can detect when a ghost has been sucked in (which it does) then why can’t it do all the other steps automatically? I know, it would be less juicy for the audience’s sense of ghostbusting technological complexity, but for the “real world,” such things should be fully assistive:

  1. Insert the trap (which gets locked in place)
  2. Watch the machine’s lights as they indicate its four steps
  3. Remove the unlocked trap

You might think, for efficiency, to have the trap release immediately, but you really want the Ghostbuster’s attention on the system in case something goes wrong, similar to the way ATMs/bancomats hold on to your card throughout a transaction.
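That fully assistive flow is simple enough to sketch as a state machine. Here’s a minimal Python sketch; every class, step name, and label is invented for illustration, not taken from the film:

```python
from enum import Enum, auto


class Step(Enum):
    IDLE = auto()
    TRAP_LOCKED = auto()
    GRID_SET = auto()
    FIELD_NEUTRONIZED = auto()
    GHOST_TRANSFERRED = auto()
    TRAP_CLEAN = auto()


class ContainmentUnit:
    """Hypothetical fully assistive containment unit: the operator only
    inserts and removes the trap; every other step runs automatically."""

    def __init__(self):
        self.step = Step.IDLE
        self.trap_locked = False

    def insert_trap(self):
        # Like an ATM holding your card, the trap stays locked for the
        # whole cycle so the operator's attention stays on the unit.
        self.trap_locked = True
        self._advance(Step.TRAP_LOCKED)
        for step in (Step.GRID_SET, Step.FIELD_NEUTRONIZED,
                     Step.GHOST_TRANSFERRED, Step.TRAP_CLEAN):
            self._advance(step)  # each step would light its own indicator
        self.trap_locked = False  # the trap releases when the light is green

    def _advance(self, step):
        self.step = step

    def remove_trap(self):
        if self.trap_locked:
            raise RuntimeError("trap is locked until the cycle completes")
        self.step = Step.IDLE
```

The point of the sketch is that the operator’s entire vocabulary is insert and remove; the gridtronizing is nobody’s job but the machine’s.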

Lastly, there should be some sense of what’s contained. In this scene there’s just Slimer in there, but as business picks up, the unit gets so jammed full that when EPA representative Peck recklessly shuts it down, it…you know…explodes with ghosts. Would a sense of the contents have conveyed the danger to him? A counter, a gauge, a window into the space, a “virtual window” of closed-circuit television showing the inside of the unit*, or a playback of helmet-cam video of the ghosts as they were captured: any of these would help convey that, Mr. Peck, you do not want to eff with this machine.
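Even the simplest of those, a counter with thresholds, would do the job. A quick Python sketch (the capacity figure and wording are made up, not from the film):

```python
def containment_status(ghost_count, capacity=50):
    """Map occupancy to a displayable status line, so anyone glancing
    at the unit can see how dangerous a shutdown would be."""
    fill = ghost_count / capacity
    if fill < 0.5:
        level = "NOMINAL"
    elif fill < 0.9:
        level = "FILLING"
    else:
        level = "CRITICAL - DO NOT POWER DOWN"
    return f"{ghost_count}/{capacity} contained: {level}"
```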


*IMDB trivia for the movie says this was originally included in the script but was too depressing to visualize so it was cut. But hey, if it’s depressing, maybe that would help its users consider the ethics of the situation. (Once again, thank you, @cracked, and RIP.)



After Rico’s fatal mistake in the live fire exercise, he is disgraced, relieved of squad command, and subject to corporal punishment. At the time of his punishment, the squads stand at attention around the square as Rico approaches the pillory at its center. Sergeant Zim pulls the restraints down from housings in the frame and loops them around Rico’s wrists. Then, he activates the interface, which is a hand-sized chrome button on the side of the frame.

With a single slap of the huge button, the restraints pull up and hold Rico’s arms at their fullest extents, simultaneously disabling him and giving some adolescents in the audience feelings they would not come to terms with for years.


There’s a basic improvement that can be made, which is for the control to indicate the status. Yes, the status is apparent from a glance at the restraints. So it’s not an essential improvement. But as a general rule, you want to save the user from having to check some other place for the status of a system. Output where you input.
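“Output where you input” is easy to formalize. As a sketch (Python, with all names invented), here is a latching control that reports system status right at the point of input:

```python
class LatchingControl:
    """A control whose indicator lives on the control itself, so the
    operator never has to look elsewhere to confirm system state."""

    def __init__(self, label):
        self.label = label
        self.engaged = False

    def toggle(self):
        self.engaged = not self.engaged
        return self.indicator()

    def indicator(self):
        # A big knife switch gets this for free: its thrown position
        # *is* the status display. Here we render it as text.
        state = "ENGAGED" if self.engaged else "RELEASED"
        return f"[{self.label}: {state}]"
```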

A more important improvement is related to the fact that this is a public event, a piece of authoritarian theater. With that in mind, a big knife switch with a loud thunk would add to the drama of the moment and make more of an impression on the audience. Which is the point. And, incidentally, it would solve the apparent-state problem from the prior paragraph, for a win-win all around. Except for the incredibly painful flogging that comes next.


Nothing we can do about that, right? Go, fascism.

In case of evasion, BREAK GLASS

  • You sent for me, sir?
  • Yes I did…I did not, however, invite you to sit, Lieutenant.
  • Sorry, sir.
  • Are you aware that we have just lost contact with the Rodger Young?
  • Everyone’s talking about it, sir.
  • Well, I have the video feed from the bridge here. I understand you are the designer of the emergency evasion panel, and the footage raises some fundamental questions about that design. Watch with me now, Lieutenant.
  • As you can see, immediately after Captain Deladier issues her order, your panel slides up from a recess in the dash.
  • (He pauses the video)
  • (After a silence)
  • Is there a question, sir?
  • Why is this panel recessed?
  • To prevent accidental activation, sir.
  • But it’s an emergency panel. For crisis situations. It takes two incredibly valuable seconds for this thing to dramatically rise up. What else do you imagine that pilot might have done with those extra two seconds?
  • I…
  • Don’t answer that. It’s rhetorical. Next I need you to not explain this layout. Why aren’t the buttons labeled? What does that second one do, and why does it look exactly the same as the emergency evasion button? Are you deliberately trying to confuse our pilots?
  • (Stares.)
  • OK, now I actually do want you to explain something.
  • (Resuming the video)
  • Why did you cover the panel in glass? Ibanez—and I can’t believe I’m saying this—punches it.
  • The glass is there also to prevent accidental activation, sir.
  • But you already covered that with the time-wasting recession. You know she’s likely to have tendon, nerve, and arterial damage now, right? And she’s a pilot, Lieutenant. Without her hands, she’s almost useless to us. And now, in addition to having a giant, peanut-shaped boulder in their face, they’ve got a bridge full of loose glass shards scattered about. Let’s hope the artificial gravity lasts long enough for them to get a broom, or they’re going to be in for some floating laceration ballet.
  • That would be unfortunate, sir.
  • Damn right. Now honestly I might be of a mind to simply court martial you and treat you to some good old Federation-approved public flogging for Failure to Design. But today may be your lucky day. I believe your elegant design decisions were exacerbated by the pilot’s being something of a drama queen.
  • The glass was designed to be lifted off, sir.
  • (Resuming the video)
  • Fair enough. My last question…
  • Did I see correctly that all of the lights underneath the engine boost light up all at once? The ones labeled POWER ON? AUTO HOME? NOSE RAM? The ones that don’t have anything to do with the engine boost?
  • And…and the adjacent green LED, sir.
  • All at once.
  • Sir.
  • (Sighs)
  • Well, as you might not be able to imagine, we’re moving you. After you collect your belongings you are to report to the Reassignment Office.
  • (He scrubs back and forth over the drone video of the communication tower ripping off.)
  • Out of curiosity, WOODS, what was the last thing you designed as part of my department?
  • The Buenos Aires Missile Defense System, sir.
  • I’ll look into it. Dismissed.

Otto’s Manual Control



When it refused to give up authority, the Captain wrested control of the Axiom from its artificial intelligence autopilot, Otto. Otto’s body is the helm wheel of the ship, and it fights back against the Captain, wanting to fulfill BNL’s orders to keep the ship in space. As they fight, the Captain dislodges a cover panel, revealing Otto’s off switch. When the Captain sees the switch, he immediately realizes that he can regain control of the ship by deactivating Otto. After the Captain fights his way to the switch and flips it, Otto deactivates and the helm reverts to a manual control interface for the ship.

Next to the on/off switch, a panel of buttons showing Otto’s current status dims half its lights when the Captain switches over to manual; the dimmed icons indicate which systems are now offline. The Captain then effortlessly returns the ship to its proper flight path with a quick turn of the controls.

One interesting note is the similarity between Otto’s stalk control keypad and the keypad on the Eve Pod. Both have a circular button in the middle, with blue buttons in a semi-radial pattern around it. Given the Eve Pod’s interface, this is likely also a series of start-up buttons or option commands. The main difference is that Otto’s are all lit, whereas the Eve Pod’s buttons were dim until pressed. Since every other interface on the Axiom glows when in use, it looks like all of Otto’s commands and autopilot options are active when the Captain deactivates him.

A hint of practicality…

The panel is in a place that is accessible and would be easily located by service crew or trained operators. Given that the Axiom is a spaceship, the systems on board are probably heavily regulated and redundant. However, the panel isn’t easily visible thanks to specific decisions by BNL. This system makes sense for a company that doesn’t think people need or want to deal with this kind of thing on their own.

Once the panel is open, the operator has a clear view of which systems are on, and which are off. The major downside to this keypad (like the Eve Pod) is that the coding of the information is obscure. These cryptic buttons would only be understandable for a highly trained operator/programmer/setup technician for the system. Given the current state of the Axiom, unless the crew were to check the autopilot manual, it is likely that no one on board the ship knows what those buttons mean anymore.


Thankfully, the most important button is labeled in clear English. We know English is important to BNL because it is the language of the ship and the language taught to the new children on board. Anyone who had an issue with the autopilot system and could locate the button would know which press would turn Otto off (as we then see the Captain immediately do).

Considering that Buy-N-Large’s mission is to create robots to fill humans’ every need, saving them from every tedious or unenjoyable job (garbage collecting, long-distance transportation, complex integrated systems, sports), it was both interesting and reassuring to see that there are manual overrides on their mission-critical equipment.

…But hidden

The opposite situation could get a little tricky, though. If the ship were in manual mode, with the door closed and no qualified or trained personnel on the bridge, it would be incredibly difficult for the crew to figure out how to physically turn the ship back to autopilot. A hidden emergency control is useless in an emergency.

Hopefully, considering the heavy use of voice recognition on the ship, there is a way for the ship to recognize an emergency situation and quickly take control. We know this is possible because we see the ship completely take over and run through a Code Green procedure to analyze whether Eve had actually returned a plant from Earth. In that instance, the ship only required a short, confused grunt from the Captain to initiate a very complex procedure.

Security isn’t an issue here because we already know that the Axiom screens visitors to the bridge (the Gatekeeper). By tracking who is entering the bridge using the Axiom’s current systems, the ship would know who is and isn’t allowed to activate certain commands. The Gatekeeper would either already have this information coded in, or be able to activate it when he allowed people into the bridge.

For very critical emergencies, a system that could recognize a spoken ‘off’ command from senior staff or trained technicians on the Axiom would be ideal.
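Sketched in Python (the roles, phrasing, and command grammar are all invented for illustration), the authorization half of that is tiny, and the Gatekeeper already has the data to drive it:

```python
AUTHORIZED_ROLES = {"captain", "first_mate", "autopilot_technician"}


def handle_voice_command(speaker_role, utterance):
    """Accept a spoken 'off' command only from senior staff or trained
    technicians, as identified by the bridge's existing gatekeeping."""
    if "off" not in utterance.lower().split():
        return "ignored"
    if speaker_role not in AUTHORIZED_ROLES:
        return "denied: not authorized to deactivate autopilot"
    return "autopilot deactivated; manual control enabled"
```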

Anti-interaction as Standard Operating Procedure


The hidden door and the obscure, hard-wired off switch continue the mission of Buy-N-Large: to encourage citizens to give up control for comfort, and to make it difficult to undo that decision. Seeing as the citizens are more than happy to give up that control at first, it looks like a profitable assumption for Buy-N-Large, at least in the short term. In the long term, we can take comfort that the human spirit, aided by an adorable little robot, will prevail.

So for BNL’s goals, this interface is fairly well designed. But for the real world, you would want some sort of graceful degradation that would enable qualified people to easily take control in an emergency. Even the most highly trained technicians appreciate clearly labeled controls and overrides so that they can deal directly with the problem at hand rather than fighting with the interface.