The Lifeboat Controls


After Wall-E and Eve return to the Axiom, Otto steals the Earth plant and has his security bot place it on a lifeboat for removal from the ship. Wall-E follows the plant onboard the pod, and is launched from the Axiom when the security bot remotely activates the pod. The pod has an autopilot function (labeled an auto-lock, and not obviously sentient) and a self-destruct function, both of which the security bot activates at launch. Wall-E first tries to turn the autopilot off by pushing the large red button on the control panel. This doesn’t work.


Wall-E then desperately tries to turn off the auto-destruct by randomly pushing buttons on the pod’s control panel. He quickly gives up as the destruct continues counting down and he makes no progress on turning it off. In desperation, Wall-E grabs a fire extinguisher and pulls the emergency exit handle on the main door of the pod to escape.

The Auto-Destruct

There are two phases of display on the controls for the Auto-Destruct system: off and countdown. In its off mode, the area of the display dedicated to the destruct countdown is plain and blue, with no label or number. The large physical button in the center is unlit and hidden, flush with the console. There is no indication of which sequence of keypresses activates the auto-destruct.

When it’s on, the area turns bright red, with a pulsing countdown in large numbers and a large ‘Auto-Destruct’ label on the left. The giant red pushbutton in the center is elevated above the console, surrounded by hazard striping, and lit from within.


The odd part is that when the button in the center gets pushed down, nothing happens. This is the first thing Wall-E tries in order to turn the system off, and it has every affordance of being a button to stop the auto-destruct sequence on the panel in which it sits. It’s possible that this center button is really just a pop-up alert light to add immediacy to the audible and other visual cues of impending destruction.

If so, the pod’s controls are seriously inadequate.

Wall-E wants to shut the system off, and the button is the most obvious choice for that action. Self-destruction is an irreversible process. If accidentally activated, it is something that needs to be immediately shut off. It is also something that would cause panicked decision making in the escape pod’s users.


The blinking button in the center of the control area is the best and most obvious target to “SHUT IT OFF NOW!”

Of course this is just part of the fish-out-of-water humor of the scene, but is there a real reason it’s not responding like it obviously should? One possibility is that the pod is running an authority scan of all the occupants (much like the Gatekeeper for the bridge or what I suggested for Eve’s gun), and is deciding that Wall-E isn’t cleared to use that control. If so, that kind of biometric scanning should be disabled for a control like the Anti-Auto-Destruct. None of the other controls (up to and including the airlock door exit) are disabled in the same way, which causes serious cognitive dissonance for Wall-E.

The Axiom is able to defend itself from anyone interested in taking advantage of this system through the use of weapons like Eve’s gun and the Security robots’ force fields.

Anything that causes such a serious effect should have an undo or an off switch. The duration of the countdown gives Wall-E plenty of time to react, but the pod should accept that panicked response as a request to turn the destruct off, especially as a fail-safe in case its biometric scan isn’t functioning properly, and there might be lives in the balance.
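The fail-safe described here—treat any press during the countdown as an abort request, even when the authority scan cannot identify the user—can be sketched as a tiny state machine. This is purely illustrative; nothing below comes from the film, and all names are invented:

```python
# Illustrative sketch of a fail-safe auto-destruct, not the film's behavior.
# Any input during a countdown aborts, including when the biometric scan
# returns no result (scan broken or user unknown): the irreversible action
# is the one that should require positive confirmation, not the abort.

class AutoDestruct:
    def __init__(self):
        self.counting_down = False

    def activate(self):
        self.counting_down = True

    def press(self, authorized=None):
        """authorized: True, False, or None (scan unavailable/inconclusive)."""
        if not self.counting_down:
            return "idle"
        if authorized is False:
            # The film's pod seems to behave this way for every user; the
            # argument above is that even this branch is dubious for an
            # anti-auto-destruct control.
            return "denied"
        self.counting_down = False  # abort the irreversible process
        return "aborted"
```

The design point is in the `None` branch: when the scan fails, the system errs on the side of stopping the explosion rather than allowing it.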

The Other Controls

No Labels.



This escape pod is meant to be used in an emergency, and so the automatic systems should degrade as gracefully as possible.

The control panel is beautiful, extremely well grouped by apparent function, and incredibly responsive to touch inputs, but labels would have made it usable by even a moderately skilled crewmember in the pilot seat. Labels would also reinforce a crew member’s training in a panic-driven situation.

Buy-N-Large: Beautifully Designed Dystopia


A design should empower the people using it, and reinforce expert training in a situation where memory can be strained by panic. The escape pod has many benefits: clear seating positions, several emergency launch controls, and an effective auto-pilot. Adding extra backups to provide context for a panicked human pilot would add to the pod’s safety and help crew and passengers understand their options in an emergency.


Dust Storm Alert


While preparing for his night cycle, Wall-E is standing at the back of his transport/home. On the back drop door of the transport, he is cleaning out his collection cooler. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. After seeing this, he hurries to finish cleaning his cooler and seal the door of the transport.

A Well Practiced Design

The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. What is interesting is that he doesn’t appear to register a visual response first. Instead, we hear the audio alert first, and Wall-E’s eye-view shows the visual alert only afterward.

Given the order of the two parts of the alert, the audible part was considered the most important piece of information by Wall-E’s designers. It comes first, is omnidirectional as well as loud enough for everyone to hear, and is followed by more explicit information.


Equal Opportunity Alerts

By having the audible alert first, all Wall-E units, other robots, and people in the area would be alerted to a major event. Then, the Wall-E units would be given the additional information, like range and direction, that they need to act. Either because of training or pre-programmed instructions, Wall-E’s vision does not actually tell him what the alert is for, or what action he should take to be safe. This could also be similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.

For humans interacting alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.

Why Not Network It?

Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E was able to see the dust cloud for himself, this feels like very short notice. Too short. A good improvement to the system would be a connection to a weather satellite in orbit, or a weather broadcast in the city. This would allow him to be pre-warned and take shelter well before any of the storm hits, protecting him and his solar collectors.

Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.

The Dropship


The Axiom Return Vehicle’s (ARV’s) first job is to drop off Eve and activate her for her mission on Earth. The ARV acts as the transport from the Axiom, landing on the surface of Earth to drop off Eve pods, then returning after an allotted time to retrieve the pods and return them to the Axiom.

The ARV drops Eve at the landing site by Wall-E’s home, then pushes a series of buttons on her chest. The buttons light up as they’re pushed, showing up blue just after the arm clicks them. At the end of the button sequence, Eve wakes up and immediately begins scanning the ground directly in front of her. She then continues scanning the environment, leaving the ARV to drop off more Eve Pods elsewhere.

If It Ain’t Broke…

There’s an oddity in the ARV’s use of such a crude input device to activate Eve. At first appearance, it seems like a system designed to provide a backup interface for a human user, allowing Eve to be activated by a person on the ground in the event of an AI failure, or a human-led research mission. But this seems awkward in use, because Eve’s front contains no indication of what each button does, or what sequence is required.

A human user of the system would be required to memorize the proper sequence as a physical set of relationships. Without more visual cues, it would be incredibly easy for the person in that situation to push the wrong button to start with, then continue pushing wrong buttons without realizing it (unless they remembered what sound the first button was supposed to make, but then they have one *more* piece of information to memorize. It just spirals out of control from there).
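To put a rough number on how unforgiving an unlabeled sequence is: the film never counts the buttons or the presses, so assume, say, 10 buttons and a 6-press activation code. The space of candidate sequences is then 10^6, with no feedback telling a human which press went wrong:

```python
# Illustrative only: the film never specifies the button count or the
# sequence length, so both numbers here are assumptions.

def possible_sequences(num_buttons: int, sequence_length: int) -> int:
    """Number of press sequences when any button may repeat."""
    return num_buttons ** sequence_length

# 10 unlabeled buttons, 6 presses: a million candidate sequences.
count = possible_sequences(10, 6)
```

Memorized physical relationships are the only thing standing between a technician and that million-sequence haystack.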

What was originally for people is now best used by robots.


So if it’s not for humans, what’s going on? Looking at it, the minimal interface has strong hints of being a legacy design: large physical buttons, a coded interface, and a panel tilted upward for a person standing above it. BNL shows a strong desire to design out people, but leave the interactions in place (see The Gatekeeper). This style of button interface looks like a legacy control kept by BNL because, by the time people weren’t needed in the system anymore, the automated technology had already been adapted to the same situation.

Large hints to this come from the labels. Each label is an abstract symbol, with the keys grouped into two major areas (the radial selector on the top, and the line of large squares on the bottom). For highly trained technicians meant to interact only rarely with an Eve pod, these cryptic labels would either be memorized or referenced in a maintenance manual. For BNL, the problem would only appear after both the technicians and the manual are gone.

It’s an interface that sticks around because it’s more expensive to completely redo a piece of technology than simply iterate it.

Despite the information hurdles, the physical parts of this interface look usable. Angling the panel makes it easier to see the keypad from a standing position, and the keys are large enough to press easily without accidentally landing on the wrong one. The feedback is also excellent, with a physical depression, a tactile click, and a backlight that trails slightly to show the last key hit for confirmation.

If I were redesigning this I would bring in the ability for a basic- or intermediate-skill technician to use this keypad quickly. An immediate win would be labeling the keys on the panel with their functions, or at least their position in the correct activation sequence. Small hints would make a big difference for a technician’s memory.


To improve it even more, I would bring in the holographic technology BNL has shown elsewhere. With an overlay hologram, the pod itself could display real-time assistance, showing the right sequence of keypresses for whatever function the technician needed.

This small keypad continues to build on the movie’s themes of systems that evolve: Wall-E is still controllable and serviceable by a human, but Eve from the very start has probably never even seen a human being. BNL has automated science to make it easier on their customers.

Who did it better? Fingernail-o-matic edition

The Fifth Element


When in The Fifth Element the Mangalore Aknot calls Zorg to report that the “mission is accomplished,” we get a few seconds of screen time with Zorg’s secretary who receives the call. During this moment, she’s a bit bored, and idly shoves a finger into a small, lipstick-case sized device. When she removes it, the device has colored her fingernail a lovely shade of #81002c.


The small device is finger-sized, the industrial design feels very much like cosmetics, and its simple design clearly affords inserting a finger. There’s also a little icon on the side that indicates its color. This one device speaks well of what the entire line of products might look like. All told, a simple and lovely interaction in a domain, i.e. cosmetics, that typically doesn’t get a lot of attention in sci-fi.

But what is even more remarkable is that this isn’t the only fingernail interface in the Make It So survey. There is one other, 7 years earlier, and it happens to be used by someone with the exact same job. This other interface comes from the 1990 movie Total Recall.

Total Recall (1990)


As you can see, this receptionist has an interface for coloring her nails as well, but the interaction is entirely different. This device is something like a tablet with a connected stylus. It displays 16 color options in a full-screen grid. She selects a particular color with a tap of the stylus. Then when she taps the stylus to a nail, the nail wipe-transitions to the new color from tip to cuticle.


This device is cumbersome. It’s not something that could fit into a purse. Does she just leave it on her desk? Doesn’t her supervisor have opinions about that? My sense is that this is something better suited to a salon than an office space.

As a selection and application mechanism, the stylus is a bad choice. It requires quite a bit of precision to tap the tip of the nail. Our old friend Paul Fitts certainly would use something different for his nails. Since the secretary has to have some kind of high-tech coating, perhaps similar to electrophoretic ink, why is the stylus necessary at all? Can’t she just tap her fingernails to the color square of her choice? That would disintermediate the interaction and save her the hassle of targeting her nails with that stylus, especially when she has to switch to her off-hand.
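The Fitts nod can be made concrete. In the Shannon formulation of Fitts’s law, movement time grows with the index of difficulty, ID = log2(D/W + 1), where D is the distance to the target and W its width. A narrow nail tip is a much smaller W than a palette square. (The dimensions below are assumptions for illustration, not measurements from the film.)

```python
import math

def index_of_difficulty(distance_mm: float, width_mm: float) -> float:
    """Shannon formulation of Fitts's law: ID = log2(D/W + 1), in bits."""
    return math.log2(distance_mm / width_mm + 1)

# Assumed dimensions: reaching ~150 mm to hit an 8 mm nail tip,
# versus a 40 mm color square on the tablet.
nail_id = index_of_difficulty(150, 8)     # ≈ 4.3 bits
square_id = index_of_difficulty(150, 40)  # ≈ 2.2 bits
```

Roughly twice the bits of difficulty per tap, repeated ten times per manicure, is exactly the kind of cost a fingertip-sized applicator avoids.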

The color display poses some other interesting problems as well. It needs to show colors, but why just 16? We don’t see any means of selecting others. Are these just this season’s most popular? Why not offer her any color she likes? Or some means of capturing her current outfit and suggesting colors based on that? Even the layout is problematic. Because of the effect of simultaneous contrast, the perception of a color alters when seen directly adjacent to other colors. These squares should have some sort of neutral border around them to make perception of them more “true.” But why should we burden her with having to imagine what the color will look like? Show her an image of her hand and let her see in advance what the new color will look like on her fingers. Any sort of low-level augmented reality would help her feel less like she’s picking paint for her living room wall.

And the winner is…

Comparing the two, I’d say that The Fifth Element fingernail-o-matic wins out. It’s more personal, more ergonomic, fits into the user’s lifestyle more, and feels more fashionable than techy (which that receptionist clearly cares about). Yes, it’s more restricted in choices, but I’d much rather figure out how to augment that little device with a color selector than try to make a stylus-and-tablet fingernail-o-matic actually work.

The Shagpile Cockpit


Barbarella’s rocket ship has a single main room, which is covered wall-to-wall with shagpile carpet. The visual panel of her voice-interface computer, called Alphy, is built into the wall near the back. On the right side of Alphy sits the video phone statue. To the left, a large reproduction of Seurat’s A Sunday Afternoon on the Island of La Grande Jatte masks a door to exit the ship.


A recessed, circular seating space near the front acts as the cockpit. From this position Barbarella can see through a large angled viewport. One nice aspect of the design of the cockpit is that when things are going poorly for the rocket ship, and Barbarella is being buffeted about, the pile keeps the damage to a minimum, and the recessed cockpit is likely to “catch” her and hold her there, in a place where she can try to remedy the situation. (This is exactly what happens when they encounter “magnetic disturbances” on their approach to Tau Ceti.)

Magnetic Disturbances

The control panel for the ship is a wide band of roughly 60 unlabeled black, white, and gray keys, curving around the pilot like amphitheater seating. The keys themselves are of random lengths, stacking in some places two and three to a column. Barbarella presses these keys when she must manually pilot the ship, at one point pressing a particular one several times in quick succession. That action suggests not controls for building up commands, like a computer keyboard, but rather direct-effect controls, like an automobile dashboard, where each key has a different, direct effect.


This keyboard panel lacks any clues as to the functions of the keys for first-time users, but the high contrast and cluster patterns make it easy for expert users like Barbarella (being as she is a 5-star double-rated Astro-Navigatrix) to visually locate a particular key amongst them. But there’s a lot that could be improved. First and most obvious is that the extents of the keyboard spread well beyond her immediate reach. Bringing them within easy reach would mean less physical work. We also know that, like an automotive dashboard, unless these keys are all controlling things with direct, obvious consequences, some status indicators in the periphery of her vision would be damned handy. And even with the unique key configuration, Barbarella would have an even easier time of it with physically differentiated controls, ideally with carefully designed affordances.

The other features of the cockpit, including a concave panel in the wall to her left with large, round, colored lights, and a set of large, reflective black domes on the right hand side of the cockpit, are not seen in use.

Alien Astrometrics


When David is exploring the ancient alien navigation interfaces, he surveys a panel, and presses three buttons whose bulbous tops have the appearance of soft-boiled eggs. As he presses them in order, electronic clucks echo in the cavern. After a beat, one of the eggs flickers, and glows from an internal light. He presses this one, and a seat glides out for a user to sit in. He does so, and a glowing-pollen volumetric projection of several aliens appears. The one before David takes a seat in the chair, which repositions itself in the semicircular indentation of the large circular table.


The material selection of the egg buttons could not be a better example of affordance. The part that’s meant to be touched looks soft and pliable, smooth and cool to the touch. The part that’s not meant to be touched looks rough, like immovable stone. At a glance, it’s clear what is interactive and what isn’t. Among the egg buttons there are some variations in orientation, size, and even surface texture. It is the bumpy-surfaced one, which draws David’s attention to touch first, that ultimately activates the seat.

The VP alien picks up and blows a few notes on a simple flute, which brings that seat’s interface fully to life. The eggs glow green and emit green glowing plasma arcs between certain of them. David is able to place his hand in the path of one of the arcs and change its shape as the plasma steers around him, but it does not appear to affect the display. The arcs themselves appear to be a status display, but not a control.

After the alien manipulates these controls for a bit, a massive, cyan volumetric projection appears and fills the chamber. It depicts a fluid node network mapped to the outside of a sphere. Other node-network clouds appear floating everywhere in the room, along with objects that look like old Bohr models of atoms, but with galaxies at their center. Within the sphere, three-dimensional astronomical charts appear. Additionally, huge rings appear and surround the main sphere, rotating slowly. After a few inputs from the VP alien at the interface, the whole display reconfigures, putting one of the small orbiting Bohr models at the center, illuminating emerald green lines that point to it and a faint sphere of emerald green lines that surrounds it. The total effect of this display is beautiful and spectacular, even for David, who is an unfeeling android.


At the center of the display, David observes that the green-highlighted sphere is the planet Earth. He reaches out towards it, and it falls to his hand. When it is within reach, he plucks it from its orbit, at which point the green highlights disappear with an electronic glitch sound. He marvels at it for a bit, turning it in his hands, looking at Africa. Then after he opens his hands, the VP Earth gently returns to its rightful position in the display, where it is once again highlighted with emerald, volumetric graphics.


Finally, in a blinding flash, the display suddenly quits, leaving David back in the darkness of the abandoned room, with the exception of the small Earth display, which is floating over a small pyramid-shaped protrusion before flickering away.

After the Earth fades, David notices the stasis chambers around the outside of the room. He realizes that what he has just seen (and interacted with) is a memory from one of the aliens still present.



Hilarious and insightful YouTube poster CinemaSins asks in the video “Everything Wrong with Prometheus in 4 minutes or Less,” “How the f*ck is he holding the memory of a hologram?” Fair question, but not unanswerable. The critique only stands if you presume that the display must be passive and must play uninterrupted, like a television show or movie. But it certainly doesn’t have to be that way.

Imagine if this is less like a YouTube video, and more like playback through a game engine: a holodeck StarCraft. Of course it’s entirely possible to pause the action in the middle of playback and investigate parts of the display, before pressing play again and letting it resume its course. But that playback is a live system. It would be possible to run it afresh from the paused point with changed parameters as well. This sort of interrupt-and-play model would be a fantastic learning tool for sensemaking of 4D information. Want to pause playback of the signing of the Magna Carta and pick up the document to read it? That’s a “learning moment,” and one that a system should take advantage of. I’d be surprised if—once such a display were possible—it wouldn’t be the norm.


The only thing I see that’s missing in the scene is a clear signal about the different state of the playback:

  1. As it happened
  2. Paused for investigation
  3. Playing with new parameters (if it was actually available)

David moves from 1 to 2, but the only change of state is the appearance and disappearance of the green highlight VP graphics around the Earth. This is a signal that could easily be missed, and one that wasn’t present at the start of the display. Better would be some global change, like a shift in the overall color, to indicate the different state. A separate signal might compare As it Happened with the results of Playing with new parameters, but that’s a speculative requirement of a speculative technology. Best to put it down for now and return to what this interface is: one of the most rich, lovely, and promising examples of sensemaking interactions seen on screen. (See what I did there?)

For more about how VP might be more than a passive playback, see the lesson in Chapter 4 of Make It So, page 84, VP Systems Should Interpret, Not Just Report.

Touch Walls


When exploring the complex, David espies a few cuneiform-like characters high up on a stone wall. He is able to climb a ladder, decipher the language quickly, ascertain that it is an interface rather than an inscription, and figure out how to surreptitiously operate it. To do so, he puts his finger at the top of one of the grooves and drags downward. The groove illuminates briefly in response, and then fades. He does this to another groove, then presses a dot, and presses another dot not near the first one at all. Finally he presses a horizontal triangle firmly, which after a beat plays a 1:1 scale glowing-pollen volumetric projection.

The material and feedback of this interaction are lovely. The grooves provide a nice, tactile, physical affordance for the gesture. A groove is for dragging. A dot or a shape is for pressing. But I cannot imagine what kind of affordances are available in this language such that David can suss out the order of operation of two undifferentiated grooves. Presuming that the meanings of the dot and triangle are somehow self-evident to speakers of Architect, David has a 50% chance of getting the order of the grooves right. So we might be able to cut this scene some slack.


But a few scenes later, this is stretched beyond credulity. When David encounters a similarly high-up interface, he is able to ascertain on sight that chording—pressing two controls at once—is possible and necessary for operation. For this interface, he presses and drags 14 different chords flawlessly to open the ancient alien door. This is a much longer sequence involving an interaction that has no affordance.


Looking at the design of the command, an evaluation depends on whether it’s just a command or a password. If it’s just a control that means “open the door,” why would it take 14 characters’ worth of a command? Is there that much that this door can do? Otherwise a simple press-to-open seems like a more usable design.

If it’s a door security system, then the 14-part input is a security password. This is the more likely interpretation, since the chamber beyond contains the deadly, deadly xenomorph liquid. With this in mind, it’s a good design to have a 14-part password that includes a required interaction with no affordance. I’m no statistician, but I put the odds of guessing the correct password at around 14 factorial to 1, or 87,178,291,200 to 1. I have no idea what the odds are for guessing the correct operation of an interaction with zero affordance. We’d have to show some aliens MS-DOS to get some hard numbers, but that seems pretty damned secure. Unfortunately, it also stretches the believability of the scene way past the breaking point to presume that David can just observe the alien login screen and guess the giant password.
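The 14-factorial figure checks out, under the assumption that the password uses 14 distinct chords, each exactly once, in an unknown order. A quick sanity check:

```python
import math

# Assumption matching the reasoning above: 14 distinct chords, each used
# exactly once, so guessing the password means guessing one ordering out of
# all permutations of 14 items.
orderings = math.factorial(14)
print(orderings)  # 87178291200
```

If chords could instead repeat, drawn from a set of n possible chords, the space would be n ** 14, which is larger still for any n of 7 or more.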


The MedPod

Early in the film, when Shaw sees the MedPod for the first time, she comments to Vickers that, “They only made a dozen of these.” As she caresses its interface in awe, a panel extends as the pod instructs her to “Please verbally state the nature of your injury.”


The MedPod is a device for automated, generalized surgical procedures, operable by the patient him- (or her-, kinda, see below) self.

When in the film Shaw realizes that she’s carrying an alien organism in her womb, she breaks free from crewmembers who want to contain her, and makes a staggering beeline for the MedPod.

Once there, she reaches for the extended touchscreen and presses the red EMERGENCY button. Audio output from the pod confirms her selection, “Emergency procedure initiated. Please verbally state the nature of your injury.” Shaw shouts, “I need cesarean!” The machine informs her verbally that, “Error. This MedPod is calibrated for male patients only. It does not offer the procedure you have requested. Please seek medical assistance else–”


I’ll pause the action here to address this. What sensors and actuators are this gender-specific? Why can’t it offer gender-neutral alternatives? Sure, some procedures might need anatomical knowledge of particularly gendered organs (say…emergency circumcision?), but given…

  • the massive amount of biological similarity between the sexes
  • the need for any medical device to deal with a high degree of biological variability in its subjects anyway
  • the fact that most procedures are gender-neutral

…this is a ridiculous interface plot device. If Dr. Shaw can issue a few simple system commands that work around this limitation (as she does in this very scene), then the machine could have just done without the stupid error message. (Yes, we get that it’s a mystery why Vickers would have her MedPod calibrated to a man, but really, that’s a throwaway clue.) Gender-specific procedures can’t take up so much room in memory that it was simpler to cut the potential lives it could save in half. You know, rather than outfit it with another hard drive.

Aside from the pointless “tension-building” wrong-gender plot point, there are still interface issues with this step. Why does she need to press the emergency button in the first place? The pod has a voice interface. Why can’t she just shout “Emergency!” or even better, “Help me!” Isn’t that more suited to an emergency situation? Why is a menu of procedures the default main screen? Shouldn’t it be a prompt to speak, and have the menu there for mute people or if silence is called for? And shouldn’t it provide a type-ahead control rather than a multi-facet selection list? OK, back to the action.

Desperate, Shaw presses a button that grants her manual control. She states, “Surgery abdominal, penetrating injuries. Foreign body. Initiate.” The screen confirms these selections amongst the options it displays. (They read “DIAGNOS, THERAP, SURGICAL, MED REC, SYS/MECH, and EMERGENCY.”)

The pod then swings open saying, “Surgical procedure begins,” and tilting itself for easy access. Shaw injects herself with anesthetic and steps into the pod, which seals around her and returns to a horizontal position.

Why does Shaw need to speak in this stilted speech? In a panicked or medical emergency situation, proper computer syntax should be the last thing on a user’s mind. Let the patient shout the information however they need to, like “I’ve got an alien in my abdomen! I need it to be surgically removed now!” We know from the Sonic chapter that the use of natural language triggers an anthropomorphic sense in the user, which imposes some other design constraints to convey the system’s limitations, but in this case, the emergency trumps the needs of affordance subtleties.

Once inside the pod, a transparent display on the inside states that, “EMERGENCY PROC INITIATED.” Shaw makes some touch selections, which runs a diagnostic scan along the length of her body. The terrifying results display for her to see, with the alien body differentiated in magenta to contrast her own tissue, displayed in cyan.



Shaw shouts, “Get it out!!” It says, “Initiating anesthetics” before spraying her abdomen with a bile-yellow local anesthetic. It then says, “Commence surgical procedure.” (A note for the grammar nerds here: Wouldn’t you expect a machine to maintain a single part of speech for consistency? The first, “Initiating…,” is a present participle, while the second, “Commence,” is an imperative.) Then, using lasers, the MedPod cuts through tissue until it reaches the foreign body. Given that the lasers can cut organic matter, and that the xenomorph has acid for blood, you have to hand it to the precision of this device. One slip could have burned a hole right through her spine. Fortunately it has a feather-light touch. Reaching in with a speculum-like device, it removes the squid-like alien in its amniotic sac.

OK. Here I have to return to the whole “ManPod” thing. Wouldn’t a scan have shown that this was, in fact, a woman? Why wouldn’t it stop the procedure if it really couldn’t handle working on the fairer sex? Should it have paused to have her sign away insurance rights? Could it really mistake her womb for a stomach? Wouldn’t it, believing her to be a man, presume the whole womb to be a foreign body and try to perform a hysterectomy rather than a delicate caesarian? ManPod, indeed.


After removing the alien, it waits around 10 seconds, showing it to her and letting her yank its umbilical cord, before she presses a few controls. The MedPod seals her up again with staples and opens the cover to let her sit up.

She gets off the table, rushes to the side of the MedPod, and places all five fingertips of her right hand on it, quickly twisting her hand clockwise. The interface changes to a red warning screen labeled “DECONTAMINATE.” She taps this to confirm and shouts, “Come on!” (Her vocal instruction does not feel like a formal part of the procedure and the machine does not respond differently.) To decontaminate, the pod seals up and a white mist fills the space.

OK. Since this is a MedPod, and it has something called a decontamination procedure, shouldn’t it actually test to see whether the decontamination worked? The user here has enacted emergency decontamination procedures, so it’s safe to say that this is a plague-level contagion. That doesn’t say to me: Spray it with a can of Raid and hope for the best. It says, “Kill it with fire.” We just saw, 10 seconds ago, that the MedPod can do a detailed, alien-detecting scan of its contents, so why on LV-223 would it not check to see if the kill-it-now-for-God’s-sake procedure had actually worked, and warn everyone within earshot that it hadn’t? Because someone needs to take additional measures to protect the ship, and take them, stat. But no, MedPod tucks the contamination under a white misty blanket, smiles, waves, and says, “OK, that’s taken care of! Thank you! Good day! Move along!”

For all of the goofiness that is this device, I’ll commend it for two things. The first is for pushing forward the notion of automated medicine. Yes, in this day and age, it’s kind of terrifying to imagine devices handling something as vital as life-saving surgery, but people in the future will likely find it terrifying that today we’d rather trust the task to an error-prone, bull-in-a-china-shop human. And, after all, the characters have entrusted their lives to an android while they were in hypersleep for two years, so clearly that’s a thing they do.

Second, the gestural control to access the decontamination is well considered. It is a large gesture, one that requires no great finesse on the part of the operator, unlike finding and pressing a sequence of keys, and that is easy to execute quickly and in a panic. I’m not sure what percentage of procedures need the back-up safety of a kill-everything-inside mode, but presuming one is ever needed, this is a fine gesture to initiate that procedure. In fact, it could have been used in other interfaces around the ship, as we’ll see later with the escape pod interface.

I have the sense that in the original script, Shaw had to do what only a few very bad-ass people have been willing to do: perform life-saving surgery on themselves in the direst circumstances. Yes, it’s a bit of a stretch since she’s primarily an archaeologist in the story, but give a girl a scalpel, hardcore anesthetics, and an alien embryo, and I’m sure she’ll figure out what to do. But pushing this bad-assery off to an automated device, loaded with constraints, ruins the moment and changes the scene from potentially awesome to just awful.

Given the inexplicable man-only settings, the requirement that a desperate patient recall FORTRAN-esque syntax for spoken instructions, and the failure to provide any feedback about the destruction of an extinction-level pathogen, we must admit that the MedPod belongs squarely in the realm of goofy narrative technology and nowhere near the real world as a model of good interaction design.

Krell technology

Morbius is the inheritor of a massive underground complex of technology once belonging to a race known as the Krell. As Morbius explains, “In times long past, this planet was the home of a mighty and noble race of beings which called themselves the Krell….”

Morbius tours Adams and Doc through the Krell technopolis.

“Ethically as well as technologically, they were a million years ahead of humankind; for in unlocking the mysteries of nature they had conquered even their baser selves…

“…seemingly on the threshold of some supreme accomplishment which was to have crowned their entire history, this all but divine race perished in a single night.

“In the centuries since that unexplained catastrophe even their cloud-piercing towers of glass and porcelain and adamantine steel have crumbled back into the soil of Altair, and nothing—absolutely nothing—remains above ground.”

Despite this advancement, unless we ascribe to the Krell some sort of extrasensory perception and control, much of the technology we see has serious design flaws.

Morbius plays half-a-million-year-old Krell music.

The first piece of technology is a Krell recorded-music player, which Morbius keeps on the desk in his study. The small cylindrical device stands upright, bulging slightly around its middle. It is made of a gray metal, with a translucent pink band just below the middle. A hollow button sits on top.

The cylinder rests in a clear plastic base, with small, identical metal slugs sitting upright in recesses evenly spaced around it. To initiate music playback, Morbius picks one of the slugs and inserts it into the hollow of the button. He then depresses the momentary button once. The pink translucent band illuminates, and music begins to flow from unseen speakers around the office.

Modern audiences have a good deal of experience with music players, and so the device raises a great many questions. How does a user know which slug relates to what music? The slugs all look the same, so this seems difficult at best. How does a user eject a slug? If by upending the device, one hopes that the cylinder comes free of the base easily; otherwise the other slugs will all fall out as well. It must have impressed audiences to see music contained in such small containers, but otherwise the device is more attractive than usable.

Morbius inputs the combination to open the door.

Many Krell doors are protected by a combination lock. The mechanism stands high enough that Morbius can easily reach out and operate it. Its large circular face has four white triangles printed on its surface at the cardinal points, and other geometric red and yellow markings around the remainder. A four-spoke handle is anchored to a swivel joint at the center of the face. To unlock the door, a user twists the handle such that one of its spokes lines up with the north point, and then angles the handle to touch the spoke to the triangle there, before returning the handle to a neutral angle and twisting to the next position in the combination. When the sequence is complete, the triangles, the tips of the spokes, and a large ring around the face all light up and blink as the two-plane aperture doors slide open.

Even Walter Pidgeon has trouble making sense of this awkward device. There appear to be no snap-to affordances for the neutral angle of the handle or the cardinal orientations, leaving the user unsure if each step in the sequence has been received correctly. Additionally, if the combination consists of particular spokes at this one point, why are the spokes undifferentiated? If the combination consists of pointing to different triangles, why are there four spokes instead of one? Is familiarity with some subtle cue part of the security measures?
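To make the critique concrete, the lock’s behavior can be modeled as a small state machine. This is only a sketch of one plausible reading of the mechanism; the spoke numbers, point names, and the reset-on-error behavior are all assumptions, since the film reveals neither the actual combination nor what happens after a mistake.

```python
# Hypothetical model of the Krell combination lock. The combination is
# an ordered list of (spoke, point) touches; all values are invented.

class KrellLock:
    def __init__(self, combination):
        self.combination = list(combination)  # e.g. [(2, "N"), (0, "N")]
        self.progress = 0                     # steps completed so far
        self.open = False

    def touch(self, spoke, point):
        """Register one touch of a spoke against a cardinal point."""
        if self.open:
            return True
        if (spoke, point) == self.combination[self.progress]:
            self.progress += 1
            if self.progress == len(self.combination):
                self.open = True
        else:
            # The film shows no error feedback; here we assume a wrong
            # touch silently resets the sequence.
            self.progress = 0
        return self.open
```

Note that nothing in `touch()` gives the user any progress feedback along the way, which is exactly the problem the device has on screen: a misjudged step would silently reset the sequence, and the user would have no way to know.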

Morbius shares operation of the Krell encyclopedia.

All of Krell wisdom and knowledge is contained in a device that Morbius shows to Adams and Doc. It consists of an underlit scroll of material sliding beneath a rectangular hole cut in the surface of a table. To illuminate it, Morbius turns one of the two ridged green dials located to the left of the “screen” about 45 degrees clockwise. To move the scroll, Morbius turns the other green dial clockwise as well.

Why is the least frequently used control, i.e., the power dial, placed closer to hand than the more frequently used scroll dial? This arrangement forces the reader to stretch awkwardly. Why is the on-off dial free-spinning? There appear to be only two states, lit and unlit, so the dial should have two states as well. If the content is discretely chunked into pages, that would also argue for a click-stop rather than a free-spinning scroll dial, but we never get a good look at the scroll’s contents. One might also question the value of a scroll as the organizing method for a vast body of information, since related bits of information may be distractingly far apart.

Lower City oppression

Metropolis overview

Laborers in the Lower City live lives of horrible dehumanization, tending to and dying in the maw of the machines. Much of the technology in the early part of the film highlights this aspect of the world of Metropolis.

One shift files in as the other files out.

Access to and from the machine halls is carefully controlled. Between shift changes, laborers line up in regimented rows before the gates. A device on the wall adjacent to the gate shines two square lights up top. When those lights extinguish and the circular light illuminates, the laborers know to begin walking through the gates.

They trudge to large elevators, where an operator turns a crank wheel to raise the containing gate and lower the elevator. This elevator operator keeps his eyes on a gauge positioned uncomfortably high on the wall above the crank.

Freder encounters the worker’s city.

A laborer fails to monitor the temperature of the M-machine.

One exhausted laborer has to control the temperature of the machine. He stands before a panel where a thermometer is mounted in the dead center. Its markings tell him the acceptable maximum temperature. A row of flanges and levers lines the lower part of the wall. Each has a lightbulb above it. When the lightbulb illuminates, the laborer must activate that flange. The pattern of blinking lights is difficult to keep up with, and the work is exhausting.

11811 struggles to keep up with his task at the machine.

When Freder returns to the Lower City, he sees a laborer tending a particularly pointless machine. The laborer holds two arms on a human-sized, clock-like face. Instead of numbers, the face is ringed with lightbulbs. Every half second, the two blinking bulbs dim, and a new, completely random pair begins blinking. The laborer’s task is to turn the hands so that each one points to a blinking light.
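The machine’s design is simple enough to simulate. Here is a toy sketch, with entirely invented parameters (the film specifies neither the number of bulbs nor the exact tempo), that tallies how much hand travel the machine demands. Because the targets are random, the workload never settles into a learnable rhythm:

```python
import random

# Toy simulation of 11811's machine. BULBS and TICKS are guesses made
# for illustration only; the film gives no actual numbers.

BULBS = 20   # bulbs ringing the clock-like face
TICKS = 120  # one minute of work at two new pairs per second

def angular_distance(a, b, n=BULBS):
    """Shortest distance around the ring, in bulb positions."""
    d = abs(a - b) % n
    return min(d, n - d)

def simulate(seed=0):
    """Total hand travel demanded over TICKS random target pairs."""
    rng = random.Random(seed)
    hands = [0, BULBS // 2]
    travel = 0
    for _ in range(TICKS):
        targets = rng.sample(range(BULBS), 2)
        # Greedily assign each hand to the cheaper target; the real
        # operator has no time for anything cleverer.
        a = (angular_distance(hands[0], targets[0])
             + angular_distance(hands[1], targets[1]))
        b = (angular_distance(hands[0], targets[1])
             + angular_distance(hands[1], targets[0]))
        travel += min(a, b)
        hands = targets if a <= b else [targets[1], targets[0]]
    return travel
```

Even with the operator always choosing the optimal assignment of hands to lights, the random targets guarantee a steady stream of demanded motion with no pattern to learn, which is the dehumanizing point of the scene.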

Feeling for his brother, Freder offers to take 11811’s place at the machine.

Though the task is too fast and random to be sustainable, it is so apparent that Freder can offer to stand in after only a few seconds of watching.