Police light


This post (the first in what is going to amount to The Fifth Element Police Week. What is this, sweeps?) is going to veer to the edge of interaction design, getting into the Venn overlap of industrial design and wearable tech.

The police seen throughout New York each wear uniforms that feature a large, circular, glowing light over the right side of their chest.

There are only two positive things to say about this police light. One: Yes, it looks cool. Two: It certainly gives nearby citizens a clear, attention-getting signal that something is up. This might be OK for community relations officers, who only ever interface with the public. But when it comes to dealing with actual criminals, it’s a terrible idea.


It’s a terrible idea because of its placement

Imagine this scene from the chief’s perspective. When he addresses Leeloo down the pipe as she’s standing on the ledge of the building, he is in an isosceles stance, with his shoulders squared to the target and his weapon held in front of his heart. This common stance would place the weapon directly in the glow of the circle. This means that his forearms and weapon will be the brightest things in his field of vision, and distracting. This might be manageable by coating his uniform and the back of the weapon with a super-black coating to absorb much of this light. But, depending on the distance of the target, it is also likely to place the perp in shadow, making them harder to see and harder to hit.

Looking at the officer on the right, we see he is taking a different stance. He is “bladed” to the target, closer to a Weaver stance, with his body turned a bit sideways. This stance turns the light to the adjacent wall, which minimizes the backscatter and perp-shadow effects, but also aims the light toward his fellow officer, possibly distracting him or her. That’s a pretty crappy design. But wait, it gets worse.


It’s a terrible idea because it’s a giant, glowing target

What’s worse is to imagine the scene from the perspective of the perp, say, the Mondoshawans in the airport. They specifically want to shoot the police in the crowd, and all they have to do is shoot towards the glowing discs. That’s right, the police in 2263 are actually wearing attention-drawing targets. Admittedly, if you are going to get shot in the line of duty, you’d rather draw fire away from the head to a place with a solid slab of bone and lots of body armor. But why draw their fire in the first place?

As we saw in another post, Zorg believes in the fallacy/parable of the broken window, and so favors a bit of destruction that encourages market activity. We also know from the film that he has a lot of control over the NYPD. It might be that he’s deliberately sabotaging the police through this design to encourage the sale of more body armor and weapons, but are we to believe that the cops themselves are willing to go along with this? C’mon. They’re smarter than this.

Improve it with a little bit of smarts

Outfit the light with a little agentive smarts, and most of these problems could be fixed. The light could simply dim when it’s counterproductive to have it illuminated. Proximity sensors can sense when the officer’s arm is in the way. Context-aware sensors can sense when it might blind another officer. It would take a lot of smarts to know when the officer is being targeted by a weapon, but certainly simple audio sensors should shut it off at the sound of gunfire.
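To sketch that logic, a few prioritized rules would do. Here’s a speculative sketch in Python; the sensor inputs and dim levels are my assumptions, not anything from the film:

```python
def light_level(arm_in_beam: bool, officer_in_beam: bool,
                gunfire_detected: bool, base_level: float = 1.0) -> float:
    """Return a brightness level (0.0 to 1.0) for the uniform light.

    Dims or kills the light whenever illumination would be
    counterproductive, per the sensors described above.
    """
    if gunfire_detected:      # audio sensor: go dark entirely
        return 0.0
    if arm_in_beam:           # proximity sensor: avoid backscatter
        return base_level * 0.1
    if officer_in_beam:       # context-aware sensor: don't blind a colleague
        return base_level * 0.25
    return base_level         # all clear: full community-relations glow
```

Gunfire wins over everything; otherwise the light only dims, preserving its community-relations function.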

Zorg’s desk


When Zorg begins to choke on a cherry pit, in his panic he pounds a numeric keypad on his desk, clearly hoping that this will contact someone or help him in some way. His clumsy mashing instead causes a number of bizarre things to happen around his office: the doors lock, a lifejacket inflates (bearing the charming label “HEAD THROUGH HOLE”), a cactus raises and lowers, a Rolodex of photographs appears and spins wildly, a rack begins to shoot plastic-wrapped tuxedo shirts into the air, cards spit out of a slot, and a strange piglet-sized, hairless pet with a trunk is roused from its napping place as it rises to the surface of the desk and stares at Zorg helplessly.

In the talk I give about the lines of influence between interfaces in sci-fi and the real world, I cite this as a negative example of affective computing.

If you’re unfamiliar with it, affective computing largely deals with giving computers a sense of emotion or empathy for their users. In this case, of course Zorg doesn’t want to summon his elephantito from its adorable genetically modified slumber. He’s panicking. He wants help. The joke in the scene is largely about how the unfeeling technology on which Zorg relies is of little practical value in a crisis, but we know that a smarter design would have accounted for this case of panicked mashing.

If (a bunch of key chords are pressed rapidly in succession) {summon help}.
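That one-liner could be fleshed out as a sliding-window count of keypresses. A speculative sketch, with arbitrary thresholds (five presses inside two seconds):

```python
from collections import deque

class PanicDetector:
    """Flags panicked mashing: too many keypresses in too short a window."""

    def __init__(self, presses: int = 5, window_s: float = 2.0):
        self.presses = presses
        self.window_s = window_s
        self.times = deque()

    def key_pressed(self, t: float) -> bool:
        """Record a keypress at time t (seconds); True means summon help."""
        self.times.append(t)
        # Drop presses that have fallen out of the sliding window.
        while self.times and t - self.times[0] > self.window_s:
            self.times.popleft()
        return len(self.times) >= self.presses
```

Any mashing that clears the threshold summons help, no matter which codes were actually entered.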

Interaction designers should take care to learn from this fictional example that though some scenarios may be rare, they may be dire enough to demand design attention.


This poor Ouliman Akaptan is named Picasso, and was designed by Hélène Girard.

For a general analysis, I find the number pad to be the worst choice of input for this system. On the plus side, it’s useful for arbitrarily long combinatorial and chorded input. It’s for this reason the telephone network adopted this strategy to provide access to any one of its 10,000,000,000 nodes. (And that’s only with a ten-digit number.) Fine. If Zorg needs a phone pad for dialing numbers, then give him a phone. But for this desk interface, it burdens his long-term memory, forcing him to remember the codes for the things he wants. If he really has only around a dozen or so things to control, give him individual controls that are well grouped, distinguished, labeled, and mapped. Also, in taking this tack, someone in his service might have thought to give the vengeful, psychopathic industrialist an actual panic button.

Mondoshawan piloting


The Mondoshawan pilot grasps two handles. Each handle moves in a transverse plane (parallel to the floor), being attached to a base by two flat hinges. We only see this interface for a few seconds, but it seems very poorly mapped.

Here on Earth, a pilot primarily needs to specify pitch, roll, and thrust. She supplies this input through a control yoke and a throttle. Each action is clearly differentiated. Pitch is specified by pushing or pulling the yoke. Roll is specified by rolling the yoke like a steering wheel. Thrust is specified by pushing or pulling the throttle. It’s really rare that a pilot wanting to lift the plane will accidentally turn the yoke to the right.

But look at the Mondoshawan inputs. They can specify four basic variables, i.e., an X and a Z for each hand. Try as I might, I can’t elegantly make that fit the act of flying well. (Pipe up if I’m not seeing something obvious.) Even if roll, pitch, and thrust were each assigned to an axis arbitrarily, the pilot would end up having to use the same motion on different hands for different variables, and there would be one “extra” axis. Of course there are two other Mondoshawans visible in the ship, and perhaps between them they’re managing that extra axis of control somehow. With training and their “200,000 DNA memo groups,” the Mondoshawans could probably manage it, but it would spell trouble for us poor humans with our measly 40 and need for more direct mapping and control differentiation.


Four a day


After Korben’s alarm clock starts the music and lights the lights, it also drops his daily allotment of cigarettes into place inside vertical glass tubes in a small dispensary mounted on the wall. Each tube has a purple number printed across the top, reinforcing the limit. A robotic voice reminds him to have only “four. a. day.”


The dispensary is loaded with warnings to get him to quit. Across the top we read 4™ REFILLS. Just below that is a white imperative, QUIT SMOKING. To the right another legend reinforces the principles spoken aloud, 4™ A DAY. A legend across the bottom, written in glittery red capitals reminds him that, TO QUIT IS MY GOAL. Behind the glass tubes is something like a Surgeon General’s warning about the dangers of smoking.

Alien Astrometrics


When David is exploring the ancient alien navigation interfaces, he surveys a panel, and presses three buttons whose bulbous tops have the appearance of soft-boiled eggs. As he presses them in order, electronic clucks echo in the cavern. After a beat, one of the eggs flickers, and glows from an internal light. He presses this one, and a seat glides out for a user to sit in. He does so, and a glowing pollen volumetric projection of several aliens appears. The one before David takes a seat in the chair, which repositions itself in the semicircular indentation of the large circular table.


The material selection of the egg buttons could not be a better example of affordance. The part that’s meant to be touched looks soft and pliable, smooth and cool to the touch. The part that’s not meant to be touched looks rough, like immovable stone. At a glance, it’s clear what is interactive and what isn’t. Among the egg buttons there are some variations in orientation, size, and even surface texture. It is the bumpy-surfaced one, the first to draw David’s touch, that ultimately activates the seat.

The VP alien picks up and blows a few notes on a simple flute, which brings that seat’s interface fully to life. The eggs glow green and emit green glowing plasma arcs between certain of them. David is able to place his hand in the path of one of the arcs and change its shape as the plasma steers around him, but it does not appear to affect the display. The arcs themselves appear to be a status display, but not a control.

After the alien manipulates these controls for a bit, a massive, cyan volumetric projection appears and fills the chamber. It depicts a fluid node network mapped to the outside of a sphere. Other node network clouds appear floating everywhere in the room along with objects that look like old Bohr models of atoms, but with galaxies at their center. Within the sphere three-dimensional astronomical charts appear. Additionally huge rings appear and surround the main sphere, rotating slowly. After a few inputs from the VP alien at the interface, the whole display reconfigures, putting one of the small orbiting Bohr models at the center, illuminating emerald green lines that point to it and a faint sphere of emerald green lines that surround it. The total effect of this display is beautiful and spectacular, even for David, who is an unfeeling replicant cyborg.


At the center of the display, David observes that the green-highlighted sphere is the planet Earth. He reaches out towards it, and it falls to his hand. When it is within reach, he plucks it from its orbit, at which point the green highlights disappear with an electronic glitch sound. He marvels at it for a bit, turning it in his hands, looking at Africa. Then after he opens his hands, the VP Earth gently returns to its rightful position in the display, where it is once again highlighted with emerald, volumetric graphics.


Finally, in a blinding flash, the display suddenly quits, leaving David back in the darkness of the abandoned room, with the exception of the small Earth display, which is floating over a small pyramid-shaped protrusion before flickering away.

After the Earth fades, David notices the stasis chambers around the outside of the room. He realizes that what he has just seen (and interacted with) is a memory from one of the aliens still present.



Hilarious and insightful YouTube poster CinemaSins asks in the video “Everything Wrong with Prometheus in 4 Minutes or Less,” “How the f*ck is he holding the memory of a hologram?” Fair question, but not unanswerable. The critique only stands if you presume that the display must be passive and must play uninterrupted like a television show or movie. But it certainly doesn’t have to be that way.

Imagine if this is less like a YouTube video and more like playback through a game engine: a holodeck StarCraft. Of course it’s entirely possible to pause the action in the middle of playback and investigate parts of the display before pressing play again and letting it resume its course. But that playback is a live system. It would be possible to run it afresh from the paused point with changed parameters as well. This sort of interrupt-and-play model would be a fantastic learning tool for sensemaking of 4D information. Want to pause playback of the signing of the Magna Carta and pick up the document to read it? That’s a “learning moment,” and one that a system should take advantage of. I’d be surprised if—once such a display were possible—it didn’t become the norm.


The only thing I see that’s missing in the scene is a clear signal about the different state of the playback:

  1. As it happened
  2. Paused for investigation
  3. Playing with new parameters (if it was actually available)

David moves from 1 to 2, but the only change of state is the appearance and disappearance of the green highlight VP graphics around the Earth. This is a signal that could easily be missed, and wasn’t present at the start of the display. Better would be some global change, like a shift in color across the whole display, to indicate the different state. A separate signal might compare As it Happened with the results of Playing with new parameters, but that’s a speculative requirement of a speculative technology. Best to put it down for now and return to what this interface is: one of the most rich, lovely, and promising examples of sensemaking interactions seen on screen. (See what I did there?)
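For what it’s worth, those three states map onto a tiny state machine, with each state carrying a global tint for the whole display. A speculative sketch; the state names, colors, and transition rules are mine, not the film’s:

```python
from enum import Enum

class Playback(Enum):
    AS_HAPPENED = "neutral"       # original recording, untinted
    PAUSED = "amber"              # frozen for investigation
    NEW_PARAMETERS = "violet"     # re-running with changed inputs

# Legal transitions: parameters can only change from a pause.
TRANSITIONS = {
    Playback.AS_HAPPENED: {Playback.PAUSED},
    Playback.PAUSED: {Playback.AS_HAPPENED, Playback.NEW_PARAMETERS},
    Playback.NEW_PARAMETERS: {Playback.PAUSED},
}

def transition(current: Playback, target: Playback) -> Playback:
    """Move to a new playback state; the caller retints the whole
    display to target.value so the state change can't be missed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"cannot go from {current.name} to {target.name}")
    return target
```

The point of the tint is exactly the “global change” argued for above: the state is visible everywhere, not just in one easily missed highlight.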

For more about how VP might be more than a passive playback, see the lesson in Chapter 4 of Make It So, page 84, VP Systems Should Interpret, Not Just Report.


Early in the film, when Shaw sees the MedPod for the first time, she comments to Vickers that, “They only made a dozen of these.” As she caresses its interface in awe, a panel extends as the pod instructs her to “Please verbally state the nature of your injury.”


The MedPod is a device for automated, generalized surgical procedures, operable by the patient him- (or her-, kinda, see below) self.

When in the film Shaw realizes that she’s carrying an alien organism in her womb, she breaks free from crewmembers who want to contain her, and makes a staggering beeline for the MedPod.

Once there, she reaches for the extended touchscreen and presses the red EMERGENCY button. Audio output from the pod confirms her selection, “Emergency procedure initiated. Please verbally state the nature of your injury.” Shaw shouts, “I need cesarean!” The machine informs her verbally that, “Error. This MedPod is calibrated for male patients only. It does not offer the procedure you have requested. Please seek medical assistance else–”


I’ll pause the action here to address this. What sensors and actuators are so gender-specific? Why can’t it offer gender-neutral alternatives? Sure, some procedures might need anatomical knowledge of particularly gendered organs (say…emergency circumcision?), but given…

  • the massive amount of biological similarity between the sexes
  • the need for any medical device to deal with a high degree of biological variability in its subjects anyway
  • the fact that most procedures are gender-neutral

…this is a ridiculous interface plot device. If Dr. Shaw can issue a few simple system commands that work around this limitation (as she does in this very scene), then the machine could have just done without the stupid error message. (Yes, we get that it’s a mystery why Vickers would have her MedPod calibrated to a man, but really, that’s a throwaway clue.) Gender-specific procedures can’t take up so much room in memory that it was simpler to cut the potential lives it could save in half. You know, rather than outfit it with another hard drive.

Aside from the pointless “tension-building” wrong-gender plot point, there are still interface issues with this step. Why does she need to press the emergency button in the first place? The pod has a voice interface. Why can’t she just shout “Emergency!” or even better, “Help me!” Isn’t that more suited to an emergency situation? Why is a menu of procedures the default main screen? Shouldn’t it be a prompt to speak, and have the menu there for mute people or if silence is called for? And shouldn’t it provide a type-ahead control rather than a multi-facet selection list? OK, back to the action.

Desperate, Shaw presses a button that grants her manual control. She states “Surgery abdominal, penetrating injuries. Foreign body. Initiate.” The screen confirms these selections amongst options on screen. (They read “DIAGNOS, THERAP, SURGICAL, MED REC, SYS/MECH, and EMERGENCY”)

The pod then swings open saying, “Surgical procedure begins,” and tilting itself for easy access. Shaw injects herself with anesthetic and steps into the pod, which seals around her and returns to a horizontal position.

Why does Shaw need to speak in this stilted speech? In a panicked or medical emergency situation, proper computer syntax should be the last thing on a user’s mind. Let the patient shout the information however they need to, like “I’ve got an alien in my abdomen! I need it to be surgically removed now!” We know from the Sonic chapter that the use of natural language triggers an anthropomorphic sense in the user, which imposes some other design constraints to convey the system’s limitations, but in this case, the emergency trumps the needs of affordance subtleties.
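Here’s a speculative sketch of that voice-first triage: simple keyword spotting that routes distress words straight to the emergency flow, no menu or stilted syntax required. The keyword list is my invention:

```python
# Distress vocabulary that short-circuits the menu (my assumption).
EMERGENCY_KEYWORDS = {"emergency", "help", "dying", "bleeding"}

def route_utterance(utterance: str) -> str:
    """Route a spoken utterance; distress words bypass the menu entirely."""
    words = {w.strip(".,!?").lower() for w in utterance.split()}
    if words & EMERGENCY_KEYWORDS:
        return "EMERGENCY"        # straight to the emergency procedure
    return "MENU"                 # fall back to the facet menu / type-ahead
```

“Help me!” gets Shaw into the emergency flow immediately; everything else falls through to the existing menu.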

Once inside the pod, a transparent display on the inside states that, “EMERGENCY PROC INITIATED.” Shaw makes some touch selections, which runs a diagnostic scan along the length of her body. The terrifying results display for her to see, with the alien body differentiated in magenta to contrast her own tissue, displayed in cyan.



Shaw shouts, “Get it out!!” It says, “Initiating anesthetics” before spraying her abdomen with a bile-yellow local anesthetic. It then says, “Commence surgical procedure.” (A note for the grammar nerds here: Wouldn’t you expect a machine to maintain a single part of speech for consistency? The first, “Initiating…,” is a participle, while the second, “Commence,” is an imperative.) Then, using lasers, the MedPod cuts through tissue until it reaches the foreign body. Given that the lasers can cut organic matter, and that the xenomorph has acid for blood, you have to hand it to the precision of this device. One slip could have burned a hole right through her spine. Fortunately it has a feather-light touch. Reaching in with a speculum-like device, it removes the squid-like alien in its amniotic sac.

OK. Here I have to return to the whole “ManPod” thing. Wouldn’t a scan have shown that this was, in fact, a woman? Why wouldn’t it stop the procedure if it really couldn’t handle working on the fairer sex? Should it have paused to have her sign away insurance rights? Could it really mistake her womb for a stomach? Wouldn’t it, believing her to be a man, presume the whole womb to be a foreign body and try to perform a hysterectomy rather than a delicate cesarean? ManPod, indeed.


After removing the alien, the MedPod waits around 10 seconds, showing the creature to her and letting her yank its umbilical cord, before she presses a few controls. The MedPod then seals her up again with staples and opens the cover to let her sit up.

She gets off the table, rushes to the side of the MedPod, and places all five fingertips of her right hand on it, quickly twisting her hand clockwise. The interface changes to a red warning screen labeled “DECONTAMINATE.” She taps this to confirm and shouts, “Come on!” (Her vocal instruction does not feel like a formal part of the procedure and the machine does not respond differently.) To decontaminate, the pod seals up and a white mist fills the space.

OK. Since this is a MedPod, and it has something called a decontamination procedure, shouldn’t it actually test to see whether the decontamination worked? The user here has enacted emergency decontamination procedures, so it’s safe to say that this is a plague-level contagion. That doesn’t say to me: Spray it with a can of Raid and hope for the best. It says, “Kill it with fire.” We just saw, 10 seconds ago, that the MedPod can do a detailed, alien-detecting scan of its contents, so why on LV-223 would it not check to see if the kill-it-now-for-God’s-sake procedure had actually worked, and warn everyone within earshot that it hadn’t? Because someone needs to take additional measures to protect the ship, and take them, stat. But no, MedPod tucks the contamination under a white misty blanket, smiles, waves, and says, “OK, that’s taken care of! Thank you! Good day! Move along!”

For all of the goofiness that is this device, I’ll commend it for two things. The first is for pushing forward the notion of automated medicine. Yes, in this day and age, it’s kind of terrifying to imagine devices handling something as vital as life-saving surgery, but people in the future will likely find it terrifying that today we’d rather trust an error-prone, bull-in-a-china-shop human with the task. And, after all, the characters have entrusted their lives to an android while they were in hypersleep for two years, so clearly that’s a thing they do.

Second, the gestural control to access the decontamination is well considered. It is a large gesture, requiring no great finesse on the part of the operator to find and press a sequence of keys, and one that is easy to execute quickly and in a panic. I’m not at all sure what percentage of procedures need the back-up safety of a kill-everything-inside mode, but presuming one is ever needed, this is a fine gesture to initiate that procedure. In fact, it could have been used in other interfaces around the ship, as we’ll see later with the escape pod interface.

I have the sense that in the original script, Shaw had to do what only a few very bad-ass people have been willing to do: perform life-saving surgery on themselves in the direst circumstances. Yes, it’s a bit of a stretch since she’s primarily an anthropologist and astronomer in the story, but give a girl a scalpel, hardcore anesthetics, and an alien embryo, and I’m sure she’ll figure out what to do. But pushing this bad-assery off to an automated device, loaded with constraints, ruins the moment and changes the scene from potentially awesome to just awful.

Given the inexplicable man-only settings, requiring a desperate patient to recall FORTRAN-esque syntax for spoken instructions, and the failure to provide any feedback about the destruction of an extinction-level pathogen, we must admit that the MedPod belongs squarely in the realm of goofy narrative technology and nowhere near the real world as a model of good interaction design.

The Other Users

There is another set of users of the Global Sacrifice System who bear a bit of consideration, and they are the Old Ones. They are users in the sense that this is an agreement into which they’ve entered with humanity, in order to get a continuing IV drip of (to them) pleasurable abject suffering and death. This isn’t an imposed frame on the story. The ritual words spoken by Sitterson are not those of a zookeeper managing a downed animal’s Ketamine. They’re the words of a supplicant.

As long as the organization keeps the sacrifices coming, providing the tasty intravenous drip of human suffering, the world is allowed to continue as it is for another year. With this in mind, we can analyze how the system works for those users, i.e. the Old Ones.

The outputs that the Old Ones read from the system are unavailable to us, but we can tell they are fairly precise. There are allusions midway through the film that the titan below the cabin is somehow “watching” the sex scene. (Though that ambiguous line could refer to the ravenous horror-movie audience as well.) It can also somehow sense the suffering of the victims, and whether the details of the ritual are being carried out in the proper order. The way we know this is that when Marty’s “death” vial is inappropriately shattered, the titan causes an earthquake that rocks the complex and the cabin.

This, then, is the input that the Old Ones have: shaking the ground in their displeasure. But as a signal, it’s far too vague. Hadley and Sitterson feel the quake, but disregard it. Hadley shouts, “They must be getting excited downstairs!” and Sitterson replies somewhat jadedly, “Greatest show on earth.” Had they had any inkling that they had actually messed up (even absent knowledge of what precisely was messed up), they would have pulled out all the stops to find the error and correct it.

So while we as an audience are all in favor of the resetting of a world that has accepted the annual ritual sacrifice of young people, neither party to the agreement—the Old Ones nor the humans—has put in place a system of communication precise enough to keep the agreement going. And that’s a failing of the interface.

A disaster-avoidance service

The key system in The Cabin in the Woods is a public service, and all technological components can be understood as part of this service. It is, of course, not a typical consumer service for several reasons. Like the CIA, FBI, and CDC, the people who most benefit from this service—humanity at large—are barely aware of it, if at all. These protective services only work by forestalling a negative event like a terrorist action or plague. Unlike these real-world threats, if Control fails in their duties, there is no crisis management as a next step. There’s only the world ending. Additionally, it is not typical in that it is an ancient service that has built itself up over ages around a mystical core.

So who are the users of the service? The victims are not. They are intentionally kept in the dark, and it is seen as a crisis when Marty learns the truth.

Given that interaction design requires awareness of the service in question, as well as inputs and outputs to steer variables toward a goal, it stands to reason that the organization in the complex comprises the primary users. Even more particularly, it is Sitterson and Hadley, the two “stage managers” in charge of the control room for the event, who are the real users. Understanding their goals, we can begin an analysis. Fittingly, it’s complex:

  • Forestall the end of the world…
  • by causing the (non-Virgin) victims to suffer and die before Dana (who represents the Virgin archetype)…
  • at the hand of a Horrible Monster selected by the victims themselves…
  • marking each successful sacrifice with a blood ritual…
  • while keeping the victims unaware of the behind-the-scenes truth.

Sitterson and Hadley dance in the control room.

Part of a larger network with similar goals

This operation is not the only one underway. There are at least six others, each working with its particular archetypes and rituals around the world: Berlin, Kyoto, Rangoon, Stockholm, Buenos Aires, and Madrid.

To monitor these other scenarios, there are two banks of CRT monitors high up on the back wall, each monitor dedicated to a different scenario. Notably, these are out of the stage managers’ line of sight when their focus is on their own scenario.

The CRT monitors display other scenarios around the world.

The digital screens on the main console are much more malleable, however, and can be switched to display any of the analog video feeds if special attention needs to be paid to one.

The amount of information that the stage managers need about any particular scenario is simple: the current state of an ongoing scenario, and whether a concluded one has succeeded or failed. We don’t see any scenario succeed in this movie, so we can’t evaluate that output signal. Instead, they all fail. When they fail, a final image is displayed on the CRT with a blinking red legend “FAIL” superimposed across it, so it’s clear when you look at the screen (and catch it in the “on” part of the blink) what its status is.

Sitterson watches the Kyoto scenario fail.

Hadley sees that other scenarios have all failed.

One critique of this simple pass-fail signal is that it might be entirely missed if the stage managers’ attention were riveted forward, on problems in their own scenario. Another design option would be to alert Sitterson and Hadley to the moment of change with a signal in their peripheral attention, like a flash or a brief buzz. But signaling a change of state might not be enough. The new state, i.e., 4 of 7 failed, ought to be persistent in their field of vision as they continue their work, if the signal is considered an important motivator.

The design of alternate, persistent signals depends on rules we do not have access to. Are more successful scenarios somehow better? Or is it a simple OR-chain, with just one success meaning success overall? Presuming it’s the latter, strips of lighting around the big screens could become increasingly bright red, for instance, or a seven-sided figure mounted around the control room could have wedges turn red as those scenarios failed. Such environmental signals would make the information glanceable, and remind the stage managers of the increasing importance of their own scenario. These signals could turn green at the first success as well, letting them know that the pressure is off and that what remains of their own scenario is to be run as a drill.
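Presuming the OR-chain rule, the ambient signal reduces to a simple aggregation over scenario states. A speculative sketch; the status labels and color mapping are mine:

```python
def ambient_signal(statuses: dict) -> tuple:
    """Map scenario statuses ('running', 'success', or 'fail') to a
    room-wide signal: a color plus an intensity from 0.0 to 1.0.

    One success anywhere means the world is safe: calm, full green.
    Otherwise the red grows brighter as more scenarios fail.
    """
    values = list(statuses.values())
    if "success" in values:
        return ("green", 1.0)
    fails = sum(1 for s in values if s == "fail")
    return ("red", fails / len(statuses))
```

Driving the lighting strips (or the seven-wedge figure) from this one function keeps the environmental signal consistent with the CRT statuses.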

There is a Prisoner’s Dilemma argument to be made that stage managers should not have the information about the other scenarios at all, in order to keep each operation running at peak efficiency, but this would not have served the narrative as well.

Worst. Self-destruction mechanism. Ever.

When Morbius has taken a mortal wound from his monster and destroyed his “evil self,” he realizes that mankind is not ready for the power available to him through the Krell technology. Without explaining what he’s doing, Morbius instructs Adams to “turn that disc.” Adams mindlessly obeys, and a plunger emerges from the floor near him.

Adams initiates the irreversible Krell self destruct mechanism.

Morbius commands Adams, “The switch, throw it.” Again, Adams does as he’s told, and the plunger clicks into place as a red ring (the same red ring below the educator lever) illuminates. Then, and only then, Morbius explains that, “In 24 hours you must be 100 million miles out in space. The Krell furnaces’ chain reaction…they cannot be reversed.” You’d think that with that kind of finality, he might have bothered to explain what was going to happen, or inquire whether the crew could make it out that far in that amount of time, but you know, science knows best.

The Krell self-destruct warning signal is a silent, blinking red ring around the plunger.

Adding insult to injury, the complete warning system for this massive, solar-system-sized explosion consists of, in total, a silently pulsing red ring around the base of a plunger located in the heart of a hidden underground city behind a series of impenetrable doors sealed with combination locks. There is no klaxon, no lights seen elsewhere to indicate that your star system is about to go boom. I guess if you didn’’t know, you didn’’t really need to know.