J.D.E.M. LEVEL 5

The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube that is described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.

The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.

Avengers-cubemonitoring-07
Fury walks past the dais they erected just because.

The housing & dais

The harness consists of a large circular housing that holds the cube and exposes one face of it toward a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person, as it seems they did, then the whole destroying-swaths-of-New-York-City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design. Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and for holding it at a particular orientation, which plays into the other component of the harness—the monitor.

Avengers-cubemonitoring-03

The monitor

In the underground laboratory, an (unnamed?) technician warns lead scientist Selvig that, “it’s spiking again,” and the camera pans down to this monitoring interface.

JDEM

Header

The header is a static barcode followed by the initialism J.D.E.M. along with its full name, the Joint Dark Energy Mission. (Sounds super cool and sci-fi, right? Turns out it is a real program between NASA and the US DOE.) Another label across the top identifies the screen as LEVEL 5 and that it belongs to PROJECT PEGASUS and NASA.

3D map

A main display shows a 3D wireframe of the tesseract, with color-coded nebula-like shapes within the cube. The wireframe (and most of the text on screen) is a bright cyan, with internal features progressing in color from that cyan through white to a blood red, all the way to lens flares near the most active areas in the cube. The color choices make for a quick read of what is “cool” and what is “hot,” so they’re effective for being immediate, but if the lens flares are designed into the system to indicate peak activity, they’re a bad choice, since they obscure other data in the display.
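For readers who want to see how little it takes, here is a minimal sketch of that cool-to-hot ramp as a two-segment linear interpolation. The endpoint RGB values are my guesses at the screen colors, not anything specified in the film.

```python
def energy_color(t: float) -> tuple:
    """Map a normalized energy level t in [0, 1] to an RGB color,
    interpolating cyan -> white -> blood red, roughly as on the
    JDEM display. Endpoint colors are assumptions."""
    t = min(max(t, 0.0), 1.0)
    cyan, white, red = (0, 255, 255), (255, 255, 255), (178, 16, 16)
    if t < 0.5:  # cool half: cyan toward white
        a, b, f = cyan, white, t / 0.5
    else:        # hot half: white toward red
        a, b, f = white, red, (t - 0.5) / 0.5
    return tuple(round(x + (y - x) * f) for x, y in zip(a, b))
```

The one design note worth carrying over: the ramp passes through white, so mid-range values read as neutral before anything starts screaming red.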

Note that the wireframe of the cube is also rotating slightly, which is very helpful for a user trying to understand 3D information on a 2D screen. The mapping might be even better, with less cognitive load, if the display were a volumetric projection. (VPs exist within the Marvel Cinematic Universe (MCU), but so far I believe we’ve only ever seen them in Tony Stark’s possession, so perhaps he has not released them to the outside world.) Hopefully in its rotation on this monitor it does not spin through a full 360°, as the regularity of the cube would make it difficult to understand where an internal anomaly might exist in the real thing. Hopefully the wireframe only wavers back and forth within a few degrees, and is oriented roughly the same way an observer glancing at the real thing would see it in the housing, to allow for instant mapping of problem areas.

Avengers-cubemonitoring-01

Warning

Just to the left of the 3D map is a data monitoring panel. Its top label blinks a red WARNING CRITICAL ENERGY LEVELS and a percentage readout. The panel also features a key whose colors match those of the map. (As it should.) Hopefully a microinteraction allows a user to touch any part of the map, freeze the rotation, and get the percentage details of the touched point. A detail box wavers its vertical position along the key to provide a user a quick assessment of its value, and also contains a percentage readout for precision. Judging by the position of the box and the readout, it looks like the 100% mark is about halfway up the screen. Hopefully the upper part of the scale is logarithmic to accommodate massive surges in values.
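The hoped-for scale, linear up to 100% and logarithmic above it to absorb surges, is easy to express. This is purely a sketch of that idea; the 100%-at-halfway position and the 10,000% ceiling are assumptions read off the screen, not canon.

```python
import math

def key_position(pct: float, halfway: float = 0.5, max_pct: float = 10000.0) -> float:
    """Return the vertical position (0 = bottom, 1 = top) of a reading
    on the key. 0-100% fills the lower half linearly; surges above 100%
    are compressed logarithmically into the upper half."""
    if pct <= 100.0:
        return halfway * (pct / 100.0)
    # log scale from 100% up to max_pct fills the remaining space
    return halfway + (1 - halfway) * (
        math.log10(pct / 100.0) / math.log10(max_pct / 100.0)
    )
```

With these numbers a reading of 1,000% sits three-quarters of the way up the key instead of off the top of the screen, which is the whole point of the compression.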

Additional elements of the display include several scrolling waveforms and text boxes with inscrutable data and labels. It’s easy to imagine these as useful (say total energy values for specific electromagnetic frequencies) but they’re difficult to read, so difficult to formally evaluate.

All told, a nice display (per some assumptions) for monitoring what’s happening with the cube.

Now if only they had applied that solid design thinking to that dais vs. cage problem.

Avengers-cubemonitoring-04

Escape pod and insertion windows

vlcsnap-2014-12-09-21h15m14s193

When the Rodger Young is destroyed by fire from the Plasma Bugs on Planet P, Ibanez and Barcalow luckily find a functional escape pod and jettison. Though this pod’s interface stays off camera for almost the whole scene, the pod is knocked and buffeted by collisions in the debris cloud outside the ship, and in one jolt we see the interface for a fraction of a second. If it looks familiar, it’s not because of anything else in Starship Troopers.

vlcsnap-2014-12-09-21h16m18s69
The interface features a red wireframe image of the planet below, bounded by a screen-green outline, oriented to match the planet’s appearance out the viewport. Overlaid on this is a set of screen-green rectangles, twisting as they extend in space (and time) toward the planet. These convey the ideal path for the ship to take as it approaches the planet.

I’ve looked through all the screen grabs I’ve made for this movie, and there are no other twisting-rectangle interfaces that I can find. (There’s this, but it’s a status indicator.) It does, however, bear an uncanny resemblance to an interface from a different movie made 18 years earlier: Alien. Compare the shot above to the shot below, which is the interface Ash uses to pilot the dropship from the Nostromo to LV-426.

Alien-071

It’s certainly not the same interface, the most obvious difference being the blue chrome and data, absent from Ibanez’ screen. But the wireframe planet and twisting rectangles of Starship Troopers are so reminiscent of Alien that it must be at least an homage.

Planet P, we have a problem

Whether homage, theft, or coincidence, each of these has a problem as far as the interaction design goes. The rectangles certainly show the pilot an ideal path in a way that can be instantly understood even by us non-pilots. At a glance we understand that Ibanez should roll her pod to the right. Ash will need to roll his to the left. But how is the pilot actually doing against this ideal at the moment? How is she trending? It’s as if they were driving a car and being told “stay in the center of the middle lane” without being told how close to either edge they were actually driving.

Rectangle to rectangle?

The system could use the current alignment of the frame of the screen itself against the foremost rectangle in the graphic, but I don’t think that’s what’s happening. The rectangles don’t match the ratio of the frame. Additionally, the foremost rectangle is not given any highlight to draw the pilot’s attention to it as the next task, which you’d expect. Finally, that level of abstraction wouldn’t fit the narrative as well, which needs the purpose of the interface to be immediately clear.

Show me me

Ash may see some of that comparison-to-ideal information in blue, but the edge of the screen is the wrong place for it. His attention would be split among three loci of attention: the viewport, the graphic display, and the text display. That’s too many. You want users to see information first, and read it secondarily if they need more detail. If we wanted a single locus of attention, you could put ideal, current state, and trends all in a heads-up display augmenting the viewport (as I recommended for the Rodger Young earlier).

If that broke the diegesis too much, you could at least add to the screen interface an avatar of the ship, in a third-person overhead view. That would give the pilot an immediate sense of where their ship currently is in relation to the ideal. A projection line could show how the ship is trending into the future, highlighting whether things are on a good or not-so-good path. Numerical details could augment these overlays.

By showing the pilot themselves in the interface—like the common 3rd person view in modern racing video games—pilots would not just have the ideal path described, but the information they need to keep their vessels on track.
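The missing feedback is, at bottom, a cross-track computation: how far off the corridor centerline am I now, and how far off will I be if I keep this velocity? Here is a hedged sketch of that math; the function name and the idea of representing the corridor as a center point plus a unit axis are my own simplifications, not anything shown on screen.

```python
def path_guidance(ship_pos, ship_vel, corridor_center, corridor_axis, lookahead=5.0):
    """Return (current, projected) lateral distances from the ideal
    corridor's centerline: the 'how am I doing' and 'how am I trending'
    that the twisting-rectangle display never shows. Positions and
    vectors are 3-tuples; corridor_axis points along the corridor."""
    def sub(a, b): return tuple(x - y for x, y in zip(a, b))
    def add(a, b): return tuple(x + y for x, y in zip(a, b))
    def scale(a, s): return tuple(x * s for x in a)
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def norm(a): return dot(a, a) ** 0.5

    axis = scale(corridor_axis, 1.0 / norm(corridor_axis))

    def lateral(p):
        rel = sub(p, corridor_center)
        # strip the along-corridor component; what's left is the offset
        return norm(sub(rel, scale(axis, dot(rel, axis))))

    future = add(ship_pos, scale(ship_vel, lookahead))
    return lateral(ship_pos), lateral(future)
```

Render the first number as the avatar’s offset and the second as the projection line’s endpoint, and the pilot can see at a glance whether she is converging on the corridor or drifting out of it.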

vlcsnap-2014-12-09-21h15m17s229

Rodger Young combat interfaces

The interfaces aboard the Rodger Young in combat are hard to take seriously. The captain’s interface, for instance, features arrays of wireframe spheres that zoom from the bottom of the screen across horizontal lines to become blinking green squares. The shapes bear only the vaguest resemblance to the plasma bolts, but don’t match what we see out the viewscreen or the general behavior of the bolts at all. But the ridiculousness doesn’t end there.

Boomdots_8fps

There’s also Barcalow’s screen, which has an amber graticule of the planet below them, and screen-green rounded rectangles falling in soft arcs down the screen. These rectangles are falling far faster than the dropships (the only things we see descending to the surface), and are falling in semi-random vectors across nearly half the arc of the planet.

Fallingsquares_8fps

Ibanez’ interface might almost make sense, since it shows the same spinning graticule of the planet below (though at a completely different orientation), with an overlaid shrinking rectangle. Maybe that’s the corridor of her optimal flight path? Maybe, but it’s missing any 3D cues that might actually help with that task. Oh, but look! It also has the familiar spinning pizza graphic in the upper right.

COA

Granted, the dots might indicate plasma bolts from the bugs, and the falling rectangles might indicate the dropships trying to make their way to the surface, and the rectangle might indeed indicate the corridor of optimal flight path, but why on Earth is this information on separate screens being used by separate crew?

Imagine playing a videogame distributed among three players where one sees the goal, another sees the obstacles, and a third sees the other players. Sure the chaos of shouting instructions and information at each other might be fun, but you’d have little hope of success. Given these terrible screens, the main surprise is that anyone in the Federation survived the trip to the Bug Planet at all.


Addendum. I’d failed to notice these flailing bar charts, which attract attention but whose type is too small to be read. They only add to the pointlessness of the interfaces in this scene.

overhead

Tattoo-o-matic

StarshipTroopers-Tattoo-01

After he is spurned by Carmen and her new beau in the station, Rico realizes that he belongs in the infantry and not the fleet where Carmen will be working. So, to cement this new identity, Rico decides to give in and join his fellow roughnecks in getting matching tattoos. The tattoos show a skull over a shield and the words “Death from Above”. (Incidentally, Death From Above is the name of the documentary detailing the making of the film, as well as the title of a hilarious progressive metal video by the band Holy Light of Demons. You should totally check it out.)

To get the tattoo, Johnny lies back in a chair, and a technician of some sort works briefly at waist-high controls beneath a nearby screen. Then the technician walks away while Johnny’s tattoo is burned with blue lasers onto his arm.

A man seated in a futuristic setting is receiving a tattoo from a robotic arm. The tattoo features the word 'DEATH' surrounded by a star emblem. A computer screen in the background displays various graphs and a logo related to the tattooing process.

At the upper left corner of the screen a display reads SELECTED above the image being burned into Rico’s arm. (With white indicating no color.) Beneath it, a square divided into four quadrants is filled with unintelligible scrolling numbers, above the words AUTOMATIC SEQUENCE CONTROL. Down the center of the screen, beneath the word LASERS, is a column of boxes showing sine waves and their corresponding frequencies, from the shorter blue wavelengths moving down to yellow, red, and finally a double-lined white waveform. At the right of the screen is a large screen-green rectangular grid on which the selected pattern wipes in from top to bottom as the corners blink in red and yellow.

There are two main problems that are apparent in the scene.

1. We don’t need the technician

What does the technician do? Essentially, he presses a button and then walks away. Even as Johnny’s friends rush him out of the room in celebration, no one stops them to pay, which seems to indicate that everything has already been taken care of before he sits in the chair. Also, we notice that Johnny’s arm has not been strapped down, wrapped in healing bandages, or secured in any way. He stays relatively still throughout the procedure, but it seems a safe assumption that not all customers will be stoic soldier types who are able to sit still while their arm is literally charred by lasers in front of their own eyes. The machine must be able to compensate for movement, either by adjusting the lasers or shutting off completely, so, again, no technician is necessary. Also, when a fellow roughneck pours liquor over Johnny’s arm while his skin is in the process of being vaporized by lasers, the liquor doesn’t ignite in a horrifying fireball as we might expect, indicating that the lasers must have scaled back their intensity just in the nick of time—this is a pretty context-aware system with a lot of built-in error correction. Maybe the technician is there for insurance purposes, but given what we see in the scene, he serves no real purpose. Assuming the Death from Above design was already in the machine, Johnny could have completed the entire transaction himself from start to liquor-soaked finish.

A group of four individuals showcasing matching tattoos on their arms that read 'DEATH FROM ABOVE', with decorative graphic elements around the text.

2. The screen doesn’t make sense

As the image of the selected design scans into view on the right side of the screen, we can see that there is no exact correlation between the parts of the image on screen and the parts of the image on skin. The wireframe wipes in from top to bottom. The tattoo is finishing up in the middle. The tattoo is already 90% complete when the animation begins. The blinking numbers, the wiggling sine waves: none of it means anything or is useful. So all told, the information on the screen initially appears complex, but given the total automation of the system it’s actually quite simple: here’s what’s happening, and here’s the progress.

But maybe it’s not for the technician

Maybe it’s not for the technician at all. You can imagine that while having your skin seared by painful, painful lasers, all that fuigetry would be a welcome distraction, and a progress bar would be a welcome reassurance that it won’t last forever. With this in mind, the main problem with the screen is that it should be facing the customer, who is the real user.

Gravitic distortion

As Ibanez and Barcalow are juuuuuust about to start a slurpy on-duty make-out session, their attention is drawn by a coffee mug whose contents are listing to one side.

coffee

Ibanez explains helpfully, “There’s a gravity field out there.” Barcalow orders her to “Run a scan!” She turns to a screen and does something to run the scan, and Barcalow confirms that “Sensors [are] on.” She watches an amber-colored graticule distort as if weighed down by an increasingly heavy ball, while a Big Purple Text Label blinks GRAVITIC DISTORTION. Two numbers increment speedily at the bottom-right edge of the screen, rolling over at 1000. “There,” she says.

gravity-field

So many plot questions

  • What kind of coffee cups can withstand enough gravity to tip the contents 45 degrees but remain themselves perfectly still and upright?
  • Why did they need the coffee cup? Wouldn’t their inner ear have told them the same thing faster?
  • Why is the screen in the background of the coffee cup still blinking OPTIMAL COURSE?

Of course we have to put these aside in favor of the interaction design questions.

First the “workflow”

Why on earth would they need to turn on sensors? Aren’t the sensors only useful when they’re sensing? If you have a sense that something is wrong, turning on the sensors only confirms what you already know. This is still more of that pesky stoic guru metaphor. This should have been an active academy that warned them—loudly—the moment nearby gravity started looking weird.

The visualization is not bad…

Let’s pause the criticism for one moment to give credit where credit is due. The grid vortex is a fast and reliable way to illustrate the invisible problem that they’re facing and telegraph increasing danger. Warped graticules have been a staple of depicting spacetime curvature since Disney’s 1979 movie The Black Hole.

The gravity well as depicted in The Black Hole (1979).


This is also the same technique that scientists use to depict the same phenomenon, so it’s got some street cred, too.


gravity5b

The same thing can be shown in 3D, but it’s visually noisier. Moreover, the 2D version builds on our sense of basic physics, as we can easily imagine what would happen to anything nearing the depression. So, it’s mostly the right display.
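For the curious, the warped-graticule effect itself is cheap to generate: displace each grid point “downward” with depth falling off by distance from the mass. This is a purely illustrative sketch; the constants are tuned for looks, not physics, and the function name is my own.

```python
def warp_grid(n=21, extent=1.0, mass=0.08, eps=0.05):
    """Generate the classic gravity-well graticule: an n-by-n grid of
    (x, y, z) points where z is a depression that deepens toward the
    central mass. eps keeps the center depth finite."""
    pts = []
    step = 2 * extent / (n - 1)
    for i in range(n):
        for j in range(n):
            x = -extent + i * step
            y = -extent + j * step
            r = (x * x + y * y) ** 0.5
            z = -mass / (r + eps)  # depression deepens near the center
            pts.append((x, y, z))
    return pts
```

Feed the points to any wireframe renderer and you get the Black Hole look; animate `mass` upward and you get the increasing-danger telegraphing the scene uses.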

…But then, the interaction

Despite the immediacy of the display, there’s a major problem. Sure, this interface conveys impending doom, but it doesn’t convey any useful information to help them know where the threat is coming from or what to do about it after they know that doom impends. (Plus, they had to turn it on, and all it tells them is, “Yep, looks pretty bad out there.”) To design this right, they need a sense of the 3D vector of the threat as compared to their own vector, and what the best available options are.

Better: Augmented reality to telegraph the invisible threat

Fortunately, we already have the medium and channel for Ibanez and Barcalow to immediately understand the 3D direction of the threat in the real world and, most importantly, in relation to the ship’s trajectory and orientation, since that’s the tool they have on hand to avoid the threat. We’ve already seen that volumetric projection is a thing in this world, so the ship should display the VP just outside the ship’s viewports. The animation can illustrate the threat coming from the outside on the outside, and fade once the threat gets within visible range. In this way there’s no 2D-to-3D interpretation. It’s direct. Where’s the unexpected gravitic distortion? Look out the window. There. There is the unexpected gravitic distortion. The HUD display would need to be aimed at the navigator’s seat, but for very distant objects, e.g. out of visible light range, the parallax shift wouldn’t be problematic for other locations on the bridge. You’d also have to manage the scenario where the threat comes from a direction not visible out the window (like, say, through the floor), but you can just shift the VP to the interior for that.

Including a screen comp by Deviant artist scrollsofaryavart.


Next, you could use VP inside the ship to show the two paths and point of collision, as well as the best predicted paths (there’s that useful active academy metaphor again). Then we can let Ibanez trust her own instincts as she presses the manual override to steer the ship clear. I don’t have the time to comp up an internal VP right now, so I’ll rely on your imagination for this particular part of a much better solution than what we see on screen.

Course optimal, the Stoic Guru, and the Active Academy

In Starship Troopers, after Ibanez explains that the new course she plotted for the Rodger Young (without oversight, explicit approval, or notification to superiors) is “more efficient this way,” Barcalow walks to the navigator’s chair, presses a few buttons, and the computer responds with a blinking-red Big Text Label reading “COURSE OPTIMAL” and a spinning graphic of two intersecting grids.

STARSHIP_TROOPERS_Course-Optimal

Yep, that’s enough for a screed, one addressed first to sci-fi writers.

A plea to sci-fi screenwriters: Change your mental model

Think about this for a minute. In the Starship Troopers universe, Barcalow can press a button to ask the computer to run some function to determine if a course is good (I’ll discuss “good” vs. “optimal” below). But if it could do that, why would it wait for the navigator to ask it after each and every possible course? Computers are built for this kind of repetition. It should not wait to be asked. It should just do it. This interaction raises the difference between two mental models of interacting with a computer: the Stoic Guru and the Active Academy.

A-writer

Stoic Guru vs. Active Academy

This movie was written when computation cycles may have seemed to be a scarce resource. (Around 1997 only IBM could afford a computer and program combination to outthink Kasparov.) Even if computation cycles were scarce, navigating the ship safely would be the second most important non-combat function it could possibly do, losing out only to safekeeping its inhabitants. So I can’t see an excuse for the stoic-guru-on-the-hill model of interaction here. In this model, the guru speaks great truth, but only when asked a direct question. Otherwise it sits silently, contemplating whatever it is gurus contemplate, stoically. Computers might have started that way in the early part of the last century, but there’s no reason they should work that way today, much less by the time we’re battling space bugs between galaxies.

A better model for thinking about interaction with these kinds of problems is as an active academy, where a group of learned professors is continually working on difficult questions. For a new problem—like “which of the infinite number of possible courses from point A to point B is optimal?”—they would first discuss it among themselves and provide an educated guess with caveats, then continue to work on the problem afterward, continuously, contacting the querent when they found a better answer or when new information came in that changed the answer. (As a metaphor for agentive technologies, the active academy has some conceptual problems, but it’s good enough for purposes of this article.)

guruacademy

Consider this model as you write scenes. Nowadays computation is rarely a scarce resource in your audience’s lives. Most processors sit idle, not living up to their full potential. Pretending computation is scarce breaks believability. If eBay can continuously keep looking on my behalf for a great deal on a Ted Baker shirt, the ship’s computer can keep looking for optimal courses on the mission’s behalf.
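The active-academy model is, in software terms, just a background optimization loop that interrupts only on meaningful improvement. Here is a minimal sketch; `propose`, `cost`, and `notify` are hypothetical stand-ins for the ship’s actual course generator, course evaluator, and crew-alert channel.

```python
import random

def active_academy(initial_course, propose, cost, notify, iterations=500):
    """Keep searching for better courses instead of waiting to be asked.
    propose(best) yields a candidate course, cost(course) scores it
    (lower is better), and notify(course, score) interrupts the
    navigator -- but only when a candidate beats the best by >1%."""
    best, best_cost = initial_course, cost(initial_course)
    for _ in range(iterations):
        candidate = propose(best)
        c = cost(candidate)
        if c < best_cost * 0.99:  # only interrupt for a real improvement
            best, best_cost = candidate, c
            notify(best, best_cost)
    return best, best_cost
```

The 1% threshold is the interaction-design decision hiding in the code: an academy that pings the navigator for every microscopic gain is as useless as a guru that never speaks.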

In this particular scene, the stoic guru has for some reason neglected up to this point to provide a crucial piece of information: the optimal path. Why was it holding this information back, if it knew it? How does it know it now? “Well,” I imagine Barcalow saying as he slaps the side of the monitor, “why didn’t you tell me that the first time I asked you to navigate?” I suspect that, if the scene had been written with the active academy in mind, it would not have ended up in the stupid COURSE OPTIMAL zone.

Optimal vs. more optimal than

Part of the believability problem of this particular case may come from the word “optimal,” since that word implies the best out of all possible choices.

But if it’s a stoic guru, it wouldn’t know from optimal. It would just know what you’d asked of it or provided it in the past. It would only know relative optimalness among the set of courses it had access to. If this system worked that way, the screen text should read something like “34% more optimal than previous course” or “Most optimal of supplied courses.” Either text could show some fuigetry below the Big Text Label conveying a comparison of the parameters compared. But of course that text conveys how embarrassingly limited this would be for a computer. It shouldn’t wait for supplied courses.

If it’s an active academy model, this scene would work differently. It would have either shown him optimal long ago, or shown him that it’s still working on the problem and that Ibanez’ course is the “Most optimal found.” Neither is entirely satisfying for purposes of the story.

Hang-on-idea

How could this scene have gone?

We need a quick beat here to show that in fact, Ibanez is not just some cocky upstart. She really knows what’s up. An appeal to authority is a quick way to do it, but then you have to provide some reason the authority—in this case the computer—hasn’t provided that answer already.

A bigger problem than Starship Troopers

This is a perennial problem for sci-fi, and one that’s becoming more pressing as technology gets more and more powerful. Heroes need to be heroic. But how can they be heroic if computers can and do heroic things for them? What’s the hero doing? Being a heroic babysitter to a vastly powerful force? This will ultimately culminate once we get to the questions raised in Her about actual artificial intelligence.

Fortunately the navigator is not a full-blown artificial intelligence. It’s something less than A.I.: an agentive interface, and that gives us our answer. Agentive algorithms can only process what they know, and Ibanez could have been working with an algorithm the computer didn’t know about. She’s just wrapped up school, so maybe it’s something she developed or co-developed there:

Barcalow turns to the nav computer and sees a label: “Custom Course: 34% more efficient than models.”

BARCALOW: Um…OK…How did you find a better course than the computer could?

IBANEZ: My grad project nailed the formula for gravity assist through trinary star systems. It hasn’t been published yet.

BAM. She sounds like a badass and the computer doesn’t sound like a character in a cheap sitcom.

So, writers, hopefully that model will help you not make the mistake of penning your computers to be stoic gurus. Next up, we’ll discuss this same short scene with more of a focus on interaction designers.

Little boxes on the interface

StarshipT-undocking01

After the reckless undocking, we see Ibanez using an interface of…an indeterminate nature.

Through the front viewport Ibanez can see the cables and some small portion of the docking station. That’s not enough for her backup maneuver. To help her with that, she uses the display in front of her…or at least I think she does.

Undocking_stabilization

The display is a yellow wireframe box that moves “backwards” as the vessel moves backwards. It’s almost as if the screen displayed a giant wireframe airduct through which they moved. That might be useful for understanding the vessel’s movement when visual data is scarce, such as navigating empty space with nothing but distant stars for reckoning. But here she has more than enough visual cues to understand the motion of the ship: if the massive space dock were not enough, there’s that giant moon thing just beyond. So I think understanding the vessel’s basic motion in space isn’t the priority while undocking. More important is helping her understand the position of collision threats, and I cannot explain how this interface does that in any but the feeblest of ways.

If you watch the motion of the screen, it stays perfectly still even as you can see the vessel moving and turning. (In that animated gif I steadied the camera motion.) So what’s it describing? The ideal maneuver? Then why doesn’t it show her a visual signal of how well she’s doing against that goal? (Video games have nailed this. The “driving line” in Gran Turismo 6 comes to mind.)

Gran Turismo driving line

If it’s not helping her avoid collisions, the high-contrast motion of the “airduct” is a great deal of visual distraction for very little payoff. That wouldn’t be interaction so much as neurological distraction from the task at hand. So I even have to dispense with my usual New Criticism stance of accepting it as if it were perfect. Because if this were the intention of the interface, it would be encouraging disaster.

StarshipT-undocking17

The ship does have some environmental sensors, since when it is 5 meters from the “object,” i.e. the dock, a voiceover states this fact to everyone on the bridge. Note that it’s not panicked, even though that’s something like being a peach-skin’s thickness away from a hull breach worth bajillions of credits of damage. No, the voice just says it, like it was remarking on a penny it happened to see on the sidewalk. “Three meters from object,” is said with the same dispassion moments later, even though that’s a loss of 40% of the prior distance. “Clear” is spoken with the same dispassion, even though it should be saying, “Court martial in process…” Even the tiny little rill of an “alarm” that plays under the scene sounds more like your sister failing to respond to her Radio Shack alarm clock in the next room than—as it should be—a throbbing alert.

StarshipT-undocking24

Since the interface does not help her, actively distracts her, and underplays the severity of the danger, is there any apology for this?

1. Better: A viewscreen

Starship Troopers was made before the popularization of augmented reality, so we can forgive the film for not adopting that technology, even though it might have been useful. AR might have been a lot to explain to a 1997 audience. But the movie was made long after the popularization of the viewscreen forward display in Star Trek. Of course the film is embracing a unique aesthetic, but focus on the utility: replace the glass in front of her with a similar viewscreen, and you can even virtually shift her view to the back of the Rodger Young. If she is distracted by the “feeling” of the thrusters, perhaps a second screen behind her would let her swivel around to pilot “backwards.” With this viewscreen she’s got some (virtual) visual information about collision threats coming her way. Plus, you could augment that view with precise proximity warnings and, yes, if you want, air duct animations showing the ideal path (similar to what we see in Alien).

2. VP

The viewscreen solution still puts some burden on her as a pilot to translate 2D information on the viewscreen to 3D reality. Sure, that’s often the job of a pilot, but can we make that part of the job easier? Note that Starship Troopers was also created after the popularization of volumetric projections in Star Wars, so that might have been a candidate, too, with some third person display nearby that showed her the 3D information in an augmented way that is fast and easy for her to interpret.

3. Autopilot or docking tug-drones

Yes, this scene is about her character, but if you were designing for the real world, this is a maneuver that an agentive interface can handle. Let the autopilot handle it, or adorable little “tug-boat” drones.

StarshipT-undocking25

Dispatch

LOGANS_RUN_map_520

At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.

Using the small screens and unlabeled arrays of red and yellow lit buttons situated on an angled panel in front of them, the seated Sandmen can send a call out to catch runners, listen to any spoken communications, and respond with text and images.

LogansRun094

*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.

But OK, if it’s really that limited of an Übercomputer and can only focus on whatever is occupying it at the moment, at least make the controls usable by people. Let’s do the hard work of reducing the total number of controls, so they can be clustered all within easy reach rather than spread out so you have to move around just to operate them all. Or use your feet or whatever. Differentiate the controls so they are easy to tell apart by sight and touch rather than this undifferentiated mess. Let’s take out a paint pen and actually label the buttons. Do…do something.

LogansRun095

This display could use some rethinking as well. It’s nice that it’s overhead, so that dispatch can be thinking about field strategy rather than ground tactics. But if that’s the case, it could use some design help and some strategic information. How about downplaying the saturation on the things that don’t matter that much, like walls and plants? Then the Sandmen can focus more on the interplay of the Runner and his assailants. Next you could augment the display with information about the runner, and perhaps a best-guess prediction of where they’re likely to run, maybe the health of individuals, or the amount of ammunition they have.

Which makes me realize that more than anything, this screen could use the hand of a real-time strategy game user interface designer, because that’s what they’re doing. The Sandmen are playing a deadly, deadly video game right here in this room, and they’re using a crappy interface to try and win it.
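The “downplay what doesn’t matter” suggestion could be sketched as a relevance-weighted saturation map. This is a minimal illustration, not anything from the film: the layer names and relevance values are invented for the example.

```python
# Sketch: scale each map layer's color saturation by its tactical
# relevance, so the Runner and Sandmen pop while walls and plants
# recede into the background. Layers and weights are hypothetical.

RELEVANCE = {
    "runner": 1.0,
    "sandman": 0.9,
    "weapons_fire": 1.0,
    "walls": 0.2,
    "plants": 0.1,
}

def display_saturation(layer, base_saturation=1.0):
    """Return the saturation to render a layer at; anything the
    dispatcher doesn't strategically need defaults to fully muted."""
    return base_saturation * RELEVANCE.get(layer, 0.1)
```

With a weighting like this, a renderer can keep the full wireframe visible for context while making the tactically important dots the brightest things on the screen.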

Profiling “CAT” scan

fifthelement-122

After her escape from the nucleolab, Leeloo ends up on a thin ledge of a building, unsure where to go or what to do. As a police car hovers nearby, the officers use an onboard computer to try and match her identity against their database. One officer taps a few keys into an unseen keyboard, her photograph is taken, and the results display in about 8 seconds. Not surprisingly, it fails to find a match, and the user is told so with an unambiguous, red “NO FILE” banner across the screen.

fifthelement-128

This interface flies by very quickly, so it’s not meant to be read screen by screen. Still, the wireframes present a clear illustration of what the system is doing, and what the results are.

The system shouldn’t just provide dead ends like this, though. Any such system has to account for human faces changing over the time since the last capture: aging, plastic surgery, makeup, and disfiguring accidents, to name a few. Since Leeloo isn’t inhuman, it could provide some “closest match” results, perhaps with a confidence percentage alongside each one. Even if the confidence numbers were very low, that output would help the officers understand it was an issue with the subject, and not an issue of an incomplete database or weak algorithm.
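The “closest matches with confidence” idea amounts to returning a ranked top-k list instead of a binary hit/NO FILE. Here is a minimal sketch under invented assumptions: the names, the feature-vector representation, and the similarity metric are all hypothetical stand-ins for whatever a real face-matching system would use.

```python
# Sketch: instead of a dead-end "NO FILE", always return the top few
# candidates with a confidence score so officers can judge for
# themselves whether a low score means "changed face" or "not on file".

from dataclasses import dataclass

@dataclass
class Match:
    name: str
    confidence: float  # 0.0-1.0 similarity to the captured face

def closest_matches(query, database, top_k=3):
    """Rank every record by similarity and return the top_k,
    even when none of them is a confident hit."""
    scored = [Match(name, similarity(query, features))
              for name, features in database.items()]
    scored.sort(key=lambda m: m.confidence, reverse=True)
    return scored[:top_k]

def similarity(a, b):
    # Stand-in metric: overlap ratio of simple feature vectors.
    shared = sum(min(x, y) for x, y in zip(a, b))
    total = sum(max(x, y) for x, y in zip(a, b))
    return shared / total if total else 0.0
```

Even a screen full of 30%-confidence faces tells the officer something the red banner can’t: the database is working, the algorithm is working, and the subject simply doesn’t resemble anyone on file.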

One subtle element is that we don’t see or hear the officer telling the system where the perp is, or pointing a camera. He doesn’t even have to identify her face. It automatically finds her in the camera view, identifies her face, and starts scanning. The sliding green lines tell the officer what it’s finding, giving him confidence in its process, and offering an opportunity to intervene if it’s getting things wrong.

Nucleolab Progress Indicator

FifthE-nucleolab-009

As the nucleolab is reconstructing Leeloo, the screen on the control panel provides updates detailing the process. For the most part these updates are a wireframe version of what everyone can see with their eyes.

FifthE-nucleolab-029

FifthE-nucleolab-015

The only time it describes something we can’t see with our own eyes is when Leeloo’s skin is being “baked” by an ultraviolet light under a metal cover. Of course we know this is a narrative device to heighten the power of the big reveal, but it’s also an opportunity for the interface to actually do something useful. It has a green countdown clock, and visualizes something that’s hidden from view.

FifthE-nucleolab-020

As far as a progress indicator goes, it’s mostly useful. Mactilburgh presumably knows roughly how long things take and even the order of operations. All he needs is confirmation that his system is doing what it’s supposed to, and the absence of an error is enough for him. The timer helps, too, since he’s like a kid waiting for an Easy Bake Oven…of science.

But Munro doesn’t know what the heck is going on. Sure he knows some of the basics of biology. There’s going to be a skeleton, some muscle, some nerves. But beyond that, he’s got a job to do, and that’s to take this thing out the minute it goes pear-shaped. So he needs to know: Is everything going OK? Should I pop the top on a tall boy of Big Red Button? It might be that the interface has some kind of Dire Warning mode for when things go off the rails, but that doesn’t help during the good times. Giving Munro some small indicator that things are going well would remove any ambiguity and set him at ease.

An argument could be made that you don’t want Munro at ease, but a false positive might kill Leeloo and risk the world. A false negative (or a late negative) just risks her escape. Which happens anyway. Fortunately for us.
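The core of the suggestion for Munro is positive confirmation: report an explicit all-clear rather than staying silent until a Dire Warning fires. A minimal sketch of that pattern, with subsystem names and nominal ranges invented purely for illustration:

```python
# Sketch: summarize every subsystem into one explicit status line, so
# "everything is fine" is a visible signal rather than an absence of
# alarms. Readings, range names, and thresholds are all hypothetical.

def reconstruction_status(readings, nominal_ranges):
    """Map raw sensor readings to NOMINAL / WARNING / ABORT."""
    out_of_range = [
        name for name, value in readings.items()
        if not (nominal_ranges[name][0] <= value <= nominal_ranges[name][1])
    ]
    if not out_of_range:
        return "NOMINAL"  # explicit good news, not just silence
    if len(out_of_range) == 1:
        return f"WARNING: {out_of_range[0]}"
    return "ABORT"  # multiple systems off the rails: Big Red Button time
```

The point of the design isn’t the thresholds; it’s that Munro gets a single, legible answer to “is everything going OK?” at every moment, instead of having to infer it from the absence of bad news.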

FifthE-nucleolab-024