Sci-fi Spacesuits: Biological needs

Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons to not show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.

Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.

There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.

The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.

Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.

Remote monitoring of people in spacesuits is common enough to be a trope, but it has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.

Crowe’s medical monitor in Aliens (1986).

There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit functioning.

Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
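The steady-glow-versus-blinking-alert scheme speculated about above is simple enough to sketch. Here is a minimal, entirely hypothetical mapping—no real suit telemetry or status vocabulary is implied:

```python
# Hypothetical sketch: how a suit's exterior light might encode
# life-support status for line-of-sight monitoring. All names invented.

def suit_light(status: str) -> str:
    """Map a life-support status to an exterior light behavior."""
    behaviors = {
        "nominal": "steady green",     # all systems working smoothly
        "warning": "blinking amber",   # degraded but not yet critical
        "critical": "blinking red",    # draws attention, aids locating the wearer
    }
    # Unknown states fail conspicuous, not silent.
    return behaviors.get(status, "blinking red")

print(suit_light("nominal"))   # steady green
print(suit_light("critical"))  # blinking red
```

The one design choice worth noting: an unrecognized status falls through to the most attention-getting behavior, on the logic that a confused suit should look like a suit in trouble.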

Emergency deployment

One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Starlord’s helmet.

If such tech were available, you’d imagine that it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don’t see it. Given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.

What do we see in the real world?

Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.

The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides the life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.

The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.

Hey, isn’t the text on this thing backwards? Yes, because astronauts can’t look down from inside their helmets, and must view these controls via a wrist mirror. More on this later.

The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held closely to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water with exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the amount to which his body is cooled, via the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.

The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.


Back to sci-fi

So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.

Frito’s F’n Car interface

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading “PULL OVER.”

IDIOCRACY-fncar

The car interface has a column of buttons down the left reading:

  • NAV
  • WTF?
  • BEER
  • FART FAN
  • HOME
  • GIRLS

At the bottom is a square of icons: car, radiation, person, and the fourth is obscured by something in the foreground. Across the bottom is Frito’s car ID “FRITO’S F’N CAR” which appears to be a label for a system status of “EVERYTHING’S A-OK, BRO”, a button labeled CHECK INGN [sic], another labeled LOUDER, and a big green circle reading GO.

idiocracy-pullover

But the car doesn’t wait for him to pull over. With some tiny beeps it slows to a stop by itself. Frito says, “It turned off my battery!” Moments after they flee the car, it is converged upon by a ring of police officers with weapons loaded (including a rocket launcher pointed backward.)

Visual Design

Praise where it’s due: Zooming is one of the strongest visual attention-getting signals there is (symmetrical expansion is detected on the retina within 80 milliseconds!), and while I can’t find the source from which I learned it, I recall that blinking is somewhere in the top 5. Combining these with an audio signal means this critical alert is hard to miss. So that’s good.

comingrightatus.png
In English: It’s comin’ right at us!

But then. Ugh. The fonts. The buttons on the chrome seem to be set in some free Blade Runner knock-off font, and the text reading “PULL OVER” is in some headachey clipped-corner freeware font that neither contrasts with nor complements the Blade Jogger font, or whatever it is. I can’t quite hold the system responsible for the font of the IPPA license, but I just threw up a little into my Flaturin because of that rounded-top R.

bladerunner

Then there’s the bad-90s skeuomorphic Bevel & Emboss treatment on the buttons, which might be defended for making the interactive parts apparent, except that this same treatment is given to the label FRITO’S F’N CAR, which has no obvious reason ever to be pressed. It’s also used on the CHECK INGN and LOUDER buttons, taking their ADA-insulting contrast ratios and absolutely wrecking any readability.

I try not to second-guess designers’ intentions, but I’m pretty sure this is all deliberate. Part of the illustration of a world without much sense. Certainly no design sense.

In-Car Features

What about those features? NAV is a pretty standard function, and having a HOME button is a useful shortcut. Current versions of Google Maps have an Explore Places Near You function, which lists basic interests like Restaurants, Bars, and Events, and has a More menu with a big list of interests and services. It’s not a stretch to imagine that Frito has pressed GIRLS and BEER enough that they’ve floated to the top nav.

explore_places_near_you

That leaves only three “novel” buttons to think about: WTF, LOUDER, and FART FAN. 

WTF?

If I had to guess, the WTF button is an all-purpose help button. Like GM’s OnStar, but less well branded. Frito can press it and get connected to…well, I guess some idiot to see if they can help him with something. Not bad to have, though this probably should be higher in the visual hierarchy.

LOUDER

This bit of interface comedy is hilarious because, well, there’s no volume down affordance on the interface. Think of the “If it’s too loud, you’re too old” kind of idiocy. Of course, it could be that the media is on zero volume, and so it couldn’t be turned down any more, so the LOUDER button filled up the whole space, but…

  • The smarter convention is to leave the button in place and signal a disabled state, and
  • Given everything else about the interface, that’s giving the diegetic designer a WHOLE lot of credit. (And our real-world designer a pat on the back for subtle hilarity.)
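The “leave the button in place and signal a disabled state” convention from the first bullet can be sketched in a few lines. This assumes an invented minimal widget model, not any real car-UI toolkit:

```python
# Sketch of the "leave the button in place, signal disabled" convention.
# The Button class and volume scale (0-10) are invented for illustration.

from dataclasses import dataclass

@dataclass
class Button:
    label: str
    enabled: bool = True  # a renderer would gray out disabled buttons

def volume_buttons(volume: int) -> list[Button]:
    """Render both volume controls; disable rather than remove at the limits."""
    return [
        Button("LOUDER", enabled=volume < 10),
        Button("QUIETER", enabled=volume > 0),  # at zero: grayed out, still visible
    ]

# At minimum volume, QUIETER stays on screen in a disabled state,
# so the layout never shifts and the affordance is never hidden.
buttons = volume_buttons(0)
```

Keeping the disabled control visible is the point: the user learns where the control lives, even when it can’t currently do anything.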

FART FAN

This button is a little potty humor, and probably got a few snickers from anyone who caught it because amygdala, but I’m going to boldly say this is the most novel, least dumb thing about Frito’s F’n Car interface.

Heart_Jenkins_960.jpg
Pictured: A sulfuric gas nebula. Love you, NASA!

People fart. It stinks. Unless you have activated charcoal filters under the fabric, you can be in for an unpleasant scramble to reclaim breathable air. The good news is that getting the airflow right to clear the car of the smell has, yes, been studied—well, if not by science, at least scientifically. The bad news is that it’s not a simple answer.

  • Your car’s built in extractor won’t be enough, so just cranking the A/C won’t cut it.
  • Rolling down windows in a moving aerodynamic car may not do the trick due to something called the boundary layer of air that “clings” to the surface of the car.
  • Rolling down windows in a less-aerodynamic car can be problematic because of the Helmholtz effect (the wub-wub-wub air pressure) and that makes this a risky tactic.
  • Opening a sunroof (if you have one) might be good, but pulls the stench up right past noses, so not ideal either.

The best strategy—according to that article and conversation amongst my less squeamish friends—is to crank the AC, then open the driver’s window a couple of inches, and then the rear passenger window halfway.

But this generic strategy changes with each car, the weather (seriously, temperature matters, and you wouldn’t want to do this in heavy precipitation), and the skankness of the fart. This is all a LOT to manage when one’s eyes are meant to be on the road and you’re in a nauseated panic. Having the cabin air just refresh at the touch of one button is good for road safety.
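Collapsing all of that management into one button is easy to imagine as logic. A hedged sketch, with the actuator names and window fractions invented for illustration (real cars expose nothing like this API):

```python
# Hypothetical one-button "fast freshen" routine. Actuator names, window
# fractions, and the rain rule are assumptions, not any real car's API.

def fast_freshen(aerodynamic: bool, raining: bool) -> dict:
    """Pick a ventilation preset so the driver never has to think about it."""
    if raining:
        # Keep windows shut; rely on HVAC in fresh-air (non-recirculating) mode.
        return {"ac": "max", "recirculate": False, "windows": {}}
    preset = {
        "ac": "max",
        "recirculate": False,  # pull outside air, don't loop the cabin air
        "windows": {"driver": 0.1, "rear_passenger": 0.5},
    }
    if not aerodynamic:
        # Crack a second window slightly to break the Helmholtz "wub-wub".
        preset["windows"]["front_passenger"] = 0.05
    return preset
```

The car already knows its own aerodynamics and whether its rain sensor is wet, which is exactly why a single button beats asking a nauseated driver to improvise.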

If it’s so smart, then, why don’t we have Fart Fan panic buttons in our cars today?

I suspect car manufacturers don’t want the brand associations of having a button labeled FART FAN on their dashboards. But, IMHO, this sounds like a naming problem, not some intractable engineering problem. How about something obviously overpolite, like “Fast freshen”? I’m no longer in the travel and transportation business, but if you know someone at one of these companies, do the polite thing and share this with them.

Idiocracy-car
Another way to deal with the problem, in the meantime.

So aside from the interface considerations, there are also some strategic ones to discuss with the remote kill switch, but that deserves its own post, next.

The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen by which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. The icon doesn’t make sense for muting. The animation showing the unmuting shows it flipping from right to left, so it does provide a bit of feedback for Matt, but it should be a more fitting icon and be labeled.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear, though, how he knows that she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable some high-intensity alert system for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though this is currently used to indicate the passage of time during the Time Adjustment (see below).

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but is otherwise a featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run where subjects are asked to imagine animals as seen from the side and then timed how long it took them to imagine zooming into the eye. It takes longer, usually, for us to imagine the zoom to an elephant’s eye than a mouse’s because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt has to turn on her virtual reality is confusing. We hear 7 beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers had to later retrofit an interface that fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like 3 weeks pass for her inside her featureless white room. Then again for 6 months. The solitary confinement makes her crazy and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it’s months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control. He sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time, and then the display returns to normal. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but this one could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people since time travel doesn’t exist, so I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes the most sense for her time sentence to end and return to real-world speed automatically. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside of it reading “SET TIME.” This is the same icon seen 2nd down  in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that that are very readily understandable by users and audiences. Have the whole thing work and look like a fast forward control rather than this confusing mess. If he does need to end it early, as he does in the 6 months sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. If Hamm had touched the interface twice, I would design the first tap to set the start date and the second tap to set the end date. The third is the commit.

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
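The missing feedback is just a mapping from accumulated rotation to a displayable time value. A sketch, assuming—purely for illustration, since the episode never establishes the scale—that one full turn of the outer ring equals one week:

```python
# Sketch of the missing microinteraction feedback: convert accumulated dial
# rotation into a readable time value. The 360°-per-week scale is assumed.

DEG_PER_WEEK = 360  # one full outer-ring turn = one week (assumption)

def dial_to_days(total_degrees: float) -> float:
    """Turn accumulated dial rotation into days, for a live on-screen readout."""
    return total_degrees / DEG_PER_WEEK * 7

# At 370° the readout would say ~7.2 days -- no more guessing 21 vs 28.
print(f"{dial_to_days(370):.1f} days")
```

With a readout like this on screen, the same number can simply count down as the fast-forward runs, doubling as the time-remaining display suggested above.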

Cookie_settime.gif

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie, setting the time ratio to “1000 years per minute” and then just letting it run while he leaves for Christmas break. The current time ratio should also be displayed and a control provided; it is absent from the screen.

Black_Mirror_Cookie_31.png

Add psychological state feedback

There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
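The highlighting logic described above could be as simple as comparing each variable’s latest reading to its known limit. A sketch with invented variable names and limit values (nothing in the episode specifies these):

```python
# Sketch of flagging trendlines that approach psychological limits.
# Variable names, limit values, and the margin are all invented.

LIMITS = {"stress": 0.8, "agitation": 0.7, "speech_valence": -0.6}

def flag_risky(readings: dict, margin: float = 0.1) -> list:
    """Return the variables whose latest reading is within `margin` of its limit."""
    flagged = []
    for name, limit in LIMITS.items():
        value = readings.get(name)
        if value is None:
            continue
        # Valence limit is a floor (too negative); the others are ceilings.
        near = value <= limit + margin if limit < 0 else value >= limit - margin
        if near:
            flagged.append(name)
    return flagged

print(flag_risky({"stress": 0.75, "agitation": 0.3, "speech_valence": -0.55}))
# → ['stress', 'speech_valence']
```

A renderer would then draw those flagged sparklines in a warning color, giving the operator the “ease off” signal the episode’s interface lacks.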

Remote wingman via EYE-LINK

EYE-LINK is an interface used between a person at a desktop, who uses support tools to help another person who is live “in the field” using Zed-Eyes. The working relationship between the two is very like that of Vika and Jack in Oblivion, or the A.I. in Sight.

In this scene, we see EYE-LINK used by a pick-up artist, Matt, who acts as a remote “wingman” for pick-up student Harry. Matt has a group video chat interface open with paying customers eager to lurk, comment, and learn from the master.

Harry’s interface

Harry wears a hidden camera and microphone. This is the only tech he seems to have on him: he only hears his wingman’s voice, and can only communicate back by talking generally, talking about something he’s looking at, or using pre-arranged signals.

image1.gif
Tap your beer twice if this is more than a little creepy.

Matt’s interface

Matt has a three-screen setup:

  1. A big screen (similar to the Samsung Series 9 displays) which shows a live video image of Harry’s view.
  2. A smaller transparent information panel for automated analysis, research, and advice.
  3. An extra, laptop-like screen where Matt leads a group video chat with a paying audience, who are watching and snarkily commenting on the wingman scenario. It seems likely that this is not an official part of the EYE-LINK software.
image55.png
image47.png
image28.png
Please make a note of the hilarious and condemning screen names of the peanut gallery: Pie Ape, Popkorn, El Nino, Nixon, Fappucino [sic], Stingray, I_AM_WALDO, and Wigwam.

Harry communicates to Matt by speaking or enacting a crude sign language for the video camera. Matt communicates back to Harry using an audiolink through a headset. Setting up the connection is similar to Skype/Hangouts (even featuring an icon of an archaic laptop.) Every first-person EYE-LINK view is characterized by a pixelated gradient at the sides of the screen.

Matt’s wingman support tools

We see that Matt has a number of tools to help him act as a remote wingman for Harry, evident through six main navigation items on his side screen: a home icon, Web, News, Image, Video, and Social Media. The home icon is always bright white, but the section he’s currently viewing is a bolded gray.

In the Image mode, it runs face recognition on a still image from Harry’s video feed, and provides its best match for further research.

image20.png

Somehow he can also get information on the event that Harry is attending. In this view, there’s a floor plan of the venue, which Matt can use to instruct Harry.

image11.png

OK. This is of course a creepy use of this interface, but it’s easy to imagine scenarios where something like the EYE-LINK is used virtuously:

  • A nurse practitioner needing to call on the expertise of a remote, more senior caregiver.
  • An airplane maintenance worker needing to speak to the aircraft engineers about a problem she’s encountering.
  • Paintball players coordinating their game through a centralized team captain.

So with that in mind, let’s review this with the caveat that of course the specific wingman scenario is super creepy.

Analysis: Harry’s feedback

The communication channel back from Harry to Matt doesn’t need to be too rich for these purposes, but there are ways that it could be richer. Of course Harry could pick up his phone and simply type something that Matt could see. But if the communication needed to be undetectable to a casual observer, there are other options. Subvocalization is nascent, but a possibility and mostly-natural for the speaker.

78105main_ACD04-0024-001.jpg
Image courtesy of the NASA Ames Research demo of subvocalization.

If the remote user has time for training, subgestural detection might be another option. This is like subvocal detection, but instead of detecting throat movements used in speech, it would be an armband (like the Myo) that could detect gentle finger presses allowing the user chorded keyboard input which he could use while, say, gripping the beer bottle.

tw_hand.png

Either way, richer “undetectable” communication mechanisms exist, and could be incorporated.

Analysis: Graphics

One of the refreshing things about the interfaces in Black Mirror generally—and these screens in particular—is how understated they are, especially compared to the Rococo interfaces that populate much of sci-fi. (Compare the two below.)

The color palette is spartan grayscale. The typeface is Helvetica (or adjacent). Nothing 3D, nothing swoopy, no complexity for complexity’s sake.

Analysis: Navigation and layout

The navigation for the information panel is a little confusing. Sure, it looks like lots of websites. But this chunking of information into separate screens requires that Matt hunt for information that’s of interest. Better would be to have a single, dynamic screen, and have the system do real-time parsing, providing suggestions and notifications in the context of the event. If he needed to dive down into some full-screen mode, let it fill the screen with some easy way to return to context.

Also, how did he get to the event view? Is that just a web view? What bar puts its floorplan on its site? There is no primary navigation element that would on first glance explain how he got there, or once there, how he might get back to other screens. The home icon is obscured. (Maybe this is designed by Apple, though, and has some entirely hidden swipe gesture or long press to request the event screen or force a return to home?) It’s really hard to say, and so fails affordance.

Analysis: Group chat

A quick look at any modern group video chat software shows that this is too pared down, with lots of audio and video controls missing, as well as controls for the “meeting.” It’s possible that these appear only if Matt interacts with the cursor on that laptop, but again, affordances.

Analysis: More wingman tools?

There are more tools that would be useful to a wingman’s job, which could be built even now—without the strong AI that this diegesis has. They could be more virtuous, like…

  • Ways to keep Harry calm, focused, and feeling confident.
  • Reminders of general best practices for making a good impression.
  • Automatic privacy blackout when Harry approaches people for conversation.
thegame

Or they could be…uh…more questionable. (Here I’ll confess to referencing The Game: Penetrating the Secret Society of Pickup Artists by Neil Strauss, for how a real PUA might handle it.)

  • A transcript of the conversation with key phrases highlighted, indicating the “target’s” attitudes and levels of interest.
  • Personality analysis on social media, listing derived topics that these particular “targets” would find engaging.
  • A list of Harry’s practiced “routines” for Matt to quickly review, and suggest. The AI could even highlight its best-guess suggestion.
  • Counts of “indicators of interest.”
  • An overview of Matt’s favored stages of pickup, with an indicator of where Harry is and how well he performed on the prior stages.

Either way, the support that these tools offer is pretty minimal compared to what could be done, but then again, that kind of fits the story. Yes, the creepiness of the remote wingman support tools is part of the point. But the whole reason the peanut gallery pays for the honor of watching Matt coach Harry is (yes, voyeurism, but also) to witness a master wingman at his work. If the system was too much of a support, the peanut gallery would be less incentivized to pay to see him in action.

Viper Controls

image03

The Viper is the primary space fighter of the Colonial Fleet.  It comes in several varieties, from the Mark II (shown above), to the Mark VII (the latest version).  Each is made for a single pilot, and the controls allow the pilot to navigate short distances in space to dogfight with enemy fighters.

image09

Mark II Viper Cockpit

The Mark II Viper is an analog machine with a very simple Dradis, physical gauges, and paper flight plans.  It is a very old system.  The Dradis sits in the center console with the largest screen real-estate.  A smaller needle gauge under the Dradis shows fuel levels, and a standard joystick/foot pedal system provides control over the Viper’s flight systems.

image06

Mark VII Viper Cockpit

The Viper Mk VII is a mostly digital cockpit with a similar Dradis console in the middle (but with a larger screen and more screen-based controls and information).  All other displays are digital screens.  A few physical buttons are scattered around the top and bottom of the interface.  Some controls are pushed down, but none are readable.  Groups of buttons are titled with text like “COMMS CIPHER” and “MASTER SYS A”.

Eight buttons around the Dradis console are labeled with complex icons instead of text.

image07 image08

When the Mk VII Vipers encounter Cylons for the first time, the Cylons use a back-door computer virus to completely shut down the Viper’s systems.  The screens fuzz out in the same manner as when Apollo gets caught in an EMP burst.

The Viper Mk VII is then completely uncontrollable, and the pilot’s joystick-based controls cease to function.

Overall, the Viper Mk II is set up similarly to a WWII P-51 Mustang or early production F-15 Eagle, while the Viper Mk VII is similar to a modern-day F-16 Falcon or F-22 Raptor.

 

Usability Concerns

The Viper is a single-seat starfighter, and appears to excel in that role.  The pilots focus on their ship, and the Raptor pilots following them focus on the big picture.  But other items, including color choice, font choice, and control placement, are an issue.

Items appear a little small, and it takes a lot of training to know what to look for on the dashboards.  The black lines radiating from the large grouping labels appear to go nowhere and provide no extra context or grouping.  Additionally, the controls (outside of the throttle and joystick) require quite a bit of reach from the seat.

Given that the pilots are accelerating at 9+ gs, reaching a critical control in the middle of a fight could be difficult.  Hopefully, the designers of the Vipers made sure that ‘fighting’ controls are all within arm’s reach of the seat, and that the controls requiring more effort are for secondary tasks.

Similarly, all-caps text is the hardest to read at a glance, and should be avoided for interfaces like the Viper that require quick targeting and actions in the middle of combat.  The other text is very small, and it would be worth doing a deeper evaluation in the cockpit itself to determine if the font size is too small to read from the seat.

If anyone reading this blog has an accurate Viper cockpit prop, we’d be happy to review it! 

Fighter pilots in the Battlestar Galactica universe have quick reflexes, excellent vision, and stellar training.  They should be allowed to use all of those abilities for besting Cylons in a dogfight, instead of being forced to spend time deciphering their Viper’s interface.

Avengers, assembly!

Avengers-lookatthis.png

When Coulson hands Tony a case file, it turns out to be an exciting kind of file. For carrying, it’s a large black slab. After Tony takes it, he grips the long edges and pulls in opposite directions. One part is a thin translucent screen that fits into an angled slot in the other part, in a laptop-like configuration, right down to a built-in keyboard.

The grip edge

The grip edge of the screen is thicker than the display, so it has a clear, physical affordance as to what part is meant to be gripped and how to pull it free from its casing, and simultaneously what end goes into the base. It’s simple and obvious. The ribbing on the grip unfortunately runs parallel to the direction of pull. It would make for a better grip and a better affordance if the grip was perpendicular to the direction of pull. Minor quibble.

I’d be worried about the ergonomics of an unadjustable display. I’d be worried about the display being easily unseated or dislodged. I’d also be worried about the strength of the join. Since there’s no give, enough force on the display might snap it clean off. But then again this is a world where “vibranium” exists, so material critiques may not be diegetically meaningful.

Login

Once he pulls the display from the base, the screen boops and animated amber arcs spin around the screen, signaling him to log in via a rectangular panel on the right-hand side of the screen. Tony places four fingers on the spot and drags down. A small white graphic confirms his biometrics. As a result, a WIMP display appears in grays and amber colors.

Avengers-asset-browser05

Briefing materials

One window on the left hand side shows a keypad, and he enters 1-8-5-4. The keypad disappears and a series of thumbnail images—portraits of members of the Avengers initiative—appear in its place. Pepper asks Tony, “What is all this?” Tony replies, saying, “This is, uh…” and in a quick gesture, places his ten fingertips on the screen at the portraits, and then throws his hands outward, off the display.

The portraits slide offscreen to become ceiling-height volumetric windows filled with rich media dossiers on Thor, Steve Rogers, and Bruce Banner. There are videos, portraits, schematics, tables of data, cellular graphics, and maps. There’s a smaller display about the tesseract near the desktop where the “file” rests. (More on this bit in the next post.)

Briefing.gif

Insert standard complaint here about the eye strain that a translucent display causes, and the apology that yes, I understand it’s an effective and seemingly high-tech way to show actors and screens simultaneously. But I’d be remiss if I didn’t mention it.

The two-part login shows an understanding of multifactor authentication—a first in the survey, so props for that. Tony must provide something he “is”, i.e. his fingerprints, and something he knows, i.e. the passcode. Only then does the top secret information become available.
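For anyone building a login flow, the logic of that gate is worth spelling out: neither factor alone should unlock anything. Here’s a minimal sketch in Python (the stored fingerprint value is a hypothetical stand-in; the 1-8-5-4 passcode is the one Tony enters in the scene):

```python
# Minimal sketch of multifactor authentication: access requires BOTH a
# biometric factor (something you are) and a passcode (something you know).
# The enrolled fingerprint value is a hypothetical stand-in.

def authenticate(fingerprint_hash: str, passcode: str) -> bool:
    ENROLLED_FINGERPRINT = "a1b2c3"   # stand-in for a stored biometric template
    ENROLLED_PASSCODE = "1854"        # the code Tony keys in
    factor_is = fingerprint_hash == ENROLLED_FINGERPRINT
    factor_knows = passcode == ENROLLED_PASSCODE
    return factor_is and factor_knows  # both factors must pass

# Either factor alone is insufficient:
assert authenticate("a1b2c3", "1854")       # both factors: access granted
assert not authenticate("a1b2c3", "0000")   # right finger, wrong code
assert not authenticate("zzz", "1854")      # right code, wrong finger
```

The scene gets the ordering right, too: the biometric swipe unlocks the keypad, and only then does the passcode unlock the dossiers.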

I have another standard grouse about the screen providing no affordances that content has an alternate view available, and that a secret gesture summons that view. I’d also ordinarily critique the displays for having nearly no visual hierarchy, i.e. no way for your eyes to begin making sense of it, and a lot of pointless-motion noise that pulls your attention in every which way.

But, this beat is about the wonder of the technology, the breadth of information SHIELD has in its arsenal, and the surprise of familiar tech becoming epic, so I’m giving it a narrative pass.

Also, OK, Tony’s a universe-class hacker, so maybe he’s just knowledgeable/cocky enough to not need the affordances and turned them off. All that said, in my due diligence: Affordances still matter, people.

Ford Explorers

image01

The Ford Explorer is an automated vehicle driven on an electrified track through a set route in the park.  It has protective covers over its steering wheel, and a set of cameras throughout the car:

  • Twin cameras at the steering wheel looking out the windshield to give a remote chauffeur or computer system stereoscopic vision
  • A small camera on the front bumper looking down at the track right in front of the vehicle
  • Several cameras facing into the cab, giving park operators an opportunity to observe and interact with visitors. (See the subsequent SUV Surveillance post.)

Presumably, there are protective covers over the gas/brake pedals as well, but we never see that area of the interior. Evidence comes from the moment Dr. Grant and Dr. Sattler want to stop and look at the triceratops: they don’t even bother to reach for the brake pedal, but merely hop out of the SUV.

image02

The SUVs also have an interactive CD-ROM player in the center console with a touchscreen.  The CD-ROM has narrated, basic information about the park and exhibits, and has set points during the tour at which it plays information about specific areas or dinosaurs.

image00

The Single, Central Screen

What should be a focal point and value-add for everyone in the car is poorly placed and suboptimally set up.  This would be the perfect situation for a second screen at the second-row console, at least.  If we look to more modern technology, we could start to include HUD overlays on all the windows of the Ford Explorer to track dinosaurs (so passengers would know where to look).  This could integrate with the need for better Night Vision Goggles.

A second concern is the hand-controlled interface.  Suddenly, everyone in the SUV is subservient to the two people who are within touch distance of the screen. Jurassic Park has enough location data and content in the presentation to be able to customize the play order to the tour.  This would keep an overactive kid from taking control of the screen and ruining the tour for everyone else in the car.
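Such location-driven sequencing is simple to imagine. A purely hypothetical sketch (the film never shows how the CD-ROM is cued, and these waypoints and segment names are invented): map each point along the electrified track to a narration segment, and let the vehicle’s position, not a passenger’s finger, select what plays.

```python
# Hypothetical sketch of location-cued tour narration: the vehicle's
# position along the track (in meters) selects the segment to play,
# so playback order is driven by the route, not by touchscreen input.
# Waypoint distances and segment names are invented for illustration.

TOUR_CUES = [
    (0,    "welcome"),
    (1200, "dilophosaurus_paddock"),
    (2500, "t_rex_paddock"),
    (4100, "triceratops_enclosure"),
]

def segment_for_position(track_position_m: float) -> str:
    """Return the narration segment for the latest waypoint passed."""
    current = TOUR_CUES[0][1]
    for cue_position, segment in TOUR_CUES:
        if track_position_m >= cue_position:
            current = segment
    return current

assert segment_for_position(0) == "welcome"
assert segment_for_position(2600) == "t_rex_paddock"
```

With the track controlling the cue points, the touchscreen could be demoted to optional deep-dive content, and no overactive kid could derail the tour.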

Steering Controls

The Ford Explorers maintain the steering wheel and gear selectors from their off-the-shelf compatriots.  This has two detrimental effects on the passengers:

  • Cramps the person in the driver’s seat
  • Gives a false impression of control

The wasted space is the most detrimental to the tour experience.  While the passengers have legroom, arm room, and plenty of space to turn around, the driver is forced to deal with space-hogging controls that are unusable.

By keeping the steering wheel, the SUV also implies that the driver could take control of the car.  We see no evidence of that, and Dr. Grant even climbs into the back of the Explorer instead of staying in the driver’s position.

The SUV drives itself, and shouldn’t present a familiar affordance that is, in fact, false.

Comfort

image03
The Mercedes F015 Self Driving Concept Car

A more radical concept would be completely custom vehicles.  Mercedes recently revealed a concept car focused around a lounge feel.  Other carmakers have done the same (Ford, Chevy, etc.).  Its advantages are the increased social focus of the interior and the easier access to all the windows.

Would this be more expensive? Yes, but as Hammond mentions frequently, they “spared no expense” to improve the experience for the guests.

The original article referenced these as Jeep Grand Cherokees… which they definitely are not.  As pointed out by Cary (http://smokeythejeep.wordpress.com/), the only Jeeps on the island are the gas powered models that the park rangers and staff use to get around the island.  These, as the article now states, are Ford Explorers ca. 1992.

TETVision

image05

The TETVision display is the only display Vika is shown interacting with directly—using gestures and controls—whereas the other screens on the desktop seem to be informational only. This screen is broken up into three main sections:

  1. The left side panel
  2. The main map area
  3. The right side panel

The left side panel

The communications status is at the top of the left side panel and shows Vika whether the desktop is online or offline with the TET as it orbits the Earth. Directly underneath this is the video communications feed for Sally.

Beneath Sally’s video feed is the map legend section, which serves the dual purposes of providing data transfer to the TET and to the Bubbleship as well as a simple legend for the icons used on the map.

The communications controls, which are at the bottom of the left side panel, allow Vika to toggle the audio communications with Jack and with Sally.

The main map area

The largest section is the viewport where the various live feeds are displayed. The main map, which serves as a radar, as well as the remote video feeds she uses to monitor Jack are both in this section of the display.

The right side panel

The panel on the right side of the map contains the video feed controls, which allow Vika to toggle between live footage from the Bubbleship, the TET, and of course, the main map view.

Although never shown in use in the film, the bottom right of the screen houses the tower rotation controls. This unused control is the only indication the capability even exists, so it is unknown whether the tower rotates 360 degrees or whether it’s limited to set points. (More on this below.)

It has robust capabilities

image02

At one point in the movie, Vika is able to use the drones to search for bio trail signatures when Jack is abducted by the scavs.

image06

Vika is also able to detect and decode various types of signals such as the morse code message sent by Jack or the rogue signal sent out by the scavs.

image08

And, probably unbeknownst to Jack and Vika, the TETVision can be controlled remotely from the TET to allow Sally access to the data stored on the desktop—as shown at one point in the movie, when Sally pulls up a past bio trail signature to send drones after Jack and the scavs.

It’s missing a critical layer of data

image03

At the beginning of the film, as Jack heads toward the downed drone 166, he suddenly encounters a dangerous lightning storm and nearly plunges to his death when the Bubbleship loses power. His signature disappears from the TETVision map, but from Vika’s perspective there is no indication as to what could have happened — or that there was any danger to begin with.

image01

Since the weather is unstable and constantly changing, it would have been better to include a weather overlay so that Vika could have notified Jack of the storm—allowing him to fly around it instead of straight into it.

It’s got some useless bits

image09

The tower rotation controls are never shown in use in the film, so it’s not clear what benefit rotating the tower would serve. The main purpose of their mission is to ensure the hydro-rigs are secure and functioning properly, not getting an optimal view.

image04

The tower is almost completely surrounded by windows as it is. And since the tower windows already face the hydro-rigs, what would be the benefit of changing vantage points?

It seems that the space could be used for something more beneficial to Vika such as bike, hydro-rig and drone cam feeds. This would provide Vika with more eyes on the ground, allowing her the additional support to keep Jack safe and monitor scav activity.

From a clustering standpoint, it would also fall in line logically with the other feed controls on the right side panel.

And some unnecessary visual feedback

image07

Towards the end of the movie, Sally is trying to find Jack and the scavs. She accesses Vika’s desktop remotely in order to pull up the bio trail records. Although no one is around to see the information, the TETVision displays the process as it happens. Of course, this is necessary for the narrative to progress, but in a real-life situation Sally would only need to see the data on her side—not from the desktop in Tower 49. If they’ve managed interstellar travel, cloning, terraforming, and cognitive reprogramming of alien species, they’re not likely still using VNC. This type of interaction should simply run in the background and not be visible on screen.

Better: Provide useful visuals

When a drone picks up a bio trail signal, a visual of a DNA sequence is displayed. Since the analysis is being conducted by Sally on the TET, it seems that this information isn’t really useful to Vika at all.

image00

From Vika’s point of view it seems like the actual trail would be more important, so why not show a drone cam feed complete with the HUD overlay? She could instantly gain more information by seeing that there are two bio trails—proving that Jack has been captured by the scavs and taken to another location.

Drone Programmer

A close-up of a hand wearing a glove holding a futuristic device with a screen displaying a holographic globe and various data interfaces.

One notable hybrid interface device, with both physical and digital aspects, is the Drone Programmer. It is used to encode key tasks or functions into the drone. Note that it is seen only briefly—so we’re going off very little information. It facilitates a crucial low-level reprogramming of Drone 172.

This device is a handheld item, grasped on the left, approximately 3 times as wide as it is tall. Several physical buttons are present, but are unused in the film: aside from grasping, all interaction is done through use of a small touchscreen with enough sensitivity to capture fingertip taps on very small elements.

Jack uses the Programmer while the drone is disabled. When he pulls the cord out of the drone, the drone restarts and immediately begins trying to move and understand its surroundings.

A person stands facing a large, futuristic robotic head with multiple cameras and sensors, while two armed figures are positioned nearby in a dimly lit environment.
When Drone 172 is released from the Programmer cable, it is in a docile and inert state…
A person standing in front of a large, futuristic robotic machine with glowing lights and mechanical arms, set in a dimly lit environment.
…but it quickly becomes aware, its failsafes shut down and its onboard programming taking over.

From this we understand that drones are controlled via internal software; this is the only time we see them programmed or their behavior otherwise influenced by a human. This reprogramming requires an external device wired into the drone in direct physical proximity, which suggests an otherwise high level of autonomy for each drone.

(Narrative implications) Following Orders

The Drone Programmer, and the way it interacts with Drone 172, suggests useful information about the Drones’ default states—namely, that their default state is autonomous, aggressive, and proactive, depending upon their orders and programming.

Drone 172 does not attack at this stage, and we have seen through Jack’s eyes on the screen that this is due to an overriding primary objective, implanted directly into the Drone’s firmware / low level programming: Rendezvous with the Tet.

Low Level Controller: Handle With Care

A gloved hand holding a futuristic device with a digital screen displaying various readings and graphs.

Its suggestion of a provisional or failsafe role is reinforced by warning text above the display (legible at high resolution), reflective of its power: “Electric Hazard Do Not Touch Terminals on Both Lines at Same Time: Lead Ends May Be Energized…”

Between this and the sparks ignited when the cable is detached from the Drone, one gets the sense of a device somewhere between a terminal and a jumper cable. Potent, hazardous, direct.

A close-up image of a hand holding a wire while interacting with the interior of a mechanical object.
A close-up of a male astronaut in a futuristic suit, focused on a mechanical device above him, set in a dimly lit environment with sparks and steam.

Jack is clearly at ease with the Programmer and its usage from repair sessions at home and in the field. This ease suggests either that his training (or memory replacement) is thorough, or that such low level work is needed frequently enough to be quite familiar.

The latter explanation, along with the Programmer’s nature as a physical device requiring direct proximity, would reinforce the interpretation that Tet places a remarkable amount of trust in instances of the human Maintenance team, and that the equipment in question is nearly symbiotic with the Team(s) in its need for frequent recovery.

Thus through this one seemingly incidental device, and its low level role in the chain of command, we can deduce that the combination of Drones and Team(s) is much more effective than either could be individually. Jack was reprogrammed by his time spent in curious wandering, combined with the opportunity presented by the book quotation that served as a trigger. In the case of Jack, the book and its couplet are the low-level reprogramming device, shocking in their directness.

Dialogue within the film reinforces the analogy directly: We learn during this sequence that the first invasion phase entailed many instances of a short-lived (non-learning) Jack as soldier. We also learn that phase two is this symbiotic maintenance arrangement between human and machine. When it is suggested that Drone 172 is the weapon, Jack corrects that it is he himself—its user and maintainer—who is the weapon. Without his role as user and maintainer, the machine would ultimately be a neutralized mechanical husk.

Lessons:

  1. Low level interfaces suggest fundamental programming and activity.
    (NOTE: Compare to interfaces such as the Nostromo Self Destruct pulls in Alien, etc.)
  2. Use of low level interfaces suggests familiarity and/or “grace under pressure”, as well as systemic trust in the user.
  3. Low level interfaces suggest a deep symbiosis between the user and the machine, to the point of interdependence.
    (NOTE: Compare to failsafe systems and manual overrides in aeronautics and (a few realistic moments in) space films such as Sunshine. In an alternate universe, I have the time to cover/analyse Sunshine to uncover this very dynamic…)
  4. Bonus Lesson (Oblivion-centric): By analogy, in highly technological or post-apocalyptic settings, books are, for humans, a low level interface, forcing the user to slow down and absorb sometimes startling, unexpected, or course-changing information.

Drone Status Feed

Oblivion-Desktop-Overview-002
Oblivion-Desktop-DroneMonitor-001

 

As Vika is looking at the radar and verifying visuals on the dispatched drones with Jack, the symbols for drones 166 and 172 begin flashing red. An alert begins sounding, indicating that the two drones are down.

Oblivion-Desktop-DroneCoordinates

Vika wants to send Jack to drone 166 first. To do this she sends Jack the drone coordinates by pressing and holding the drone symbol for 166, at which point its coordinate data is displayed. She then drags the coordinate data with one finger to the Bubbleship symbol and releases. The coordinates immediately display on Jack’s HUD as a target area showing the direction he needs to go.

Simple interactions

Overall, the sequence of interactions for this type of situation is pretty simple and well thought out. Sending coordinates is as simple as:

  1. Tap and hold on the symbol of the target (in this case the drone) using one finger
  2. A summary of coordinates data is displayed around the touchpoint (drone symbol)
  3. Drag data over to the symbol of the receiver (in this case the Bubbleship)

Then on Jack’s side, the position of the coordinates target on his HUD adjusts as he flies toward the drone. Can’t really get much simpler than that.
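The three-step sequence above is essentially drag-and-drop with a payload reveal. A tiny hypothetical model (the symbol names and coordinate payload are mine, not from the film): holding a symbol surfaces its payload, and releasing over a valid receiver delivers it.

```python
# Hypothetical model of the tap-hold-drag interaction for sending
# coordinates from a source symbol (a drone) to a receiver (the
# Bubbleship). Names and the coordinate payload are invented.

class MapSymbol:
    def __init__(self, name, coordinates=None):
        self.name = name
        self.coordinates = coordinates
        self.received = None  # payload delivered by a drag gesture

def tap_and_hold(symbol):
    """Steps 1-2: holding a symbol surfaces its coordinate payload."""
    return symbol.coordinates

def drag_release(payload, receiver):
    """Step 3: releasing the payload over a receiver delivers it."""
    receiver.received = payload

drone_166 = MapSymbol("drone_166", coordinates=(37.2, -115.8))
bubbleship = MapSymbol("bubbleship")

payload = tap_and_hold(drone_166)   # coordinates appear around the touchpoint
drag_release(payload, bubbleship)   # coordinates now on Jack's HUD
assert bubbleship.received == (37.2, -115.8)
```

Note how few concepts the user needs: one gesture to reveal, one to transfer, and the receiver handles the rest.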

However…

When Vika initially powers up the desktop, the drone status feed already shows drones 166 and 172 down. This is fine, except the alert sound and blinking icons on the TETVision don’t occur until Jack has already reached the hydro-rigs. This is quite a significant time lag between the drone status feed and the TETVision feed. It would be understandable if there was a slight delay in the alert sound upon startup. An immediate alert sound would likely mean there is something wrong with the TETVision system itself. That said, the TETVision drone icons should at the very least already be blinking red on load.

Monitoring drone 166

Oblivion-Desktop-DroneMonitor-005

As Jack is repairing drone 166, Vika watches the drone status feed on her desktop. The drone status feed is a dedicated screen to the right of the TETVision feed.

Oblivion-Desktop-DroneMonitor-000

It is divided into two main sections, the drone diagnostic matrix to the left and the drone deployment table to the right.

The drone deployment table lists all drones currently working the security perimeter, along with an overview of information including drone ID, a diagram, and operational status. The drone diagnostic matrix shows data such as fuel status and drone positioning along the perimeter, as well as a larger detailed diagram of the selected drone.

Oblivion-Desktop-DroneMonitor

By looking at the live diagnostics diagram, Vika is able to immediately tell Jack that the central core is off alignment. As soon as Jack finishes repairing the central core, the diagram updates that the core is back in alignment and an alert sound pings.

How does the feed know which drone to focus on?

Since there is no direct interaction with this monitor shown in the film, it is assumed to be an informational display. So, how does the feed know which drone to focus on for diagnostics?

One possibility could be that Jack transmits data from the ground through his mobile drone programmer handset, which is covered in another post. However, a great opportunity for an example of agentive tech would be that when Vika sends the drone coordinates to the Bubbleship, the drone status feed automatically focuses on that one for diagnostics.
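That agentive behavior could be as simple as having the coordinate-send event also drive the diagnostic selection. A hypothetical sketch (the film shows no such mechanism, and all names here are invented):

```python
# Hypothetical sketch of agentive focus: when coordinates are dispatched
# to the Bubbleship, the drone status feed automatically selects that
# drone for detailed diagnostics, with no extra input from Vika.

class DroneStatusFeed:
    def __init__(self):
        self.focused_drone = None

    def on_coordinates_sent(self, drone_id):
        # React to the dispatch event rather than waiting for a manual pick.
        self.focused_drone = drone_id

def send_coordinates(drone_id, listeners):
    """Dispatch coordinates and notify any interested displays."""
    for listener in listeners:
        listener.on_coordinates_sent(drone_id)

feed = DroneStatusFeed()
send_coordinates("166", [feed])
assert feed.focused_drone == "166"  # the feed now shows drone 166's diagnostics
```

One gesture from Vika would then do double duty: routing Jack and cueing up the diagnostics she’ll need while he works.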

Clear messaging in real-time…almost

Overall, the messaging for the drone status feed is clear and simple. As seen in the drone deployment table, the dataset for operational drones includes the drone ID number and a rotating view of the drone schematic. If the drone is down, the ID number fades and the drone schematic is replaced with a flashing red message stating that the drone is offline. Then, when the drone is repaired, the display immediately updates to show that everything is operational again.

This is one of the basic fundamentals of good user interface design. Don’t let the UI get in the way and distract the user.

Keep it simple.