Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons not to show astronauts their own biometric data during stressful extravehicular missions, but there is still the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.
Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.
There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.
The one example of sustenance in the survey appears in Sunshine, where Captain Kaneda takes a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.
Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.
Remote monitoring of people in spacesuits is common enough to be a trope; see the Medical chapter in Make It So for more on biometrics in sci-fi.
There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.
Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Starlord’s helmet.
If such tech were available, you’d imagine it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen, or dangerous impurities in the air), but we don’t see them. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.
What do we see in the real world?
Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.
The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides life support to the astronaut and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.
The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.
The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held close to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water by exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the degree to which their body is cooled, with the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.
The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.
Back to sci-fi
So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.
Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios while the examiner watches carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid in this examination, blade runners use a portable device called the Voight-Kampff machine, named, presumably, for its inventors.
The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.
The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.
In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.
-2. Wouldn’t a genetic test make more sense?
If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.
-1. Wouldn’t an fMRI make more sense?
An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal cortex (the brain structures responsible for these responses). Certainly more expensive, but also more certain.
0. Wouldn’t a metal detector make more sense?
If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.
(OK, those aren’t interface issues but seriously wtf. Onward.)
1. Labels, people
Controls need labels. Especially when the buttons have no natural affordance and the cost of experimenting to discover their functions is high. Remembering the functions of unlabeled controls adds to the cognitive load of a user who should be focusing on the person across the table. At least the illuminated button helps signal state, so that, at least, is something.
2. It should be less intimidating
The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.
I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.
2a. Holden should be less intimidating and not tip his hand
While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing (Narrator: They aren’t), he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil, and the iris contracts. Holden sees this, gets all “ok, replicant,” and becomes hostile toward Leon.
In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.
3. It should display history
The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.
4. It should track the subject’s eyes
Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.
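The tracking suggestion can be sketched in miniature. This is a hedged toy example, not anything from the film: treat the pupil as the centroid of the darkest pixels in a grayscale frame, so the camera can re-center on the eye as the subject shifts in their seat. A production system would use real computer-vision tooling; the threshold and frame format here are illustrative assumptions.

```python
# Toy pupil tracker: find the centroid of the darkest pixels in a
# grayscale frame (values 0-255). The darkness threshold is invented.

def pupil_centroid(frame, dark_threshold=50):
    """Return the (row, col) centroid of pixels darker than the
    threshold, or None if no pupil-dark pixels are found."""
    rows, cols, count = 0, 0, 0
    for r, line in enumerate(frame):
        for c, value in enumerate(line):
            if value < dark_threshold:
                rows += r
                cols += c
                count += 1
    if count == 0:
        return None
    return (rows / count, cols / count)

# A 5x5 frame: bright background (200) with a dark 2x2 "pupil"
# at rows 1-2, cols 2-3.
frame = [[200] * 5 for _ in range(5)]
for r in (1, 2):
    for c in (2, 3):
        frame[r][c] = 10

print(pupil_centroid(frame))  # (1.5, 2.5)
```

Feeding each new frame through a function like this lets the armature servo toward the centroid instead of demanding the subject hold still.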
5. Really? A bellows?
The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.
6. It should show the actual subject’s eye
The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. His is a great review.
7. It should visualize things in ways that make it easy to detect differences in key measurements
Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?
The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.
8. The machine should, you know, help them
The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.
People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.
So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.
Note that the illustration shows a radius, but the display reports diameter. That buys some efficiencies in the final interface.
Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.
There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
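That lower-boundary logic is simple enough to sketch. A minimal example with invented values; a real machine would draw its human norms from empirical data:

```python
# Flag the samples where measured pupillary diameter dips below the
# expected human minimum for the current stimulus. All numbers invented.

def flag_below_threshold(diameters_mm, human_min_mm):
    """Return the indices of samples falling below the human lower bound."""
    return [i for i, d in enumerate(diameters_mm) if d < human_min_mm]

samples = [4.1, 4.0, 3.2, 3.1, 3.9]  # hypothetical diameters over time
print(flag_below_threshold(samples, human_min_mm=3.5))  # [2, 3]
```

The flagged indices are exactly the columns the interface should highlight, so the blade runner can glance back and see where the subject under-reacted.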
I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.
But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.
Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool) and speech-to-text of the words being spoken. To ensure that the blade runner’s administration of the test is not unduly influencing the results, let’s add an overlay of the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional who does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.
Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.
Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.
For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
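That both-signals-at-once overlay boils down to an AND over the two thresholded series. A sketch, again with invented numbers standing in for the paired measurements:

```python
# Shade only the time columns where BOTH pupillary response and blush
# response (skin temperature) fall below human norms. Values invented.

def trouble_columns(pupil_mm, skin_temp_c, pupil_min, temp_min):
    """Return indices where both signals are below their thresholds."""
    return [i for i, (p, t) in enumerate(zip(pupil_mm, skin_temp_c))
            if p < pupil_min and t < temp_min]

pupil = [4.0, 3.2, 3.1, 4.1]
temp = [34.5, 33.0, 34.6, 33.1]
print(trouble_columns(pupil, temp, pupil_min=3.5, temp_min=34.0))  # [1]
```

Requiring both signals to agree keeps the overlay rare enough that it reads as a genuine alert rather than constant noise.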
If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.
Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.
Lying to Leon
There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.
The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.
On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutter so many sci-fi interfaces. It’s there to be inscrutable, but to convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.
This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.
Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.
In addition to its registers, OmniBro also makes fast-food vending machines. The one we see in the film is a free-standing kiosk with five main panels, one for each of the angry star’s severed arms. A nice touch that flies by in the edit is that the roof of the kiosk is a giant star, but one of the arms has broken off and fallen onto a car. Its owners have clearly just abandoned it, and things have been like this long enough for the car to rust.
Each panel in the kiosk has:
A small screen and two speakers just above eye level
Two protruding, horizontal slots of unknown purpose
A metallic nozzle
A red laser barcode scanner
A 3×4 panel of icons (similar in style to what’s seen in the St. God’s interfaces) in the lower left. Sadly we don’t see these buttons in use.
But for the sake of completeness, the icons are, in western reading order:
No money, do not enter symbol, question
Taco, plus, fries
Burger, pizza, sundae
Asterisk, up-down, eye
The bottom has an illuminated dispenser port.
Joe approaches the kiosk and, hungry, watches to figure out how people get food. He hears a transaction in progress, with the kiosk telling the customer, “Enjoy your EXTRA BIG ASS FRIES.” She complains, saying, “You didn’t give me no fries. I got an empty box.”
She reaches into the take-out port and fishes inside to see if the fries just got stuck. The kiosk asks her, “Would you like another EXTRA BIG ASS FRIES?” She replies loudly into the speaker, “I said I didn’t get any.” The kiosk ignores her and continues, “Your account has been charged. Your balance is zero. Please come back when you can afford to make a purchase.” The screen shows her balance as a big dollar sign with a crossout circle over it.
Frustrated, she bangs the panel, and a warning screen pops up, reading, “WARNING: Carl’s Junior frowns upon vandalism.”
She hits it again, saying, “Come on! My kids’re starving!” (Way to take it super dark, there, Judge.) Another screen reads, “Please step back.”
A mist sprays from the panel into her face as the voice says, “This should help you calm down. Please come back when you can afford to make a purchase! Your kids are starving. Carl’s Junior believes no child should go hungry. You are an unfit mother. Your children will be placed in the custody of Carl’s Junior.”
She stumbles away, and the kiosk wraps up the whole interaction with the tagline, “Carl’s Junior: Fuck you. I’m eating!” (This treatment of brands, it should be noted, is why the film never got broad release. See the New York Times article, or, if you can’t get past the paywall, the Mental Floss listicle, number seven.)
Joe approaches the kiosk and sticks a hand up the port. The kiosk recognizes the newcomer and says, “Welcome to Carl’s Junior. Would you like to try our EXTRA BIG ASS TACO, now with more MOLECULES?” Then the cops arrive to arrest the mom.
Now, I don’t think Judge is saying that automation is stupid. (There are a few automated technologies in the film that work just fine.) I think he’s noting that poorly designed—and inhumanely designed—systems are stupid. It’s a reminder for all of us to consider the use cases where things go awry, and design for graceful degradation. (Noting the horrible pun so implied.) If we don’t, people can lose money. People can go hungry. The design matters.
I have questions
The interface inputs raise a lot of questions that are just unanswerable. Are there only four things on the menu? Why are they distributed amongst other categories of icons? Is “plus” the only customization? Does it mean another of the same thing I just ordered, or a larger size? What have I ordered already? How much is my current total? Do I have enough to pay for what I have ordered? There are all sorts of purchase-path best practices being violated or left unaddressed by the scene. Of course. It’s not a demo. A lot of sci-fi scenes involve technology breaking down.
Just to make sure I’m covering the bases, here, let me note what I hope is obvious. No automation system/narrow AI is perfect. Designers and product owners must presume that there will be times when the system fails—and the system itself does not know about it. The kiosk thinks it has delivered EXTRA BIG ASS FRIES, but it’s wrong. It’s delivered an empty box. It still charged her, so it’s robbed her.
We should always be testing, finding, and repairing these failure points in the things we help make. But we should also design an easy recourse for when the automation fails and doesn’t know it. This could be a human attendant (or even a button that connects to a remote human operator who could check the video feed) to see that the woman is telling the truth, mark that panel as broken, and use overrides to get her EXTRA BIG ASS FRIES from one of the functioning panels, or refund her money to, I guess, go get a tub of Flaturin instead? (The terrible nutrition of Idiocracy is yet another layer for some speculative sci-fi nutrition blog to critique.)
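The recourse argued for here could be sketched as a transaction flow that checks the balance up front and verifies the dispense (say, via a weight sensor in the port) before charging, escalating to a human operator when verification fails. Every name and threshold below is invented for illustration:

```python
# Hypothetical vending flow: prevent problems (balance check first),
# verify the dispense before charging, and escalate on failure.

def complete_order(balance, price, dispensed_weight_g, expected_weight_g):
    """Return (new_balance, status) for one vending transaction."""
    if balance < price:
        # Prevention beats remedy: refuse before dispensing anything.
        return balance, "declined: insufficient funds"
    if dispensed_weight_g < 0.9 * expected_weight_g:
        # Empty or short box: do NOT charge; notify a human operator.
        return balance, "escalate: dispense failed, operator notified"
    return balance - price, "charged"

print(complete_order(5.00, 3.00, 310, 300))  # (2.0, 'charged')
print(complete_order(5.00, 3.00, 0, 300))    # escalates, no charge
```

The key design choice is the ordering: the charge is the last step, applied only after the machine has physical evidence the customer got their food.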
Again, privacy. Again, respectfulness.
The financial circumstances of a customer are not the business of any other customer. The announcement and unmistakable graphic could be an embarrassment. Adding the disingenuous 🙁 emoji when it was the damned machine’s fault only adds insult to injury. We have to make sure not to get cute when users are faced with genuine problems.
Benefit of the doubt
Another layer of the stupid here is that OmniBro has the sensors to detect frustrated customers. (Maybe it’s a motion sensor in the panel or dispense port. Possibly emotion detectors in the voice input.) But what it does with that information is revolting. Instead of presuming that the machine has made some irritating mistake, it presumes a hostile customer, and not only gasses her into a stupor while it calls the cops, it is somehow granted the authority to take her children as indentured servants for the problems it helped cause. If you have a reasonable customer base, it’s better for the customer experience, for the brand, and for the society in which it operates to give customers the benefit of the doubt rather than the presumption of guilt.
Prevention > remedy
Another failure of the kiosk is that it discovers that she has no money only after it believes it has dispensed EXTRA BIG ASS FRIES. As we see elsewhere in the film, the OmniBro scanners work accurately at a huge distance even while the user is moving along at car speeds. It should be able to read customers in advance to know that they have no ability to pay for food. It should prevent problems rather than try (and, as it does here, fail) to remedy them. At the most self-serving level, this helps avoid the potential loss or theft of food.
At a collective level, a humane society would still find some way to not let her starve. Maybe it could automatically deduct from a basic income. Maybe it could provide information on where a free meal is available. Maybe it could just give her the food and assign a caseworker to help her out. But the citizens of Idiocracy abide a system where, instead, children can be taken away from their mothers and turned into indentured servants because of a kiosk error. It’s one thing for the corporations and politicians to be idiots. It’s another for all the citizens to be complicit in that, too.
Fighting American Idiocracy
Since we’re on the topic of separating families: Since the fascist, racist “zero-tolerance” policy was enacted as a desperate attempt to do something in light of his failed and ridiculous border wall promise, around 3000 kids were horrifically and forcibly separated from their families. Most have been reunited, but as of August there were at least 500 children still detained, despite the efforts of many dedicated resisters. The 500 include, according to the WaPo article linked below, 22 kids under 5. I can’t imagine the permanent emotional trauma it would be for them to be ripped from their families. The Trump administration chose to pursue scapegoating to rile a desperate, racist base. The government had no reunification system. The Trump administration ignored Judge Sabraw’s court-ordered deadline to reunite these families. The GOP largely backed him on this. They are monsters. Vote them out. Early voting is open in many states. Do it now so you don’t miss your chance.
If you’re reading these chronologically, let me note here that I had to skip Bea Arthur’s marvelous turn as Ackmena, as she tends the bar and rebuffs the amorous petitions of the lovelorn, hole-in-the-head Krelman, before singing her frustrated patrons out of the bar when a curfew is announced. To find the next interface of note, we have to forward to when…
Han and Chewie arrive, only to find a Stormtrooper menacing Lumpy. Han knocks the blaster out of his hand, and when the Stormtrooper dives to retrieve it, he falls through the bannister of the tree house and to his death.
Why aren’t these in any way affiiiiixxxxxxeeeeeeddddddd?
Han enters the home and wishes everyone a Happy Life Day. Then he bugs out.
But I still have to return for the insane closing number. Hold me.
After ditching Chewie, Boba Fett heads to a public video phone to make a quick report to his boss who turns out to be…Darth Vader (this was a time long before the Expanded Universe/Legends, so there was really only one villain to choose from).
To make the call, he approaches an alcove off an alley. The alcove has a screen with an orange bezel, and a small panel below it with a 12-key number panel to the left, a speaker, and a vertical slot. Below that is a set of three phone books. For our young readers, phone books are an ancient technology in which telephone numbers were printed in massive books, and copies kept at every public phone for reference by a caller.
On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)
He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.
A stay-state toggle switch
A stay-state rocker switch
The lower two dials have rings under them on the panel that accentuate their color.
The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, but written in a constructed orthography unmentioned in the Wookieepedia. In the upper center and upper right are unchanging labels. Some triangular label sits in the lower-left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of it.
The Galactica’s fighter launch catapults are each controlled by a ‘shooter’ in an armored viewing pane. There is one ‘shooter’ for every two catapults. To launch a Viper, he has a board with a series of large twist-handles, a status display, and a single button. We can also see several communication devices:
Ear-mounted mic and speaker
Board mounted mic
Phone system in the background
Each of these could relate to one of several lines of communication:
The Viper pilot
Any crew inside the launch pod
Crew just outside the launch pod
CIC (for strategic status updates)
Other launch controllers at other stations
‘On call’ rooms for replacement operators
Each row on the launch display appears to conform to some value coming off of the Viper or the Galactica’s magnetic catapults. The ‘shooter’ calls off Starbuck’s launch three times due to some value he sees on his status board (fluctuating engine power right before launch).
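As a sketch of the kind of check the shooter might be applying by eye, consider the fluctuating-engine-power call-off. Everything here is invented for illustration (the show never specifies values or thresholds); it only makes concrete the idea that the board shows raw values and the human applies the judgment:

```python
# Hypothetical readiness check behind the shooter's status board.
# The threshold is pure invention; the show only implies that
# "fluctuating engine power" is grounds to call off a launch.
def launch_ready(power_samples, max_fluctuation=0.05):
    """Return False if engine power varies too much right before
    the catapult fires."""
    lo, hi = min(power_samples), max(power_samples)
    return (hi - lo) <= max_fluctuation * hi

# Steady power reads as ready; a wobble like Starbuck's does not.
print(launch_ready([1.00, 1.00, 0.99]))  # steady
print(launch_ready([1.00, 0.80, 1.00]))  # fluctuating
```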
We do not see any other data inputs. Something like a series of closed-circuit cameras could show him an exterior view of the entire Viper, supplementing the sensor data.
When Starbuck is ready to launch on the fourth try, the ‘shooter’ twists the central knob and, at the same time and with the same hand, pushes down a green button. The moment the ‘shooter’ hits the button, Starbuck’s Viper is launched into space.
There are other twist knobs across the entire board, but these do not appear to conform directly to the act of launching the Viper, and they do not act like the central knob. They appear instead to be switches, where turning them from one position to another locks them in place.
There is no obvious explanation for the number of twist knobs, but each one might conform to an electrical channel to the catapult, or some part of the earlier launch sequence.
Nothing in the launch control interprets anything for the ‘shooter’. He is given information, then expected to interpret it himself. From what we see, this information is basic enough not to cause a problem, and it allows him to make decisions quickly.
Without networking the launch system together so that it can poll its own information and make its own decisions, there is little that can improve the status indicators. (And networking is made impossible in this show because of Cylon hackers.) The board is easily visible from the shooter chair, each row conforms directly to information coming in from the Viper, and they relate directly to the task at hand.
The most dangerous task the shooter performs is deciding to launch the Viper into space. If either the Galactica or the Viper isn’t ready for that action, it could cause major damage to the Viper and the launch systems.
A two-step control is the right method here: the system requires two distinct motions (a twist-and-hold, then a separate and distinct *click*). This effectively confirms that the shooter actually wants to send the Viper into space.
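In code, that two-step interlock might look something like this. It’s a sketch, not a claim about the Galactica’s actual wiring; all names are hypothetical:

```python
class LaunchControl:
    """Two-step catapult interlock: the launch button fires
    only while the arming knob is twisted and held."""

    def __init__(self):
        self.knob_held = False
        self.launched = False

    def twist_and_hold(self):
        self.knob_held = True

    def release_knob(self):
        self.knob_held = False

    def press_button(self):
        # The button alone does nothing; both motions are required.
        if self.knob_held:
            self.launched = True
        return self.launched

ctl = LaunchControl()
print(ctl.press_button())   # button alone: no launch
ctl.twist_and_hold()
print(ctl.press_button())   # knob held + button: launch
```

The point of the design is that neither motion on its own is sufficient, so an accidental bump can’t fire the catapult.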
To improve this control, the twist and button could be moved far enough apart (see “Two-Hand Controls”) that it requires two hands to operate. That way, there is no doubt that the shooter intends to activate the catapult.
If the controls are separated like that, it would take some effort to make sure the two controls stay visually connected across the board, through color, size, or layout. Right now that connection would be complicated by the final twist control’s similarity to the other handles, which do different jobs.
Changing these controls to large switches or differently shaped handles would make the catapult controls less confusing to use.
The phone system aboard the Galactica is a hardwired system that can be used in two modes: point-to-point and one-to-many. The phones have an integrated handset wired to a control box and speaker. The buttons on the control box are physical keys, and there are no automatic voice controls.
In point-to-point mode, the phones act as a typical communication system, where one station can call a single other station. In one-to-many mode, the phones are used as a public address system, where a single station can broadcast to the entire ship.
The phones are also shown acting as broadcast speakers, handling several kinds of audio feeds:
Ship-wide Alerts (“Action Stations!”)
Local alarms (Damage control/Fire inside a specific bulkhead)
Radio Streams (pilot audio inside the launch prep area)
Addresses (calling a person to the closest available phone)
Each station is independent and generic. Most phones are located in public spaces or large rooms, with only a few in private areas. These private phones serve the senior staff in their private quarters, or at their stations on the bridge.
In each case, the phone stations are used as kiosks, where any crewmember can use any phone. It is implied that there is a communications officer acting as a central operator for when a crewmember doesn’t know the appropriate phone number, or doesn’t know the current location of the person they want to reach.
There is not a single advanced piece of technology inside the phone system. The phones act as a dirt-simple way to communicate with a place, not a person (the person just happens to be there while you’re talking).
The largest disadvantage of this system is that it provides no assistance for its users: busy crewmembers of an active warship. These crew can be expected to need to communicate in the heat of battle, and quickly relay orders or information to a necessary party.
This is easy for the lower levels of crewmembers: information will always flow up to the bridge or a secondary command center. For the officers, this task becomes more difficult.
First, there are several crewmember classes that could be anywhere on the ship.
Without broadcasting to the entire ship, it could be extremely difficult to locate these specific crewmembers in the middle of a battle for information updates or new orders.
The primary purpose of the Galactica was to fight the Cylons: sentient robots capable of infiltrating networked computers. This meant that every system on the Galactica was made as basic as possible, without regard to its usability.
The Galactica’s antiquated phone system does prevent Cylon infiltration of a communications network aboard an active warship. Nothing the phone system does requires executing outside pieces of software.
A very basic upgrade to the phone system that could provide better usability would be a near-field tag system for each crew member. A passive near-field chip could be read by a non-networked phone terminal each time a crew member came within range. The phone could then send a basic update to a central board in the Communications Center, informing the operators of where each crewmember is. Such a system would not provide an attack surface (a weakness to infiltrate) for the enemy, and it would make finding officers and crew in an emergency both easier and faster: major advantages for a warship.
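A minimal sketch of that central location board, assuming each terminal only pushes one-way (terminal, crew member) reads and runs no outside software (all names are invented for illustration):

```python
class LocationBoard:
    """Central board in the Communications Center. Terminals push
    one-way tag reads; the board just remembers the last terminal
    that saw each crew member. No code runs on the phones."""

    def __init__(self):
        self.last_seen = {}

    def report_read(self, terminal_id, crew_id):
        # Called when a terminal reads a passive near-field chip.
        self.last_seen[crew_id] = terminal_id

    def locate(self, crew_id):
        return self.last_seen.get(crew_id, "unknown")

board = LocationBoard()
board.report_read("deck-9-phone-2", "Starbuck")
print(board.locate("Starbuck"))
```

Because the terminals only ever transmit and never execute anything they receive, there is nothing for a Cylon to infiltrate.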
The near-field sensors would add a second benefit: only registered crew could access specific terminals. For example, the Captain and senior staff could be the only ones allowed to use the central phone system.
Brutally efficient hardware
The phone system succeeds in its hardware. Each terminal has an obvious speaker that makes a distinct sound each time the terminal is looking for a crewmember. When the handset is in use, it is easy to tell which side is up after a very short amount of training (the cable always comes out the bottom).
It is also obvious when the handset is active or inactive. When a crewmember pulls the handset out of its terminal, the hardware makes a distinctive audible and physical *click* as the switch opens a channel. The handset also slots firmly back into the terminal, making another *click* when the switch deactivates. This is very similar to a modern-day gas pump.
With a brief amount of training, it is almost impossible to mistake when the handset activates and deactivates.
For a ship built at a rapid pace in the heat of war, the designers focused on what they could design quickly and efficiently. There is little in the way of creature comforts in the phone interface.
Minor additions in technology or integrated functionality could have significantly improved the interface of the phone system, and may have been integrated into future ships of the Galactica’s line. Unfortunately, we never see if the military designers of the Galactica learned from their haste.
Hello, readers. Hope your Life Days went well. The blog is kicking off 2016 by continuing to take the Star Wars universe down another peg, here, at this heady time of its revival. Yes, yes, I’ll get back to The Avengers soon. But for now, someone’s in the kitchen with Malla.
After she loses 03:37 of her life calmly eavesviewing a transaction at a local variety shop, she sets her sights on dinner. She walks to the kitchen and rifles through some translucent cards on the counter. She holds a few up to the light to read something on them, doesn’t like what she sees, and picks up another one. Finding something she likes, she inserts the card into a large flat panel display on the kitchen counter. (Don’t get too excited about this being too prescient. WP tells me models existed back in the 1950s.)
In response, a prerecorded video comes up on the screen from a cooking show, in which the quirky and four-armed Chef Gourmaand shows how to prepare the succulent “Bantha Surprise.”
And that’s it for the interaction. None of the four dials on the base of the screen are touched throughout the five minutes of the cooking show. It’s quite nice that she didn’t have to press play at all, but that’s a minor note.
The main thing to talk about is how nice the physical tokens are as a means of finding a recipe. We don’t know exactly what’s printed on them, but we can tell it’s enough for her to pick through, consider, and make a decision. This is nice for the very physical environment of the kitchen.
This sort of tangible user interface, card-as-media-command hasn’t seen a lot of play in the scifiinterfaces survey, and the only other example that comes to mind is from Aliens, when Ripley uses Carter Burke’s calling card to instantly call him AND I JUST CONNECTED ALIENS TO THE STAR WARS HOLIDAY SPECIAL.
Of course an augmented reality kitchen might have done even more for her, like…
Cross-referencing ingredients on hand (say it with me: slab of tender Bantha loin) with food preferences, family and general ratings, budget, recent meals to avoid repeats, health concerns, and time constraints to populate the tangible cards with choices that fit the needs of the moment, saving her from even having to consider recipes that won’t work;
Making the material of the cards opaque so she can read them without holding them up to a light source;
Augmenting the surfaces with instructional graphics (or even the air around her with volumetric projections) to show her how to do things in situ rather than having to keep an eye on an arbitrary point in her kitchen;
Slowing down when it was clear Malla wasn’t keeping up, or automatically translating from a four-armed to a two-armed description;
Showing a visual representation of the whole process and the current point within it;
…but then Harvey wouldn’t have had his moment. And for your commitment to the bit, Harvey, we thank you.
Jennifer is amazed to find a window-sized video display in the future McFly house. When Lorraine arrives at the home, she picks up a remote to change the display. We don’t see it up close, but it looks like she presses a single button to cycle the scene from a sculpted garden through a beach sunset, a cityscape, and a windswept mountaintop. It’s a simple interface, though perhaps more work than necessary.
We don’t know how many scenes are available, but having to click one button to cycle through all of them could get very frustrating if there are more than, say, three. Adding a selection ring around the button would let the display go from the current scene to a menu from which the next one could be selected directly.
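To make the frustration concrete, here’s the one-button model in miniature. The scene list is a guess from what’s on screen; the point is that a single button can only move forward, so “going back” one scene means pressing through every other scene first:

```python
# Sketch of the one-button scene changer. The scene list is an
# assumption based on the four scenes glimpsed in the film.
SCENES = ["sculpted garden", "beach sunset",
          "cityscape", "windswept mountaintop"]

def next_scene(current_index):
    # One button = forward-only cycling, wrapping at the end.
    return (current_index + 1) % len(SCENES)

def presses_to_reach(current_index, target_index):
    # Stepping "back" one scene costs len(SCENES) - 1 presses.
    return (target_index - current_index) % len(SCENES)

print(next_scene(3))                 # wraps back to the garden
print(presses_to_reach(1, 0))        # one scene "back" = 3 presses
```

With a selection ring (a menu), every scene is one gesture away regardless of the current one.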