Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.
One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never requires you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the controls and displays have to be made wearable, like wearable control panels. We might also expect to see some tunnel-in-the-sky displays to help with navigation. We'd also want to see some AI safeguard features to return the spacewalker to safety when things go awry. (Narrator: We don't.)
In Stowaway (2021), astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment would need to be modified for use with astronauts' thick gloves.
Sticky feet (and hands)
Though it’s not extravehicular, I have to give a shout-out to 2001: A Space Odyssey (1968), where we see a flight attendant manage her position in microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on Earth.
With magnetic boots, seen in Destination Moon, the wearer simply walks around, managing the slight awkwardness of having to pull a foot up with extra force and letting it snap back down on its own.
Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.
Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.
In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.
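The film never explains how the boots sense the wearer’s gait, but the behavior described above can be sketched as a simple control function. Everything here, the sensor, the residual-hold constant, and the linear ramp, is my assumption, not anything shown on screen:

```python
# Hypothetical sketch of gait-sensitive magnetic boots, as depicted in
# Star Trek: First Contact. Assumes a pressure sensor in the sole that
# reports how much of the wearer's weight is on the boot.

def magnet_strength(enabled: bool, heel_pressure: float) -> float:
    """Return electromagnet power, 0.0-1.0.

    heel_pressure: normalized load on the sole, 0.0 (boot fully
    lifted) to 1.0 (full body weight on the boot).
    """
    if not enabled:
        # Magnets off entirely, e.g., for dramatic leaps away from Borg.
        return 0.0
    # Full grip when planted; ease off as the wearer lifts the boot,
    # but keep a small residual hold so the foot snaps back down.
    RESIDUAL = 0.15  # arbitrary placeholder value
    clamped = max(0.0, min(1.0, heel_pressure))
    return RESIDUAL + (1.0 - RESIDUAL) * clamped
```

The residual hold is what would produce the “snap back down” effect described for the Destination Moon boots, while the ramp is what would give First Contact’s wearers their more natural gait.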
Star Trek: Discovery also included this technology, but with what appears to be gestural activation and cool glowing red dots on the sides and back of the heel. The back of each heel has a stack of red lights that count down to when the magnets turn off, as, I guess, a warning to anyone around that the wearer is about to be “air” borne.
Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.
If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.
In the film Mission to Mars, the manned maneuvering unit (MMU) is based loosely on NASA’s MMU. A nice thing about the device is that, unlike the other controlled-propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different: the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward, rather than with a joystick on an armrest. This seems like a closer mapping, but also more prone to error from accidental touching or bumping into something.
The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.
If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see one interface where a spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.
For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.
Iron Man’s suits employ some Phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.
All in all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and that’s largely derived from NASA models. It’s an opportunity for filmmakers, should the needs of the plot allow, to give this topic some attention.
Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons to not show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.
Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.
There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.
The one example of sustenance in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. It is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.
Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.
Remote monitoring of people in spacesuits is common enough to be a trope, but it has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.
There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.
Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Starlord’s helmet.
If such tech were available, you’d imagine that it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don’t see them. Given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.
What do we see in the real world?
Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.
The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides the life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.
The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.
The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held closely to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water with exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the degree to which his body is cooled, using the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.
The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.
Back to sci-fi
So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.
When Agent Ross is shot in the back during Klaue’s escape from the Busan field office, T’Challa stuffs a kimoyo bead into the wound to staunch the bleeding, but the wounds are still serious enough that the team must bring him back to Wakanda for healing. They float him to Shuri’s lab on a hover-stretcher.
The hover-stretcher gets locked into place inside a bay. The bay is a small room in the center of Shuri’s lab, open on two sides. The walls are covered in a gray pattern suggesting a honeycomb. A bas-relief volumetric projection displays some medical information about the patient like vital signs and a subtle fundus image of the optic nerve.
Shuri holds her hand flat and raises it above the patient’s chest. A volumetric display of nine of his thoracic vertebrae rises up in response. One of the vertebrae is highlighted in bright red. A section of the wall display shows the same information in 2D, cyan with orange highlights. That section slides out from the wall to draw observers’ attention. Hexagonal tiles flip behind the display for some reason, but produce no change in the display.
Shuri reaches her hands up to the volumetric vertebrae, pinches her forefingers and thumbs together, and pulls them apart. In response, the space between the vertebrae expands, allowing her to see the top and bottom of the body of the vertebra.
She then turns to the wall display, and reading something there, tells the others that he’ll live. Her attention is pulled away with the arrival of Wakabe, bringing news of Killmonger. We do not see her initiate a treatment in the scene. We have to presume that she did it between cuts. (There would have to be a LOT of confidence in an AI’s ability to diagnose and determine treatment before they would let Griot do that without human input.)
We’ll look more closely at the hover-stretcher display in a moment, but for now let’s pause and talk about the displays and the interaction of this beat.
A lab is not a recovery room
This doesn’t feel like a smart environment to hold a patient. We can bypass a lot of the usual hospital concerns of sterilization (it’s a clean room) or readily-available equipment (since they are surrounded by programmable vibranium dust controlled by an AGI) or even risk of contamination (something something AI). I’m mostly thinking about the patient having an environment that promotes healing: Natural light, quiet or soothing music, plants, furnishing, and serene interiors. Having him there certainly means that Shuri’s team can keep an eye on him, and provide some noise that may act as a stimulus, but don’t they have actual hospital rooms in Wakanda?
Why does she need to lift it?
The VP starts in his chest, but why? If it had started out as a “translucent skin” illusion, like we saw in Lost in Space (1998, see below), then that might make sense. She would want to lift it to see it in isolation from the distracting details of the body. But it doesn’t start this way, it starts embedded within him?!
It’s a good idea to have a representation close to the referent, to make for easy comparison between them. But to start the VP within his opaque chest just doesn’t make sense.
This is probably the wrong gesture
In the gestural interfaces chapter of Make It So, I described a pidgin that has been emerging in sci-fi which consisted of 7 “words.” The last of these is “Pinch and Spread to Scale.” Now, there is nothing sacred about this gestural language, but it has echoes in the real world as well. For one example, Google’s VR painting app Tilt Brush uses “spread to scale.” So as an increasingly common norm, it should only be violated with good reason. In Black Panther, Shuri uses spread to mean “spread these out,” even though she starts the gesture near the center of the display and pulls out at a 45° angle. This speaks much more to scaling than to spreading. It’s a mismatch and I can’t see a good reason for it. Even if it’s “what works for her,” gestural idiolects hinder communities of practice, and so should be avoided.
Better would have been pinching on one end of the spine and hooking her other index finger to spread it apart without scaling. The pinch is quite literal for “hold” and the hook quite literal for “pull.” This would let scale be scale, and “hook-pull” to mean “spread components along an axis.”
If we were stuck with the footage of Shuri doing the scale gesture, then it would have made more sense to scale the display, and fade the white vertebrae away so she could focus on the enlarged, damaged one. She could then turn it with her hand to any arbitrary orientation to examine it.
An object highlight is insufficient
It’s quite helpful for an interface that can detect anomalies to help focus a user’s attention there. The red highlight for the damaged vertebra certainly helps draw attention. Where’s the problem? Ah, yes. There’s the problem. But it’s more helpful for the healthcare worker to know the nature of the damage, what the diagnosis is, to monitor the performance of the related systems, and to know how the intervention is going. (I covered these in the medical interfaces chapter of Make It So, if you want to read more.) So yes, we can see which vertebra is damaged, but what is the nature of that damage? A slipped disc should look different from a bone spur, which should look different from a vertebra that’s been cracked or shattered by a bullet. The thing-is-red display helps for an instant read in the scene, but fails on close inspection and would be insufficient in the real world.
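To make the point concrete, here is a sketch of what a richer anomaly record might carry beyond “this vertebra is red.” All field names, values, and thresholds are hypothetical, chosen only to show that a bare highlight discards most of the information a clinician needs:

```python
# Sketch of a medical-display anomaly record. The highlight color is
# derived from the record, not stored as the record: the diagnosis,
# intervention, and progress fields survive for close inspection.

from dataclasses import dataclass

@dataclass
class Anomaly:
    location: str      # e.g., "T7 vertebra" (hypothetical label)
    diagnosis: str     # the nature of the damage, not just its presence
    severity: float    # 0.0-1.0, drives highlight urgency
    intervention: str  # current treatment, if any
    progress: float    # 0.0-1.0 toward healed

def highlight_color(a: Anomaly) -> str:
    """Map severity to a display color; thresholds are arbitrary."""
    if a.severity >= 0.7:
        return "red"
    return "orange" if a.severity >= 0.3 else "cyan"
```

With a structure like this, the same red glow can appear for the instant cinematic read, while the diagnosis and progress fields back the detailed views a real healthcare worker would need.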
Put critical information near the user’s locus of attention
Why does Shuri have to turn and look at the wall display at all? Why not augment the volumetric projection with the data that she needs? You might worry that it could obscure the patient (and thereby hinder direct observations) but with an AGI running the show, it could easily position those elements to not occlude her view.
Note that Shuri is not the only person in the room interested in knowing the state of things, so a wall display isn’t bad, but it shouldn’t be the only augmentation.
Lastly, why does she need to tell the others that Ross will live? If there were significant risk of his death, there should be unavoidable environmental signals: klaxons or medical alerts. So unless we are to believe T’Challa has never encountered a single medical emergency before (even in media), this is a strange thing for her to have to say. Of course we understand she’s really telling us in the audience that we don’t need to wonder about this plot development any more, but it would be better, diegetically, if she had confirmed the time-to-heal, like, “He should be fine in a few hours.”
Alternatively, it would be hilarious turnabout if the AI Griot had simply not been “trained” on data that included white people, and “could not see him,” which is why she had to manually manage the diagnosis and intervention, but that would have massive impact on the remote piloting and other scenes, so isn’t worth it. Probably.
Thoughts toward a redesign
So, all told, this interface and interaction could be much better fit-to-purpose. Clarify the gestural language. Lose the pointless flipping hexagons. Simplify the wall display for observers to show vitals, diagnosis and intervention, as well as progress toward the goal. Augment the physician’s projection with detailed, contextual data. And though I didn’t mention it above, of course the bone isn’t the only thing damaged, so show some of the other damaged tissues, and some flowing, glowing patterns to show where healing is being done along with a predicted time-to-completion.
Later, when Ross is fully healed and wakes up, we see a shot of the med table from above. Lots of cyan and orange, and *typography shudder* stacked type. Orange outlines seem to indicate controls, though they bear symbols rather than full labels, which we know are better for learnability and infrequent use. (Linguist nerds: Yes, Wakandan is alphabetic rather than logographic.)
These feel mostly like FUIgetry, with the exception of a subtle respiration monitor on Ross’ left. But it shows current state rather than tracked over time, so still isn’t as helpful as it could be.
Then when Ross lifts his head, the hexagons begin to flip over, disabling the display. What? Does this thing only work when the patient’s head is in the exact right space? What happens when they’re coughing, or convulsing? Wouldn’t a healthcare worker still be interested in the last-recorded state of things? This “instant-off” makes no sense. Better would have been just to let the displays fade to a gray to indicate that it is no longer live data, and to have delayed the fade until he’s actually sitting up.
All told, the Wakandan medical interfaces are the worst of the ones seen in the film. Lovely, and good for quick narrative hit, but bad models for real-world design, or even close inspection within the world of Wakanda.
MLK Day Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives.
Today is Martin Luther King Day. Normally there would be huge gatherings and public speeches about his legacy and the current state of civil rights. But the pandemic is still raging, and with the Capitol in Washington, D.C. having seen just last week an armed insurrection by supporters of outgoing and pouty loser Donald Trump (in case that WP article hasn’t been moved yet, here’s the post under its watered-down title), there are worries about additional racist terrorism and violence.
So today we celebrate virtually, by staying at home, re-experiencing his speeches and letters, and listening to the words of black leaders and prominent thinkers all around us, reminding us of the arc of the moral universe, and all the work it takes to bend it toward justice.
With the Biden team taking the reins on Wednesday, and Kamala Harris as our first female Vice President of color, things are looking brighter than they have in 4 long, terrible years. But Trump would have gotten nowhere if there hadn’t been a voting bloc and party willing to indulge his racist fascism. There’s still much more to do to dismantle systemic racism in the country and around the world. Let’s read, reflect, and, using whatever platforms and resources we are privileged to have, act.
I presume my readership are adults. I honestly cannot imagine this site has much to offer the 3-to-8-year-old. That said, if you are less than 8.8 years old, be aware that reading this will land you FIRMLY on the naughty list. Leave before it’s too late. Oooh, look! Here’s something interesting for you.
For those who celebrate Yule (and the very hybridized version of the holiday that I’ll call Santa-Christmas to distinguish it from Jesus-Christmas or Horus-Christmas), it’s that one time of year where we watch holiday movies. Santa features in no small number of them, working against the odds to save Christmas and Christmas spirit from something that threatens it. Santa accomplishes all that he does by dint of holiday magic, but increasingly, he has magic-powered technology to help him. These technologies are different for each movie in which they appear, with different sci-fi interfaces, which raises the question: Who did it better?
Unraveling this stands to be even more complicated than usual sci-fi fare.
These shows are largely aimed at young children, who haven’t developed the critical thinking skills to doubt the core premise, so the makers don’t have much pressure to present wholly-believable worlds. The makers also enjoy putting in some jokes for adults that are non-diegetic and confound analysis.
These magical technologies are just as speculative as those in sci-fi, but makers cannot presume that their audience are sci-fi fans familiar with those tropes. And things can’t seem too technical.
The sci in this fi is magical, which allows makers to do all sorts of hand-wavey things about how it’s doing what it’s doing.
Many of the choices are whimsical and serve to reinforce core tenets of the Santa Claus mythos rather than any particular story or worldbuilding purpose.
But complicated-ness has rarely cowed this blog’s investigations before, why let a little thing like holiday magic do it now?
A Primer on Santa
I have readers from all over the world. If you’re from a place that does not celebrate the Jolly Old Elf, a primer should help. And if you’re from a non-USA country, your Saint Nick mythos will be similar to, but not the same as, the one these movies are based on, so a clarification should help. To that end, here’s what I would consider the core of it.
Santa Claus is a magical, jolly, heavyset old man with white hair, mustache, and beard who lives at the North Pole with his wife Mrs. Claus. The two are almost always Caucasian. He can alternately be called Kris Kringle, Saint Nick, Father Christmas, or Klaus. The Clement Clarke Moore poem calls him a “jolly old elf.”

He is aware of the behavior of children, and tallies their good and bad behavior over the year, ultimately landing them on the “naughty” or “nice” list. Santa brings the nice ones presents. (The naughty ones are canonically supposed to get coal in their stockings, though in all my years I have never heard of any kids actually getting coal in lieu of presents.) Children also hang special stockings, often on a mantle, to be filled with treats or smaller presents. Adults encourage children to be good in the fall to ensure they get presents. As December approaches, children write letters to Santa telling him what presents they hope for. Santa and his elves read the letters and make all the requested toys by hand in a workshop.

Then the evening of 24 DEC, he puts all the toys in a large sack, and loads it into a sleigh led by 8 flying reindeer. Most of the time there is a ninth reindeer up front with a glowing red nose named Rudolph. He dresses in a warm red suit fringed with white fur, big black boots, thick black belt, and a stocking hat with a furry ball at the end. Over the evening, as children sleep, he delivers the presents to their homes, where he places them beneath the Christmas tree for them to discover in the morning. Families often leave out cookies and milk for Santa to snack on, and sometimes carrots for the reindeer. Santa often tries to avoid detection for reasons that are diegetically vague.
There is no single source of truth for this mythos, though the current core text might be the 1823 C.E. poem, “A Visit from St. Nicholas” by Clement Clarke Moore. Visually, Santa’s modern look is often traced back to the depictions by Civil War cartoonist Thomas Nast, which the Coca-Cola Corporation built upon for their holiday advertisements in 1931.
There are all sorts of cultural conversations to have about normalizing a magical panopticon, the effect of hiding the actual supply chain, and what perpetuating this myth trains children for; but for now let’s stick to evaluating the interfaces in terms of Santa’s goals.
Given all of the above, we can say that the following are Santa’s goals.
Sort kids by behavior as naughty or nice
Many tellings have him observing actions directly
Manage the lists of names, usually on separate lists
Send toy requests to the workshop
Travel to kids’ homes
Find the most-efficient way there
Control the reindeer
Maintain air safety
Avoid air obstacles
Find a way inside and to the tree
Enjoy the cookies / milk
Deliver all presents before sunrise
For each child:
Know whether they are naughty or nice
If nice, match the right toy to the child
Stage presents beneath the tree
Avoid being seen
We’ll use these goals to contextualize the Santa interfaces.
Nearly every story tells of Santa working with other characters to save Christmas. (The metaphor that we have to work together to make Christmas happen is appreciated.) The challenges in the stories can be almost anything, but often include…
Inclement weather (usually winter, but Santa is a global phenomenon)
Air obstacles (Planes, helicopters, skyscrapers)
Ingress/egress into homes
Home security systems / guard dogs
Imdb.com lists 847 films tagged with the keyword “santa claus,” which is far too many to review. So I looked through “best of” lists (two are linked below) and watched those films for interfaces. There weren’t many. I even had to blend CGI and live-action shows, which I’m normally hesitant to do. As always, if you know of any additional shows that should be considered, please mention them in the comments.
After reviewing these films, the ones with Santa interfaces came down to four, presented below in chronological order.
The Santa Clause (1994)
This movie deals with the lead character, Scott Calvin, inadvertently taking on the “job” of Santa Claus. (If you’ve read Piers Anthony’s Incarnations of Immortality series, this plot will feel quite familiar.)
The sleigh he inherits has a number of displays that are largely unexplained, but little Charlie figures out that the center console includes a hot chocolate and cookie dispenser. There is also a radar, and far away from it, push buttons for fog, planes, rain, and lightning. There are several controls with Christmas bell icons associated with them, but the meaning of these are unclear.
This is the oldest of the candidates. Its interfaces are quite sterile and “tacked on” compared to the others, but they were novel for their time.
Fred Claus (2007)
This movie tells the story of Santa’s ne’er-do-well brother Fred, who has to work in the workshop for one season to work off bail money. While there, he winds up helping forestall a foreclosure by an underhanded supernatural efficiency expert, and un-estranging himself from his family. A really nice bit in this critically panned film is that Fred helps Santa understand that there are no bad kids, just kids in bad circumstances.
Fred is taken to the North Pole in a sled with switches that are very reminiscent of the ones in The Santa Clause. A funny touch is the “fasten your seatbelt” sign like you might see in a commercial airliner. The use of Lombardic Capitals font is a very nice touch given that much of modern Western Santa Claus myth (and really, many of our traditions) come from Germany.
This chamber is where Santa is able to keep an eye on children. (Seriously panopticon-y. They have no idea they’re being surveilled.) Merely by reading a child’s name and address, Santa summons a volumetric display of the child within the giant snowglobe. The naughtiest children’s names are displayed on a digital split-flap display, including their greatest offenses. (The nicest are as well, but we don’t get a close-up of it.)
The final tally is put into a large book that one of the elves manages from the sleigh while Santa does the actual gift-distribution. The text in the book looks like it was printed from a computer.
Arthur Christmas (2011)
In this telling, the Santa job is passed down patrilineally. The oldest Santa, Grandsanta, is retired. The dad, Malcolm, is the current acting Santa, and he has two sons. One is Steve, a by-the-numbers type into military efficiency and modern technology. The other, Arthur, is an awkward fellow with a semi-disposable job responding to letters. Malcolm currently pilots a massive, mile-wide spaceship from which ninja elves do the gift distribution. They have a lot of tech to help them do their job. The plot involves Arthur working with Grandsanta, using his old sleigh to get a last forgotten gift to a young girl before the sun rises.
To help manage loud pets in the home who might wake up sleeping people, this gun has a dial for common pets that delivers a treat to distract them.
Elves have face scanners which determine each kids’ naughty/nice percentage. The elf then enters this into a stocking-filling gun, which affects the contents in some unseen way. A sweet touch is when one elf scans a kid who is read as quite naughty, the elf scans his own face to get a nice reading instead.
The S-1 is the name of the spaceship sleigh at the beginning of the film (at the end it is renamed after Grandsanta’s sleigh). Its bridge is loaded with controls, volumetric displays, and even a Little Tree air freshener. It has a cloaking display on its underside, strikingly similar to the cloaking of the MCU S.H.I.E.L.D. helicarrier. (And this came out the year before The Avengers, I’m just sayin’.)
The north pole houses the command-and-control center, which Steve manages. Thousands of elves manage workstations here, and there is a huge shared display for focusing and informing the team at once when necessary. Smaller displays help elf teams manage certain geographies. Its interfaces fall mostly to comedy and trope, but are germane to the story beats.
One of the crisis scenarios that this system helps manage is for a “waker,” a child who has awoken and is at risk of spying Santa.
Grandsanta’s outmoded sleigh is named Eve. Its technology is much more from the early 20th century, with switches and dials, buttons and levers. It’s a bit janky and overly complex, but gets the job done.
One notable control on S-1 is this trackball with dark representations of the continents. It appears to be a destination selector, but we do not see it in use. It is remarkable because it is very similar to one of the main interface components in the next candidate movie, The Christmas Chronicles.
The Christmas Chronicles follows two kids who stow away on Santa’s sleigh on Christmas Eve. His surprise when they reveal themselves causes him to lose his magical hat and wreck his sleigh. They help him recover the items, finish his deliveries, and (well, of course) save Christmas just in time.
Santa’s sleigh enables him to teleport to any place on earth. The main control is a trackball location selector. Once he spins it and confirms that the city readout looks correct, he can press the “GO” button for a portal to open in the air just ahead of the sleigh. After traveling in an aurora borealis realm filled with famous landmarks for a bit, another portal appears. They pass through this and appear at the selected location. A small magnifying glass above the selection point helps with precision.
Santa wears a watch that measures not time, but Christmas spirit, which ranges from 0 to 100. In the bottom half, chapter rings and a magnifying window seem designed to show the date, with 12 and 31 sequential numbers, respectively. It’s not clear why it shows mid-May. A hemisphere in the middle of the face looks like it’s almost a globe, which might be a nice way to display and change time zone, but that may be wishful thinking on my part.
Santa also has a tracking device for finding his sack of toys. (Apparently this has happened enough times to warrant such a thing.) It is an intricate filigree over a cool green and blue glass. A light within blinks faster the closer the sphere is to the sack.
Since he must finish delivering toys before Christmas morning, the dashboard has a countdown clock with Nixie tube numbers showing hours, minutes, and milliseconds. They ordinarily glow cyan, but when time runs out, they turn red and blink.
This Santa also manages his list in a large book with lovely handwritten calligraphy. The kids whose gifts remain undelivered glow golden to draw his attention.
The hard problem here is that there are a lot of apples-to-oranges comparisons to make. Even though the mythos seems pretty locked down, each movie takes liberties with one or two aspects. As a result, not all these Santas are created equal. Calvin’s elves know he is completely new to his job and will need support. Christmas Chronicles Santa has perfect memory, magical abilities, and handles nearly all the delivery duties himself, unless he’s enacting a clever scheme to impart Christmas wisdom. Arthur Christmas has intergenerational technology and Santas who may not be magic at all and who have known their duty from their youths, but who rely on a huge army of shock-troop elves to make things happen. So it’s hard to name just one. But absent a point-by-point detailed analysis, there are two that really stand out to me.
Coverage of goals
Arthur Christmas has, by far, the most interfaces of any of the candidates, and the most coverage of the Santa family’s goals. Managing noisy pets? Check. Dealing with wakers? Check. Navigating the globe? Check. As far as thinking through speculative technology that assists its Santa, this film has the most.
Keeping the holiday spirit
I’ll confess, though, that extradiegetically, one of the purposes of annual holidays is to mark the passage of time. By trying to adhere to traditions as much as we can, time and our memory are marked by those things that we cannot control (like, say, a pandemic keeping everyone at home and hanging with friends and family virtually). So for my money, the thoroughly modern interfaces that flood Arthur Christmas don’t work that well. They’re so modern they’re not…Christmassy. Grandsanta’s sleigh Eve points to an older tradition, but it’s also clearly framed as outdated in the context of the story.
Compare this to The Christmas Chronicles, with its gorgeous steampunk-y interfaces that combine a sense of magic and mechanics. These are things that a centuries-old Santa would have built and used. They feel rooted in tradition while still helping Santa accomplish as many of his goals as he needs (in the context of his Christmas adventure for the stowaway kids). These interfaces evoke a sense of wonder, add significantly to the worldbuilding, and are what I’d rather have as a model for magical interfaces in the real world.
Of course it’s a personal call, given the differences, but The Christmas Chronicles wins in my book.
For those that celebrate Santa-Christmas, I hope it’s a happy one, given the strange, strange state of the world. May you be on the nice list.
Remote operation appears twice during Black Panther. This post describes the first, in which Shuri remotely operates an automobile during a chase sequence. The next post describes the other, in which Ross remotely pilots the Talon.
In the scene, Okoye has dropped a remote control kimoyo bead onto a car in Singapore. (It’s unclear why this is necessary. During the chase, Klawe tells his minion the car is made of vibranium, which tells us it’s Wakandan. Wouldn’t remote control be built in? But I digress…)
T’Challa, leaving the Singaporean casino, shouts, “Shuri!” Shuri, in her lab in Wakanda, hears the call. The lab’s AI, Griot, says, “Remote driving system activated.” The vibranium dust / programmable matter of the lab forms a seat and steering wheel for her that match the controlled car’s. A projection of the scene around the controlled car gives her a live visual to work with. She pauses to ask, “Wait. Which side of the road is it?” T’Challa shouts, “For Bast’s sake, just drive!” She floors the gas pedal in her lab, and we see the gas pedal of the controlled car depress in Singapore. There ensues a nail-biting car chase.
Now, I don’t want to de-hero-ize our heroes, but let’s face it, Griot must be doing a significant portion of the driving here. Here’s my rationale: The system has a feedback loop that must shuttle video data from Singapore to Wakanda, then Shuri has to respond, and her control signal must be digitized and sent back from Wakanda to Singapore, continuously. Presuming some stuff, that’s a distance of 7633 kilometers / 4743 miles (assuming these quora estimates are correct). If that signal were unimpeded light and Shuri’s response time instantaneous, the round trip alone would take on the order of 50 milliseconds. Sure, this is speculatively advanced, but it’s still technology, and there are analog-to-digital, digital-to-analog, encryption, and decryption conversions to be managed, signal boosts along the way, and the impedance of whatever network these signals are riding. Plus, as awesome as Shuri is, her response time is longer than zero. The feedback loop would be way longer than the roughly 100 milliseconds a response can take and still feel instantaneous.
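The feedback-loop arithmetic can be sketched in a few lines. The distance estimate, per-leg processing overhead, and operator reaction time below are all illustrative assumptions, not canon:

```python
# Back-of-the-envelope latency for the Wakanda–Singapore remote-driving loop.
# All figures are illustrative assumptions.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def loop_latency_ms(one_way_km, overhead_ms_per_leg=25.0, human_ms=150.0):
    """Round-trip propagation + per-leg processing + operator reaction time."""
    propagation_ms = 2 * one_way_km / C_KM_PER_S * 1000
    return propagation_ms + 2 * overhead_ms_per_leg + human_ms

# ~7,633 km one way, per the post's estimate
print(f"{loop_latency_ms(7_633):.0f} ms")  # → "251 ms"
```

Even with generous assumptions, the total lands well past the ~100 ms budget for a response to feel instantaneous, which is the point: somebody local to the car has to be closing the loop.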
Without presuming some physics-breaking stuff, there will be a significant lag between what’s happening around the actual car and Shuri’s remote reaction getting back to that car. In a high-speed chase like this, the lag would prove disastrous, and the only way I can apologize my way around it is if Griot spun up some aspect of himself in the kimoyo bead sitting on the car, and that aspect is doing the majority of the stunt driving. For all the excitement that Shuri is feeling, she is likely just providing broad suggestions about what she thinks should happen, and Griot is doing the rest. (Long-time readers will note this would be similar to the relationship I describe between JARVIS and Tony Stark.) Shuri is just an input. An important one—and one that would dislike being disregarded—but still, an input.
The HUD bears two quick notes about its display.
First, the video feed around the remote operator is a sphere onto which 2D photorealistic video is projected. Modern racing games mostly use the 2D displays of televisions as well, and they’re enjoyable, but I should think that immersion and response times would be better with a three-dimensional volumetric display instead, improving the visual data with parallax. That would be difficult to convey on screen for the audience, but I don’t think impossible.
Second, when Klawe’s minions cause a pile-up in an intersection, Shuri’s view shows the scene with the obstacles overlaid in red. As a bit of assistance, that shows us several things. Griot is watching the scene, and able to augment the display in real time. More of this context- and goal-aware augmentation would be useful to her. For instance, she wouldn’t have had to ask which side of the road Singaporeans drive on. (It’s the left, by the way, like the UK. Her steering wheel, if it were to match the car’s, should have been on the right. Nearly all of the driving in the scene happens on the wrong side of the road to feel “correct” to right-driving audiences.)
It’s also really interesting to note that the seat provides strong haptic feedback. When T’Challa dumps a minion from the SUV in front of the car, the controlled car speed-bumps over the body. Shuri’s seat matches the bump, and she asks T’Challa, “What was that?” (This is a slightly unbelievable moment. Her focus is on the scene, and her startle response could not help but alert her to a dark shape symmetrically expanding.) We know from motion simulators that tilting a seat up and down can strongly mimic momentum as if traveling, so I’m guessing that Shuri’s very much feeling the chase.
We are not shown what happens when T’Challa takes that emergency turn so sharply that it lifts the real car by around 35 degrees, but Griot must have supplied her with a just-in-time seatbelt if she was angled similarly.
When Klawe manages to shoot his arm-cannon at the remotely-controlled car, destroying it, for some reason Shuri’s vibranium dust simply…collapses, dropping her rudely to the floor. This had to be added in to the design of the system, and I cannot for the life of me figure out why this would be a good thing. Just…no.
Fit to purpose?
Shuri’s remote driving interface gives her well-mapped controls with rich sensory feedback, low latency, and at least the appearance of control, even if Griot is handling all the details. The big critiques are that Griot must be “there” quietly doing most of the work, that the HUD could provide a richer augmentation to help her make better real-time suggestions, and the failure event should not risk a broken coccyx.
Black Georgia Matters
Each post in the Black Panther review is followed by actions that you can take to support black lives.
Looking back at these posts, I am utterly floored at the number of things that have occurred in the world that are worth remarking on with each post. Floyd’s murder. Boseman’s passing. Ginsburg’s passing and hasty, hypocritical replacement. The national election. And while there is certainly more to say about anti-racism in general, for this post let’s talk about Georgia.
Despite outrageous, anti-democratic voter suppression by the GOP, for the first time in 28 years, the state went blue for the presidential election, verified with two hand recounts. Credit to Stacey Abrams and her team’s years of effort to get out the Georgian—and particularly the powerful black Georgian—vote.
But the story doesn’t end there. Though the Biden/Harris ticket won the election, if the Senate stays majority red, Moscow Mitch McConnell will continue the infuriating obstructionism with which he held back Obama’s efforts in office for eight years. The Republicans will, as they have done before, ensure that nothing gets done.
To start to undo the damage the fascist and racist Trump administration has done, and maybe make some actual progress in the US, we need the Senate majority blue. Georgia is providing that opportunity. Neither of the wretched Republican incumbents got 50% of the vote, resulting in a special runoff election January 5, 2021. If these two seats go to the Democratic challengers, Warnock and Ossoff, it will flip the Senate blue, and the nation can begin to seriously right the sinking ship that is America.
Residents can also volunteer to become a canvasser for either of the campaigns, though it’s a tough thing to ask in the middle of the raging pandemic.
The rest of us (yes, even non-American readers) can contribute either to the campaigns directly using the links above, or to Stacey Abrams’ Fair Fight campaign. From the campaign’s web site:
We promote fair elections in Georgia and around the country, encourage voter participation in elections, and educate voters about elections and their voting rights. Fair Fight brings awareness to the public on election reform, advocates for election reform at all levels, and engages in other voter education programs and communications.
If you don’t want to donate money directly, you can join a letter writing campaign to help get out the vote, via the Vote Forward campaign.
We will continue moving the country into the anti-racist future regardless of the runoff, but we can make much, much more progress if we win this election. Please join the efforts as best you can even as you take care of yourself and your loved ones over the holidays. So very much depends on it.
At around the midpoint of the movie, Deckard calls Rachel from a public videophone in a vain attempt to get her to join him in a seedy bar. Let’s first look at the device, then the interactions, and finally take a critical eye to this thing.
The lower part of the panel is a set of back-lit instructions and an input panel, which consists of a standard 12-key numeric input and a “start” button. Each of these momentary pushbuttons is back-lit white and has a red outline.
In the middle-right of the panel we see an illuminated orange logo panel, bearing the Saul Bass Bell System logo and the text reading, “VID-PHŌN” in some pale yellow, custom sans-serif logotype. The line over the O, in case you are unfamiliar, is a macron, indicating that the vowel below should be pronounced as a long vowel, so the brand should be pronounced “vid-phone” not “vid-fahn.”
In the middle-left there is a red “transmitting” button (in all lower case, a rarity) and a black panel that likely houses the camera and microphone. The transmitting button is dark until he interacts with the 12-key input, see below.
At the top of the panel, a small cathode-ray tube screen at face height displays data before and after the call as well as the live video feed during the call. All the text on the CRT is in a fixed-width typeface. A nice bit of worldbuilding sees this screen covered in Sharpie graffiti.
His interaction is straightforward. He approaches the nook and inserts a payment card. In response, the panel—including its instructions and buttons—illuminates. A confirmation of the card holder’s identity appears in the upper left of the CRT, i.e. “Deckard, R.,” along with his phone number, “555-6328” (Fun fact: if you misdialed those last four numbers you might end up talking to the Ghostbusters) and some additional identifying numbers.
A red legend at the bottom of the CRT prompts him to “PLEASE DIAL.” It is outlined with what look like ASCII box-drawing characters. He presses the START button and then dials “555-7583” on the 12-key. As soon as the first number is pressed, the “transmitting” button illuminates. As he enters digits, they are simultaneously displayed for him on screen.
His hands are not in-frame as he commits the number and the system calls Rachel. So whether he pressed an enter key, #, or *, or whether the system simply recognizes that he’s entered seven digits, is hard to say.
After their conversation is complete, her live video feed goes blank, and “TOTAL CHARGE $1.25” is displayed for his review.
Chapter 10 of the book Make It So: Interaction Design Lessons from Science Fiction is dedicated to Communication, and in this post I’ll use the framework I developed there to review the VID-PHŌN, with one exception: this device is public and Deckard has to pay to use it, so he has to specify a payment method, and then the system will report back total charges. That wasn’t in the original chapter and in retrospect, it should have been.
Turns out this panel is just the right height for Deckard. How do people of different heights or seated in a wheelchair fare? It would be nice if it had some apparent ability to adjust for various body heights. Similarly, I wonder how it might work for differently-abled users, but of course in cinema we rarely get to closely inspect devices for such things.
Deckard has to insert a payment card before the screen illuminates. It’s nice that the activation entails specifying payment, but how would someone new to the device know to do this? At the very least there should be some illuminated call to action like “insert payment card to begin,” or better yet some iconography so there is no language dependency. Then when the payment card was inserted, the rest of the interface can illuminate and act as a sort of dial-tone that says, “OK, I’m listening.”
Specifying a recipient: Unique Identifier
In Make It So, I suggest five methods of specifying a recipient: fixed connection, operator, unique identifier, stored contacts, and global search. Since this interaction is building on the experience of using a 1982 public pay phone, the 7-digit identifier quickly helps audiences familiar with American telephone standards understand what’s happening. So even if Scott had foreseen the phone explosion that led in 1994 to the ten-digit-dialing standard, or the 2053 events that led to the thirteen-digit-dialing standard, using either would likely have confused audiences and slightly risked the read of this scene. It’s forgivable.
I have a tiny critique over the transmitting button. It should only turn on once he’s finished entering the phone number. That way the system isn’t wasting bandwidth on his dialing speed or on misdials. Let the user finish, review, correct if they need to, and then send. But, again, this is 1982, and direct entry is the way phones worked. If you misdialed, you had to hang up and start over again. Still, I don’t think having the transmitting button light up after he entered the 7th digit would have caused any viewers to go all hruh?
There are important privacy questions about displaying a recipient’s number in a way that any passer-by can see. Better would have been to mount the input and the contact display on a transverse panel where he could enter and confirm it with little risk of lookie-loos and identity thieves.
Audio & Video
Hopefully, when Rachel received the call, she was informed who it was and that the call was coming from a public video phone. Hopefully it also provided controls for only accepting the audio, in case she was not camera-ready, but we don’t see things from her side in this scene.
Gaze correction is usually needed in video conversation systems, since each participant naturally looks at the center of the screen and not at the camera lens mounted somewhere next to its edge. Unless the camera is located in the center of the screen (or the other person’s image on the screen), people would not be “looking” at the other person as is almost always portrayed. Instead, their gaze would appear slightly off-screen. This is a common trope in cinema, but one in which we’ve become increasingly literate, as many of us are working from home much more and gaining experience with videoconferencing systems, so it’s beginning to strain suspension of disbelief.
Also how does the sound work here? It’s a noisy street scene outside of a cabaret. Is it a directional mic and directional speaker? How does he adjust the volume if it’s just too loud? How does it remain audible yet private? Small directional speakers that followed his head movements would be a lovely touch.
And then there’s video privacy. If this were the real world, it would be nice if the video had a privacy screen filter. That would have the secondary effect of keeping his head in the right place for the camera. But that is difficult to show cinematically, so it wouldn’t work for a movie.
Ending the call
Rachel leans forward to press a button on her home video phone to end her part of the call. Presumably Deckard has a similar button to press on his end as well. He should be able to just yank his card out, too.
The closing screen is a nice touch, though total charges may not be the most useful thing. Are VID-PHŌN calls a fixed price? Then this information is not really of use to him after the call as much as it is beforehand. If the call has a variable cost, depending on long distance and duration, for example, then he would want to know the charges as the call is underway, so he can wrap things up if it’s getting too expensive. (Admittedly the Bell System wouldn’t want that, so it’s sensible worldbuilding to omit it.) Also if this is a pre-paid phone card, seeing his remaining balance would be more useful.
But still, the point was that a total charge of $1.25 was meant to future-shock audiences of the time, since public phone calls in the United States then cost $0.10. His remaining balance wouldn’t have shown that, and so wouldn’t have had the desired effect. Maybe both? It might have been a cool bit of worldbuilding and callback to build on that shock by following that outrageous price with “Get this call free! Watch a video of life in the offworld colonies! Press START and keep your eyes ON THE SCREEN.”
Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.
The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.
The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.
In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.
-2. Wouldn’t a genetic test make more sense?
If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.
-1. Wouldn’t an fMRI make more sense?
An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal gyrus. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.
0. Wouldn’t a metal detector make more sense?
If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.
(OK, those aren’t interface issues but seriously wtf. Onward.)
1. Labels, people
Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal the state, so that, at least, is something.
2. It should be less intimidating
The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.
I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.
2a. Holden should be less intimidating and not tip his hand
While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.
In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.
3. It should display history
The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.
4. It should track the subject’s eyes
Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.
5. Really? A bellows?
The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.
6. It should show the actual subject’s eye
The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachel’s (a rich brown). Hat tip to Typeset in the Future for this observation. His is a great review.
7. It should visualize things in ways that make it easy to detect differences in key measurements
Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?
The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.
8. The machine should, you know, help them
The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.
People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.
So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.
Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.
Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.
There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them into noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of that. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
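The lower-boundary check itself is simple logic. A minimal sketch, with readings and threshold values invented purely for illustration:

```python
# Sketch: flag the moments in an interview where measured pupillary
# diameter dips below the human-norm floor. All numbers are invented.
def flag_subnormal(samples_mm, threshold_mm):
    """Return indices of samples below the lower-boundary threshold."""
    return [i for i, d in enumerate(samples_mm) if d < threshold_mm]

# Simulated interview: a human should dilate past the floor on empathic prompts
readings = [3.2, 3.4, 4.1, 3.1, 2.9, 3.0, 4.3]
print(flag_subnormal(readings, threshold_mm=3.3))  # → [0, 3, 4, 5]
```

Those flagged indices are exactly the columns the display would highlight, so the blade runner can glance back and see where the interview went subnormal.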
I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachel, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.
But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.
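The comparison described here, measured skin temperature against an expected human-norm median for the same prompt, can be sketched the same way. The function name and every number below are invented for illustration:

```python
# Sketch: per-frame blush deficit via infrared skin temperature,
# compared to a human-norm median for the same prompt. Numbers are invented.
def blush_deficit(measured_c, norm_median_c):
    """Positive values mean the subject is cooler (paler) than the norm."""
    return [round(n - m, 1) for m, n in zip(measured_c, norm_median_c)]

subject = [33.0, 33.1, 33.0, 33.2]   # replicant-ish: barely any blush
norm    = [33.0, 33.6, 34.0, 33.8]   # humans warm up over the dilemma
print(blush_deficit(subject, norm))  # → [0.0, 0.5, 1.0, 0.6]
```

A consistently positive deficit is the too-pale-against-the-norm signal the thumbnail backgrounds are meant to make visible at a glance.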
So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than the norm to what the blade runner is saying. The display should annotate what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.
Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the text is not unduly influencing the results, let’s add an overlay to the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.
Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.
Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.
For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.
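That overlay is just the logical AND of the two per-column signals. A tiny hypothetical sketch, with made-up per-column flags:

```python
# Hypothetical sketch: mark timeline columns where BOTH signals
# (pupillary dilation and blush) fall below their human-norm thresholds.

def trouble_columns(pupil_below_norm, blush_below_norm):
    """Return indices where both boolean series flag at once."""
    return [i for i, flags in enumerate(zip(pupil_below_norm, blush_below_norm))
            if all(flags)]

pupil = [False, True, True, False, True]   # hypothetical per-column flags
blush = [False, False, True, True, True]
print(trouble_columns(pupil, blush))       # → [2, 4]
```

Those are the columns that would get the gray (or more elegant) treatment.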
If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.
Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.
Lying to Leon
There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.
The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.
On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutter so many sci-fi interfaces. It’s there to be inscrutable, but to convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real-world input, but it would never change to something incriminating.
This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.
Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but it does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.
In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: Namely, Joe’s tattoo and the distance-scanning vending machine.
It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:
When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”
Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.
It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.
This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem, i.e. locating the fugitive for apprehension; and at the ultimate goal, i.e. how a culture deals with crime.
A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.
Proximate problem: Finding the fugitive
The red scan is fast, but it’s very noticeable. The sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection, like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI clues showing it identifying, zooming in on, and confirming the barcode.
So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?
Why is the system locating him? To tell authorities so they can go there and apprehend him.
Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and the offender must be incarcerated.
Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than the defense of encroachment. Such lawful societies benefit from network effects.
The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.
Ultimate problem: Preventing crime
In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed the state of the art in criminology and listed five key findings about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.
Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.
How might we increase the evident chance of being caught?
Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
Another way might be media: Of making sure that potential criminals hear an overwhelming number of stories through their network of criminals being captured successfully. This could involve editorial choice, or even media manipulation, filtering to ensure that “got caught” narratives appear in feeds more than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
The other way is to increase the sense of observation. And that leads us (as so many things do) to the panopticon.
The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in letters to their father. Here is a useful illustration.
*Elaboratory was one of the alternate terms he suggested for the idea. It didn’t catch on since it didn’t have the looming all-seeing-eye ring of the other term.
The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from an efficacy and economic standpoint.
“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”
It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault in Discipline and Punish, who regarded it not as a money-saving design, but as an illustration of the effect of power. Or maybe Orwell, who did not use the term, but extended it to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who in In the Age of the Smart Machine reconceived it for information technology in a work environment.
In Benjamen Walker’s podcast Theory of Everything, he dedicates an episode to the argument that as a metaphor it needs to be put away, since…
It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
Bentham regarded it as a tool for behavior modification, but the metaphor is not used to talk about how surveillance changes us and our identities, but rather as a violation of privacy rights.
To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)
But then, Idiocracy
In Idiocracy, this interface—of the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently-visible—identifier. You are being watched. (Holy crap now I have yet another reason to love Person of Interest. It’s adding to our collective media impression the notion of AI surveillance. Anyway…) In this scene, it’s a clear signal that he and his co-offenders could see, which means they would tell their friends this story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.
Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their and other people’s wrists keep sending a signal that everyone is being watched. It’s crappy surveillance, which we don’t like for all the reasons we don’t like it, but it illustrates why stealth-detection may not be the ideal for crime prevention, and why this horrible tattoo might be the thing that a bunch of doomed eggheads might have designed for a future when all that was left was morons. Turns out, at least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.
When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”
Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading “PULL OVER.”
The car interface has a column of buttons down the left, reading NAV, HOME, GIRLS, BEER, WTF, and FART FAN.
At the bottom is a square of icons: car, radiation, person, and the fourth is obscured by something in the foreground. Across the bottom is Frito’s car ID “FRITO’S F’N CAR” which appears to be a label for a system status of “EVERYTHING’S A-OK, BRO”, a button labeled CHECK INGN [sic], another labeled LOUDER, and a big green circle reading GO.
But the car doesn’t wait for him to pull over. With some tiny beeps it slows to a stop by itself. Frito says, “It turned off my battery!” Moments after they flee the car, it is converged upon by a ring of police officers with weapons loaded (including a rocket launcher pointed backward).
Praise where it’s due: Zooming is the strongest visual attention-getting signal there is (symmetrical expansion is detected on the retina within 80 milliseconds!) and while I can’t find the source from which I learned it, I recall that blinking is somewhere in the top 5. Combining these with an audio signal means it’s hard to miss this critical signal. So that’s good.
But then. Ugh. The fonts. The buttons on the chrome seem to be some free Blade Runner knock-off font, and the text reading “PULL OVER” is in some headachey clipped-corner freeware font that neither contrasts with nor complements the Blade Jogger font, or whatever it is. I can’t quite hold the system responsible for the font of the IPPA license, but I just threw up a little into my Flaturin because of that rounded-top R.
Then there’s the bad-90s skeuomorphic, Bevel & Emboss buttons that might be defended for making the interactive parts apparent, except that this same button treatment is given to the label Frito’s F’n Car, which has no obvious reason why it would ever need to be pressed. It’s also used on the CHECK INGN and LOUDER buttons, taking their ADA-insulting contrast ratios and absolutely wrecking any readability.
I try not to second-guess designers’ intentions, but I’m pretty sure this is all deliberate. Part of the illustration of a world without much sense. Certainly no design sense.
What about those features? NAV is a pretty standard function, and having a HOME button is a useful shortcut. Current versions of Google Maps have an Explore Places Near You function, which lists basic interests like Restaurants, Bars, and Events, and has a More menu with a big list of interests and services. It’s not a stretch to imagine that Frito has pressed GIRLS and BEER enough that they’ve floated to the top nav.
That leaves only three “novel” buttons to think about: WTF, LOUDER, and FART FAN.
If I have to guess, the WTF button is an all-purpose help button. Like a GM OnStar, but less well branded. Frito can press it and get connected to…well, I guess some idiot to see if they can help him with something. Not bad to have, though this probably should be higher in the visual hierarchy.
The LOUDER button is a hilarious bit of interface comedy because, well, there’s no volume-down affordance on the interface. Think of the “If it’s too loud, you’re too old” kind of idiocy. Of course, it could be that the media is at zero volume, and so couldn’t be turned down any more, so the LOUDER button filled up the whole space, but…
The smarter convention is to leave the button in place and signal a disabled state, and given everything else about the interface, that’s giving the diegetic designer a WHOLE lot of credit. (And our real-world designer a pat on the back for subtle hilarity.)
The FART FAN button is a little potty humor, and probably got a few snickers from anyone who caught it because amygdala, but I’m going to boldly say this is the most novel, least dumb thing about Frito’s F’n Car interface.
People fart. It stinks. Unless you have active charcoal filters under the fabric, you can be in for an unpleasant scramble to reclaim breathable air. The good news is that getting the airflow right to clear the car of the smell has, yes, been studied, well, if not by science, at least scientifically. The bad news is that it’s not a simple answer.
Your car’s built-in extractor won’t be enough, so just cranking the A/C won’t cut it.
Rolling down windows in a moving aerodynamic car may not do the trick due to something called the boundary layer of air that “clings” to the surface of the car.
Rolling down windows in a less-aerodynamic car can be problematic because of the Helmholtz effect (the wub-wub-wub air pressure), which makes this a risky tactic.
Opening a sunroof (if you have one) might be good, but pulls the stench up right past noses, so not ideal either.
The best strategy—according to that article and conversation amongst my less squeamish friends—is to crank the AC, then open the driver’s window a couple of inches, and then the rear passenger window half way.
But this generic strategy changes with each car, the weather (seriously, temperature matters, and you wouldn’t want to do this in heavy precipitation), and the skankness of the fart. This is all a LOT to manage when one’s eyes are meant to be on the road and you’re in a nauseated panic. Having the cabin air refresh at the touch of one button is good for road safety.
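In software terms, the FART FAN button is just a macro over actuators the car already has. Here’s a hypothetical sketch hard-coding the generic strategy above; the function name, actuator names, and window fractions are all made up for illustration:

```python
# Hypothetical "fast freshen" macro: crank the A/C, crack the driver's
# window a couple of inches, open the rear passenger window halfway.
# Window positions are fractions of fully-open; values are illustrative.

def fast_freshen(precipitation_heavy=False):
    """Return an actuator command dict for the one-button air purge."""
    if precipitation_heavy:
        # Don't open windows in heavy rain; fall back to max ventilation.
        return {"ac_fan": 1.0, "recirculate": False}
    return {
        "ac_fan": 1.0,
        "recirculate": False,               # pull in outside air
        "window_front_left": 0.15,          # ~2 inches on a typical window
        "window_rear_right": 0.5,           # halfway
    }

print(fast_freshen()["window_rear_right"])  # → 0.5
```

A real implementation would, as noted, tune the sequence per car model and conditions, but the point stands: one button, zero eyes off the road.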
If it’s so smart, then, why don’t we have Fart Fan panic buttons in our cars today?
I suspect car manufacturers don’t want the brand associations of having a button labeled FART FAN on their dashboards. But, IMHO, this sounds like a naming problem, not some intractable engineering problem. How about something obviously overpolite, like “Fast freshen”? I’m no longer in the travel and transportation business, but if you know someone at one of these companies, do the polite thing and share this with them.
So aside from the interface considerations, there are also some strategic ones to discuss with the remote kill switch, but that deserves its own post, next.
Joe and Rita climb into the pods and situate themselves comfortably. Officer Collins and his assistant approach and insert some necessary intravenous chemicals. We see two canisters, one empty (for waste?) and one filled with the IV fluid. To each side of the subject’s head is a small raised panel with two lights (amber and ruby) and a blue toggle switch. None of these are labeled. The subjects fall into hibernation and the lids close.
Collins and his assistant remove a cable labeled “MASTER” from the interface and close a panel which seals the inputs and outputs. They then close a large steel door, stenciled “TOP SECRET,” to the hibernation chamber.
The external interface panel includes:
A red LED display
3 red safety cover toggle switches labeled “SET 1” “SET 2” and “SET 3.”
A 5×4 keypad
Four unlabeled white buttons
500 years later, after the top secret lab is destroyed, the pods become part of the mountains of garbage that just pile up. Sliding down an avalanche of the stuff, the pods wind up in a downtown area. Joe’s crashes through Frito’s window. At this moment the pod decides enough is enough and it wakes him. Clamps around the edge unlock. The panel cover has fallen off somewhere, and the LED display blinks the text, “unfreezing.” Joe drowsily pushes the lids open and gets out.
Its purpose in the narrative
This is a “segue” interface, mostly useful in explaining how Joe and Rita are transported safely 500 years in the future. At its base, all it needs to convey is:
Scienciness (lights and interfaces, check)
See them pass into sleep (check)
See how they are kept safe (rugged construction details, clamped lid, check)
See the machine wake them up (check)
Is it ideal?
The ergonomics are nice. A comfortable enough coffin to sleep in. And it seems…uh…well engineered, seeing as how it winds up lasting 500 times its intended duration and takes some pretty massive abuse as it slides down the mountains of garbage and through Frito’s window into his apartment. But that’s where the goodness ends. It looks solid enough to last a long, long time. But there are questions.
From Collins’ point of view:
Why was it engineered to last 500 years, but you know, fail to have any of its interior lights or toggle switches labeled? Or have something more informative on the toggles than “SET 1”?
How on earth did they monitor the health of the participants over time? (Compare Prometheus’ hibernation screens.) Did they just expect it to work perfectly? Not a lot of comfort to the subjects. Did they monitor it remotely? Why didn’t that monitoring screen arouse the suspicions of the foreclosers?
How are subjects roused? If the procedure is something that Collins just knows, what if something happens to him? That information should be somewhere on the pod with very clear instructions.
How does it gracefully degrade as it runs out of resources (power, water, nutrition, air, waste storage or disposal) to keep its occupants alive? What if the appointed person doesn’t answer the initial cry for help?
From the hibernators’ point of view:
How do the participants indicate their consent to go into hibernation? Can this be used as an involuntary prison?
How do they indicate consent to be awakened? (Not an easy problem, but Passengers illustrates why it’s necessary.)
What if they wake early? How do they get out or let anyone know to release them?
Why does the subject have to push the lid if they’re going to be weak and woozy when they waken? Can’t it be automatic, like the hibernation lids in Aliens?
How does the sleeper know it’s safe to get out? Certainly Joe and Rita expected to wake up in the military laboratory. But while we’re putting in the effort to engineer it to last 500 years, maybe we could account for the possibility that it’s somewhere else.
Can’t you put me at ease in the disorienting hypnopompic phase? Maybe some soothing graphic on the interior lid? A big red label reading, “DON’T PANIC” with an explanation?
Can you provide some information to help orient me, like where I am and when I am? Why does Joe have to infer the date from a magazine cover?
From a person-in-the-future point of view
How do the people nearby know that it contains living humans? That might be important for safekeeping, or even to take care in case the hibernators are carrying some disease to which the population has lost resistance.
How do we know if they’ve got some medical conditions that will need specialized care? What food they eat? Whether they are dangerous?
Can we get a little warning so we can prepare for all this stuff?
Is the interface believable?
Oh yes. Prototypes tend to be minimum viable things, and usability lags far behind basic utility. Plus, this is the military, famously expecting its people to be tough without the need for civilian niceties. Plus, Collins didn’t seem too big on “details.” So, very believable.
Note that this doesn’t equate to the thing itself being believable. I mean, it was an experiment meant to last only a year. How did it have the life support resources—including power—to run for 500 times the intended duration? What brown fluid has the 273,750,000 calories needed to sustain Luke Wilson’s physique for 500 years? (Maya Rudolph lucks out needing “only” 219,000,000.) How did it keep them alive and prevent long-term bedridden problems, like pressure sores, pneumonia, constipation, contractures, etc., etc.? See? Comedy is hard to review.
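For the curious, those figures work out to roughly 1,500 and 1,200 kcal per day, respectively. A quick back-of-envelope check:

```python
# Back-of-envelope check of the 500-year calorie figures, assuming
# maintenance intakes of ~1,500 and ~1,200 kcal/day (my assumption,
# not anything stated in the film).

def calories_for(kcal_per_day, years, days_per_year=365):
    """Total kcal needed to sustain a sleeper for the given span."""
    return kcal_per_day * days_per_year * years

print(calories_for(1500, 500))  # → 273750000
print(calories_for(1200, 500))  # → 219000000
```

So that brown fluid is doing some seriously heavy caloric lifting.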
Fight US Idiocracy: Donate to close races
Reminder: Every post in this series includes some U.S.-focused calls to action for readers to help reverse the current free fall into our own Idiocracy. In the last post I provided information about how to register to vote in your state. DO THAT. If you accidentally missed the deadline (and triple check because many states have some way to register right up to and including election day, which is 06 NOV this year), there are still things you can do. Sadly, one of the most powerful things feels crass: Donate money to close campaigns. Much of this money is spent reaching out to undecided voters via media channels, and that means the more money the more reach.
There are currently 68 highly competitive seats—those considered a toss up between the two parties or leaning slightly toward one. You can look at the close campaigns and donate directly, or you can donate to Act Blue, and let that organization make the call. That’s what I did. Just now. Please join me.