Comparing Sci-Fi HUDs in 2024 Movies

As in previous years, in preparation for awarding the Fritzes, I watched as many sci-fi movies as I could find across 2024. One thing that stuck out to me was the number of heads-up displays (HUDs) across these movies. There were a lot of them. So in advance of the awards, let’s look at and compare them. (Note that the movies included here are not necessarily nominees for a Fritz award.)

I usually introduce the plot of every movie before I talk about it, since it provides some context for understanding the interface. That will happen in the final Fritzes post, so I’m going to skip it here. Still, it’s only fair to warn you that there will be some spoilers as I describe these.

If you read Chapter 8 of Make It So: Interaction Lessons from Science Fiction, you’ll recall that I’d identified four categories of augmentation.

  1. Sensor displays
  2. Location awareness
  3. Context awareness (objects, people)
  4. Goal awareness

These four categories are presented in increasing level of sophistication. Let’s use these to investigate and compare five primary examples from 2024, in order of their functional sophistication.

Dune 2

Lady Margot Fenring looks through augmented opera glasses at Feyd-Rautha in the arena. Dune 2 (2024).

True to the minimalism that permeates much of the film’s interfaces, the AR of this device has a rounded-rectangle frame from which hangs a measure of angular degrees to the right. There are a few ticks across the center of the screen (not visible in this particular screenshot). There is a row of blue characters across the bottom center. I can’t read Harkonnen, and though the characters change, I can’t quite decipher what most of them mean. But it does seem the leftmost character indicates the azimuth and the rightmost the angular altitude of the glasses. Given the authoritarian nature of this House, it would make sense to have some augmentation naming the royal figures in view, but I think it’s just a sensor display, which leaves the user with a lot of work to figure out how to use that information.

You might think this indicates some failing of the writers’ or FUI designers’ imagination. However, an important part of the history of Dune is a catastrophic conflict known as the Butlerian Jihad, a devastating, large-scale war against intelligent machines. As a result, machines with any degree of intelligence are considered sacrilege. So it’s not an oversight, but it does mean we can’t look to this as a model for how we might handle more sophisticated augmentations.

Alien: Romulus

Tyler teaches Rain how to operate a weapon aboard the Renaissance. Alien: Romulus (2024)

A little past the halfway point of the movie, the protagonists finally get their hands on some weapons. In a fan-service scene similar to one between Ripley and Hicks in Aliens (1986), Tyler shows Rain how to hold an F44AA pulse rifle and teaches her how to operate it. The “AA” stands for “aiming assist,” a kind of object awareness. (Tyler asserts this is what the colonial marines used, which kind of retroactively saps their badassery, but let’s move on.) Tyler taps a small display on the user-facing rear sight, and a white-on-red display illuminates. It shows a low-res video of motion happening before it. A square reticle with crosshairs shows where the weapon will hit. A label at the top indicates distance. A radar sweep at the bottom indicates movement in a 360° plan view, a sensor display.

When Rain pulls the trigger halfway, the weapon quickly swings to aim at the target. There is no indication of how it would differentiate between multiple targets. It’s also unclear how Rain told it that the object in the crosshairs earlier is what she wants it to track now, or how she might identify a friendly to avoid. Red is a smart choice for low-light situations, as it’s known not to interfere with night vision. The display is also elegantly free of flourishes and fuigetry.

I’m not sure the halfway-trigger is the right activation mechanism. Yes, it allows the shooter to maintain a proper hold and remain ready with the weapon, and not have to look at the display to gain its assistance, but it also requires them to be in a calm, stable circumstance that allows for fine motor control. Does this mean that in very urgent, chaotic situations, users are just left to their own devices? Seems questionable.

Alien: Romulus is beholden to the handful of movies in the franchise that preceded it. Part of the challenge for its designers is to stay recognizably a part of the body of work that was established in 1979 while offering us something new. This weapon HUD stays visually simple, like the interfaces from the original two movies. It narratively explains how a civilian colonist with no weapons training can successfully defend herself against a full-frontal assault by a dozen of this universe’s most aggressive and effective killers. However, it leaves enough unexplained that it doesn’t really serve as a useful model.

The Wild Robot

Roz examines an abandoned egg she finds. The Wild Robot (2024)

HUD displays of artificially intelligent robots are always difficult to analyze. It’s hard to determine what’s an augmentation (here loosely defined as an overlay on some datastream created for a user’s benefit but explicitly not by that user) as opposed to a visualization of the AI’s own thoughts as they happen. I’d much rather analyze these as augmentations provided for Roz, but it just doesn’t hold up to scrutiny that way. What we see in this film are visualizations of Roz’ thoughts.

In the HUD, there is an unchanging frame around the outside. Static cyan circuit lines extend to the edge. (In the main image above, the screen-green is an anomaly.) A sphere rotates in the upper left, unconnected to anything. A hexagonal grid on the left has hexes that illuminate and blink, and the grid itself moves, all unrelated to anything. These are fuigetry, and they neither convey information nor provide utility.

Inside that frame, we see Roz’ visualized thinking across many scenes.

  • Locus of attention—Many times we see a reticle indicating where she’s focused, oftentimes with additional callout details written in robot-script.
  • “Customer” recognition—(pictured) Since it happens early in the film, you might think this is a goofy error. The potential customer she has recognized is a crab. But later in the film, Roz learns the language common to the animals of the island. All the animals display a human-like intelligence, so it’s completely within the realm of possibility that this little blue crustacean could be her customer. Though why that customer needed a volumetric wireframe augmentation is very unclear.
  • X-ray vision—While looking around for a customer, she happens upon an egg. The edge detection indicates her attention. Then she performs scans that reveal the growing chick inside and a vital signs display.
  • Damage report—After being attacked by a bear, Roz does an internal damage check and she notes the damage on screen.
  • Escape alert—(pictured) When a big wave approaches the shore on which she is standing, Roz estimates the height of the wave to be five times her height. Her panic expresses itself in a red tint around the outside edge.
  • Project management—Roz adopts Brightbill and undertakes the mission to mother him—specifically to teach him to eat, swim, and fly. As she successfully teaches him each of these things, she checks it off by updating one of three graphics that represent the topics.
  • Language acquisition—(pictured) Of all the AR in this movie, this scene frustrates me the most. There is a sequence in which Roz goes torpid to focus on learning the animal language. Her eyes are open the entire time she captures samples and analyzes them. The AR shows word bubbles associated with individual animal utterances. At first those bubbles are filled with cyan-colored robo-ese script. Over the course of processing a year’s worth of samples, individual characters in the utterances are slowly replaced with bold, green, Latin characters. This display kind of conveys the story beat of “she’s figuring out the language,” but it befits cryptography much more than the acquisition of a new language.

If these were augmented reality, I’d have a lot of questions about why it wasn’t helping her more than it does. It might seem odd to think an AI might have another AI helping it, but humans have loads of systems that operate without explicit conscious thought, like preattentive processing, all the functions of our autonomic nervous system, sensory filtering, and recall, just to name a few. So I can imagine it would be a fine model for AI-supporting-AI.

Since it’s not augmented reality, it doesn’t really act as a model for real world designs except perhaps for its visual styling.

Borderlands

Claptrap is a little one-wheel robot that accompanies Lilith through her adventures on and around Pandora. We see things through his POV several times.

Claptrap sizes up Lilith from afar. Borderlands (2024).

When Claptrap first sees Lilith, we see it through his HUD. Like Roz’ POV display in The Wild Robot, the outside edge of this view has a fixed set of lines and greebles that don’t change, not even as a sensor display. I wish those lines had some relationship to his viewport, but that’s just a round lens and the lines are vaguely like the edges of a gear.

Scrolling up from the bottom left is an impressive set of textual data. It shows that a DNA match has been made (remotely‽ What kind of resolution is Claptrap’s CCD?) and some data about Lilith from what I presume is a criminal justice data feed: Name and brief physical description. It’s person awareness.

Below that are readouts for programmed directive and possible directive tasks. They’re funny if you know the character. Tasks include “Supply a never-ending stream of hilarious jokes and one-liners to lighten the mood in tense situations” and “Distract enemies during combat. Prepare the Claptrap dance of confusion!” I also really like the last one: “Take the bullets while others focus on being heroic.” It both foreshadows a later scene and touches on the problem raised with Dr. Strange’s Cloak of Levitation: how do our assistants let us be heroes?

At the bottom is the label “HYPERION 09 U1.2,” which I think might be location awareness? The suffix changes once they get near the vault. Hyperion is a faction in the game; I’m not certain what it means in this context.

When driving in a chase sequence, his HUD gives him a warning about a column he should avoid. It’s not a great signal. It draws his attention but then essentially says “Good luck with that.” He has to figure out what object it refers to. (The motion tracking, admittedly, is a big clue.) But the label is not under the icon. It’s at the bottom left. If this were for a human, it would add a saccade to what needs to be a near-instantaneous feedback loop. Shouldn’t it be an outline or color overlay to make it wildly clear what and where the obstacle is? And maybe some augmentation on how to avoid it, like an arrow pointing right? As we see in a later scene (below) the HUD does have object detection and object highlighting. There it’s used to find a plot-critical clue. It’s just oddly not used here, you know, when the passengers’ lives are at risk.

When the group goes underground in search of the key to the Vault, Claptrap finds himself face to face with a gang of Psychos. The augmentation includes little animated red icons above the Psychos. Big red text summarizes “DANGER LEVEL: HIGH” across the middle, so you might think it’s demonstrating goal and context awareness. But Claptrap happens to be nigh-invulnerable, as we see moments later when he takes a thousand Psycho bullets without a scratch. In context, there’s no real danger. So… hold up. Who’s this interface for, then? Is it really aware of context?

When they visit Lilith’s childhood home, Claptrap finds a scrap of paper with a plot-critical drawing on it. The HUD shows a green outline around the paper. Text in the lower right tracks a “GARBAGE CATALOG” of objects in view with comments, “A PSYCHO WOULDN’T TOUCH THAT”, “LIFE-CHOICE QUESTIONING TRASH”, “VAULT HUNTER THROWBACK TRASH”. This interface gives a bit of comedy and leads to the Big Clue, but raises questions about consistency. It seems the HUDs in this film are narrativist.

In the movie, there are other HUDs like this one, for the Crimson Lance villains. They fly their hover-vehicles using them, but we don’t get nearly enough time to tease the parts apart.

Atlas

The HUD in Atlas appears when the titular character Atlas is strapped into an ARC9 mech suit, which has its own AGI named Smith. Some of the augmentations are communications between Smith and Atlas, but most are augmentations of the view before her. The viewport from the pilot’s seat is wide, and the augmentations appear there.

Atlas asks Smith to display the user manuals. Atlas (2024)

On the way to evil android Harlan’s base, we see the frame of the HUD has azimuth and altitude indicators near the edge. There are a few functionless flourishes, like arcs at the left and right edges. Later we see object and person recognition (in this case, an android terrorist, Casca Decius). When Smith confirms they are hostile, the square reticles go from cyan to red, demonstrating context awareness.

Over the course of the movie, Atlas resists Smith’s calls to “sync” with him. At Harlan’s base, she is separated from the ARC9 unit for a while. But once she admits her past connection to Harlan, she and Smith become fully synced. She is reunited with the ARC9 unit and its features fully unlock.

As they tear through the base to stop the launch of some humanity-destroying warheads, they meet resistance from Harlan’s android army. This time the HUD wholly color codes the scene, making it extremely clear where the combatants are amongst the architecture.

Overlays indicate the highest priority combatants that, I suppose, might impede progress. A dashed arrow stretches through the scene indicating the route they must take to get to their goal. It focuses Atlas on their goal and obstacles, helping her decision-making around prioritization. It’s got rich goal awareness and works hard to proactively assist its user.

Despite being contrasting colors, they are well-controlled so they don’t vibrate. You might think that the luminance of the combatants and architecture should be flipped, but the ARC9 is bulletproof, so there’s no real danger from the gunfire. (Contrast Claptrap’s fake danger warning, above.) Saving humanity is the higher priority. So the brightest (yellow) means “do this,” the second brightest (cyan) means “through this,” and the darkest (red) means “there will be some nuisances en route.” The luminance is where it should be.

In the climactic fight with Harlan, the HUD even displays a predictive augmentation, illustrating where the fast-moving villain is likely to be when Atlas’ attacks land. This crucial augmentation helps her defeat the villain and save the day. I don’t think I’ve seen predictive augmentation outside of video games before.


If I were giving out an award for best HUD of 2024, Atlas would get it. It is the most fully imagined HUD assistance of the year, and it is consistently, engagingly styled. If you are involved with modern design or the design of sci-fi interfaces, I highly recommend you check it out.

Stay tuned for the full Fritz awards, coming later this year.

Fritzes 2024 Winners

So I missed synchronizing the Fritzes with the Oscars. By like, a lot. A lot a lot. That hype curve has come and gone. (In my defense, it’s been an intensely busy year.) I don’t think providing nominees and then waiting to reveal winners makes sense now, so I’ll just talk about them. It was another year where there weren’t a lot of noteworthy speculative interfaces, from an interaction design point of view. This is true enough that I didn’t have enough candidates to fill out my usual three categories of Believable, Narrative, and Overall. So, I’m just going to do a round-up of some of the best interfaces as I saw them, and at the end, name an absolute favorite.

The Kitchen

In a dystopian London, the rich have eliminated all public housing but one last block, known as The Kitchen. Izi and Benji live there and are drawn together by the death of Benji’s mother, who turns out to be one of Izi’s romantic partners from the past. The film is full of technology, but the part that really struck me was the Life After Life service where Izi works and where Benji’s mom’s funeral happens. It’s reminiscent of the Soylent Green suicide service, but much better done and better conceived. The film has a sci-fi setting, but don’t expect easy answers or a Marvel-esque plot here. This film is about relationships amid struggle, and it ends quite ambiguously.

The funerary interfaces are mostly translucent cyans with pinstripe dividing lines to organize everything. In the non-funerary interfaces, the cyan is replaced with bits of saturated red. Everything, funerary and non-, feels as if it has the same art direction, which lends itself to reading the interfaces extradiegetically. But maybe that’s part of the point?

Pod Generation

This dark movie considers what happens if we gestated babies in technological wombs called pods. The interactions with the pod are all some corporate version of intuitive, as if Apple had designed them. (Though the swipe-down to reveal is exactly backwards. Wouldn’t an eyelid or window-shade metaphor be more natural? Maybe they were going for an oven metaphor, like bun in the oven? But then what of the cooking-a-child implications? No, it’s just wrong.)

The design is largely an exaggeration of Apple’s understated aesthetic, except for the insane, giant floral eyeball that is the AI therapist. I love how much it reads like a weirdcore titan and the characters are nonplussed, telegraphing how much the citizens of this world have normalized to inhumanity. I have to give a major ding to the iPad interface by which parents take care of their fetuses, as its art direction is a mismatch to everything else in the film and seems quite rudimentary, like a Flash app circa 1998.

Before I get to the best interfaces of the year, let’s take a moment to appreciate two trends I saw emerging in 2023: hyperminimalist interfaces and interface-related comedy.

Hyperminimalist interfaces

This year I noticed that many movies are telling stories with very minimal interfaces. As in, you can barely call them designed, they’re so minimalist. This feels like a deliberate contrast to the overwhelming spectacle that permeates, say, the MCU. They certainly reduce interfaces down to just the cause and effect that are important to the story. Following are some examples that illustrate this hyperminimalism.

This could be a cost-saving tactic, but per the default New Criticism stance of this blog, we’ll take it as a design choice and note it’s trending.

Shout-out: Interface Comedy

I want to give a special shout-out to interface-related comedy over the past year.

Smoking Causes Coughing

The first comes from the French gonzo horror sci-fi Smoking Causes Coughing. In a nested story told by a barracuda being cooked on a grill, Tony is the harried manager of a log-processing plant whose day is ruined when her nephew somehow becomes stuck in an industrial wood shredder. Over the course of the scene she attempts to reverse the motor, failing each time, partly owing to the unlabeled interface and bad documentation. It’s admittedly not sci-fi itself, just a scene in a sci-fi film, but it’s a very gory, very hilarious bit of interface humor in a schizoid film.

Guardians of the Galaxy 3

The second is Guardians of the Galaxy 3. About a fifth of the way into the movie, the team spacewalks from the Bowie to the surface of Orgocorp to infiltrate it. Once on the surface, Peter, who still pines for alternate-timeline Gamora, tries to strike up a private conversation with her. The suits have a forearm interface featuring a single row of colored stay-state buttons that roughly match the colors of the spacesuits they’re wearing. Peter presses the blue one and tries in vain to rekindle the spark between him and Gamora. But a minute into the conversation, Mantis cuts in…

  • Mantis: Peter, you know this is an open line, right?
  • Peter: What?
  • Mantis: We’re listening to everything you’re saying.
  • Drax: And it is painful.
  • Peter: And you’re just telling me now‽
  • Nebula: We were hoping it would stop on its own.
  • Peter: But I switched it over to private!
  • Mantis: What color button did you push?
  • Peter: Blue! For the blue suit!
  • Drax: Oh no.
  • Nebula: Blue is the open line for everyone.
  • Mantis: Orange is for blue.
  • Peter: What‽
  • Mantis: Black is for orange. Yellow is for green. Green is for red. And red is for yellow.
  • Drax: No, yellow is for yellow. Green is for red. Red is for green.
  • Mantis: I don’t think so.
  • Drax: Try it then.
  • Mantis (screaming): HELLO!
  • (Peter writhes in pain.)
  • Mantis: You were right.
  • Peter: How the hell am I supposed to know all of that?
  • Drax: Seems intuitive.

The Marvels

A third comedy bit happens in The Marvels, when Kamala Khan is nerding out over Monica Rambeau’s translucent S.H.I.E.L.D. tablet. She says…

  • Khan: Is this the new iPad? I haven’t seen it yet.
  • Rambeau: I wish.
  • Khan: Wait, if this is all top-secret information, why is it in a clear case?

Rambeau has no answer, but there are, in fact, some answers.

Anyway, I want to give a shout-out to the writers for demonstrating with these comedy bits some self-awareness and good-natured self-owning of tropes. I see you and appreciate you. You are so valid.

Best Interfaces of 2023

But my favorite interfaces of 2023 come from Spider-Man: Across the Spider-Verse. The interfaces throughout are highly stylized (so it might be tough to perform the detailed analysis that is this site’s bread and butter), but they play the plot points perfectly.

In Across the Spider-Verse, while dealing with difficulties in his home life and chasing down a new supervillain called The Spot, Miles Morales learns about The Society, a group of (thousands? tens of thousands of?) Spider-people of every stripe and sort from across the Multiverse, whose overriding mission is to protect “canon” events in each universe that, no matter how painful, they believe are necessary to keep the fabric of reality from unraveling. It’s full of awesome interfaces.

Lyla is the general artificial intelligence that has a persistent volumetric avatar. She’s sassy and disagreeable and stylish and never runs, just teleports.

The wrist interfaces—called the Multiversal Gizmo—worn by members of The Society all present highly-contextual information with most-likely actions presented as buttons, and, as needed, volumetric alerts. Also note that Miguel’s Gizmo is longer, signaling his higher status within The Society.

Of special note is the volumetric display that Spider-Gwen uses to reconstruct the events at the Alchemax laboratory. The interface is smart, telegraphs its complex functioning quickly and effectively, and describes a use that builds on conceivable but far-future applications of inference. The little dial that pops up allowing her to control the time of the playback reminds me of the Eye of Agamotto (though sadly I didn’t see evidence of the important speculative time-control details I’d provided in that analysis). The in-situ volumetric reconstruction reminds me of some of the speculative interfaces I’d proposed in the review of Deckard’s photo inspector from Blade Runner, and so was a big thrill to see.

All of the interfaces have style, are believable for the diegesis, and contribute to the narrative with efficiency. Congratulations to the team crafting these interfaces. And if you haven’t seen it yet, what are you waiting for? Go see it. It’s available in a lot of places.

Hackers (1995)

Our third film is from 1995, directed by Iain Softley.

Hackers is about a group of teenage computer hackers, of the ethical / playful type who are driven by curiosity and cause no harm — well, not to anyone who doesn’t deserve it. One of these hackers breaks into the “Gibson” computer system of a high profile company and partially downloads what he thinks is an unimportant file as proof of his success. However this file is actually a disguised worm program, created by the company’s own chief of computer security to defraud the company of millions of dollars. The security chief tries to frame the hackers for various computer crimes to cover his tracks, so the hackers must break back into the system to download the full worm program and reveal the true culprit.

The film was made in the time before Facebook when it was common to have an online identity, or at least an online handle (nick), distinct from the real world. Our teenage hacker protagonists are:

  • Crash Override, real name Dade.
  • Acid Burn, real name Kate.
  • Cereal Killer, Lord Nikon, and Phantom Phreak, real names not given.
  • Joey, the most junior, who doesn’t have a handle yet.

As hackers they don’t have a corporate budget, so they use a variety of personal computers rather than the expensive SGI workstations we saw in the previous films. And since it’s the 1990s, their network connections are made with modems over the analog phone system, and important files will fit on 1.44-megabyte floppy disks.

The Gibson, though, is described as “big iron”, a corporate supercomputer. Again, this was the 1990s, when a supercomputer was a single very big and very expensive computer, not thousands of PC CPUs and GPUs jammed into racks as in the early 21st century. Befitting such an advanced piece of technology, it has a three dimensional file browsing interface which is on display both times the Gibson is hacked.

First run

The first hack starts at about 24 minutes. Junior hacker Joey has been challenged by his friends to break into something important, such as a Gibson. The scene starts with Joey sitting in front of his Macintosh personal computer, reviewing a list of what appear to be logon or network names and phone numbers. The camera flies through a stylised cyberspace representation of the computer network, the city streets, then the physical rooms of the target company (which we will learn is Ellingson Minerals), and finally past a computer operator sitting at a desk in the server room and into the 3D file system. This single “shot” actually switches a few times between the digital and real worlds, a stylistic choice repeated throughout the film. Although never named in the film, this file system is the “City of Text” according to the closing credits.

Joey looks down on the City of Text. Hackers (1995)

The file system is represented as a virtual cityscape of skyscraper-like blocks. The ground plane looks like a printed circuit board with purple traces (lines). The towers are simple box shapes, all the same size, as if constructed from blue tinted glass or acrylic plastic. Each of the four sides and the top shows a column of text in white lettering, apparently the names of directories or files. Because the tower sides are transparent the reverse facing text on the far sides is also visible, cluttering the display.

This 3D file system is the most dynamic of those in this review. Joey flies among the towers rather than walking, with exaggerated banking and tilting as he turns and dives. At ground level we can see some simple line graphics at the left as well as text.

Joey flies through the City of Text, banking as he changes direction. Hackers (1995)

The City of Text is even busier due to animation effects. Highlight bars move up and down the text lists on some panes. Occasionally a list is cleared and redrawn top to bottom, while others cycle between two sets of text. White pulses flow along the purple ground lanes and fly between the towers. These animations do not seem to be interface elements. They could be an indicator of overall activity, with more pulses per second meaning more data being accessed, like the blinking LED on your Ethernet port or disk drive. Or they could be a screensaver, as it was important on the CRT displays of the 1990s not to display a static image for long periods, lest it “burn in” and become permanent.

Next there is a very important camera move, at least for analysing the user interface. So far the presentation has been fullscreen and obviously artificial. Now the camera pulls back slightly to show that this City of Text is what Joey is seeing on the screen of his Macintosh computer. Other shots later in the film will make it clear that this is truly interactive, he is the one controlling the viewpoint.

Joey looks at a particular list of directories/files on one face of a skyscraper. Hackers (1995)

I’ll discuss how this might work later in the analysis section. For now it’s enough to remember that this is a true file browser, the 3D equivalent of the Macintosh Finder or Windows File Explorer.
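As a thought experiment, the mapping this browser implies can be sketched in a few lines of Python: top-level directories become towers on a grid, and each directory’s contents become the text on a tower face. This is purely speculative; the film never specifies how the layout is computed, and the function and field names below are my own illustrative inventions, not anything from the movie.

```python
import os

def city_of_text(root, grid_width=4):
    """Speculative sketch: map a file hierarchy onto the film's
    skyscraper metaphor. Each top-level directory under `root`
    becomes a 'tower' placed left-to-right, top-to-bottom on a
    grid; the directory's contents become the text listed on the
    tower's faces."""
    towers = []
    entries = sorted(
        e for e in os.listdir(root)
        if os.path.isdir(os.path.join(root, e))
    )
    for i, name in enumerate(entries):
        towers.append({
            "name": name,                              # tower label
            "pos": (i % grid_width, i // grid_width),  # grid placement
            "face": sorted(os.listdir(os.path.join(root, name))),
        })
    return towers
```

A real version would of course need navigation, selection, and the preview animations Joey sees, but even this toy mapping hints at why the display gets cluttered: every name in the hierarchy is rendered somewhere, whether or not the user cares about it.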

While Joey is exploring, we cut to the company server room. This unusual activity has triggered an alarm so the computer operator telephones the company security chief at home. At this stage we don’t know that he’s evil, but he does demand to be addressed by his hacker handle “The Plague” which doesn’t inspire confidence. (The alarm itself shows that a superuser / root / administrator account is in use by displaying the password for everyone to see on a giant screen. But we’re not going to talk about that.) 

Joey wants to prove he has hacked the Gibson by downloading a file, but by the ethics of the group it shouldn’t be something valuable. He selects what he thinks will be harmless, the garbage or trash directory on a particular tower. It’s not very clear but there is another column of text to the right which is dimmed out.

Joey selects the GARBAGE directory and a list of contents appears. Hackers (1995)

There’s a triangle to the right of the GARBAGE label indicating that it is a directory, and when it is selected, a second column of text shows the files within it. When Joey selects one of these, the system displays what today would be called a Live Tile in Windows or a File Preview in the Mac Finder. But in this advanced system it’s an elaborate animation of graphics and mathematical notation.

Joey decides this is the file he wants and starts a download. Since he’s dialled in through an old analog phone modem, this is a slow process and will eventually be interrupted when Joey’s mother switches his Macintosh off to force him to get some sleep.

Joey looks at the animation representing the file he has chosen. Hackers (1995)

Physical View

Back in the server room of Ellingson Minerals and while Joey is still searching, the security chief AKA “The Plague” arrives. And here we clearly see that there is also a physical 3D representation of the file system.

The Plague makes a dramatic entrance into the physical City of Text. Hackers (1995)

Just like the virtual display it is made up of rectangular towers made of blue tinted glass or plastic, arranged on a grid pattern like city skyscrapers. Each is about 3 metres high and about 50cm wide and deep. Again matching the virtual display, there is white text on all the visible sides, being updated and highlighted. However there is one noticeable difference, the bottom of each tower is solid black.

What are the towers for? Hackers is from 1995, when hard drives and networked file servers were shoebox- to pizza-box-sized, so one or two would fit into the base of each tower. The physical displays could be just blinkenlights, an impressive but not particularly useful visual display, but in a later shot there’s a technician in the background looking at one of the towers and making notes on a pad, so they are intended to show something useful. My assumption is that each tower displays information about the actual files being stored inside, mirroring the virtual city of text shown online.

When he reaches the operator’s desk, The Plague switches the big wall display to the same 3D virtual file system.

The Plague on the left and the night shift operator watch what Joey is doing on a giant wall screen. Hackers (1995)

He uses an “echo terminal” command to see exactly what Joey is doing, and so sees the same garbage directory and that the file is being copied. We’ll later learn that this seemingly harmless file is actually the worm program created by The Plague, and that discovering it had been copied was a severe shock. Here he arranges for the phone connection to be traced and Joey questioned by his government friends in the US Secret Service (which at the time was responsible for investigating some computer security incidents and crimes), setting in motion the main plot elements.

Tagged: animated, architecture, big screens, busted!, control room, cyan, doorway, drama, eavesdropping, emergency, flashing, flying, glow, hacking, industrial espionage, labeling, monitoring, navigating, orange, purple, security, surveillance, terminal, translucency, translucent display, wall interface

Second run

After various twists and turns our teenage hackers are resolved to hack into the Gibson again to obtain a full copy of the worm program which will prove their innocence. But they also know that The Plague knows they know about the worm, Ellingson Minerals is alerted, and the US Secret Service are watching them. This second hacking run starts at about 1 hour 20 minutes.

The first step is to evade the secret service agents by a combination of rollerblading and hacking the traffic lights. (Scenes like this are why I enjoy the film so much.) Four of our laptop-wielding hackers dial in through public phone booths. The plan is that Crash will look for the file while Acid, Nikon, and Joey will distract the security systems, and they are expecting additional hacker help from around the world.

We see a repeat of the earlier shot flying through the streets and building into the City of Text, although this time on Crash’s Macintosh Powerbook.

Crash enters the City of Text. Hackers (1995)

It seems busier with many more pulses travelling back and forth between towers, presumably because this is during a workday.

The other three start launching malware attacks on the Gibson. Since the hacking attempt has been anticipated, The Plague is in the building and arrives almost immediately.

The Plague walks through the physical City of Text as the attack begins. Hackers (1995)

The physical tower display now shows a couple of blocks with red sides. This could indicate the presence of malware, or just that those sections of the file system are imposing a heavy CPU or IO load due to the malware attacks.

This time The Plague is assisted by a full team of technicians. He primarily uses a “System Command Shell” within a larger display that presumably shows processor and memory usage. It’s not the file system, but has a similar design style and is too cool not to show:

The Plague views system operations on a giant screen, components under attack highlighted in red on the right. Hackers (1995)

Most of the shots show the malware effects and The Plague responding, but Crash is searching for the worm. His City of Text towers show various “garbage” directories highlighted in purple, one after the other.

Crash checks the first garbage directory, in purple. Other possible matches in cyan on towers to the right. Hackers (1995)

What’s happening here? Most likely Crash has typed in a search wildcard string and the file browser is showing the matching files and folders.

Why are there multiple garbage directories? Our desktop GUIs always show a single trashcan, but under the hood there is more than one. A multiuser system needs at least one per user, because otherwise files deleted by Very Important People working with Very Sensitive Information would be visible, or at least their file names visible, to everyone else. Portable storage devices, floppy disks in Hackers and USB drives today, need their own trashcan because the user might still expect to be able to undelete files even after the device has been moved to another computer. For the same reason a networked drive needs its own trashcan that isn’t stored on the connecting computer. So Crash really does have to search for the right garbage directory in this giant system.
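The logic above can be sketched in a few lines. The paths follow modern Unix conventions and the user and volume names are hypothetical, not anything shown in the film:

```python
# Sketch of why a multiuser system accumulates many trash directories.
# Paths follow modern Unix conventions (illustrative only).

def candidate_trash_dirs(users, volumes):
    """Return every trash location a file browser would need to search."""
    dirs = []
    for user in users:                       # one per user, for privacy
        dirs.append(f"/home/{user}/.local/share/Trash")
    for vol in volumes:                      # one per removable/network volume,
        dirs.append(f"{vol}/.Trash")         # so undelete survives unmounting
    return dirs

# Even a tiny example yields three separate directories to search.
print(candidate_trash_dirs(["plague", "joey"], ["/mnt/backup"]))
```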

As hackers from around the world join in, the malware effects intensify. More tower faces, both physical and digital, are red. The entire color palette of the City of Text becomes darker.

Crash flies through the City of Text, a skyscraper under siege. Hackers (1995)

This could be an automatic effect when the Gibson system performance drops below some threshold, or activated by the security team as the digital equivalent of traffic cones around a door. Anyone familiar with the normal appearance of the City of Text can see at a glance that something is wrong and, presumably, that they should log out or at least not try to do anything important.

Crash finds the right file and starts downloading, but The Plague hasn’t been fully distracted and uses his System Command Shell to disconnect Crash’s laptop entirely. Rather than log back in, Crash tells Joey to download the worm and gives him the full path to the correct garbage directory, which for the curious is root/.workspace/.garbage (the leading periods are significant: on Unix-like systems they mark hidden files, so these names would not normally be shown to non-technical users).
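For readers unfamiliar with the convention, a minimal sketch of how dot-files stay hidden, assuming a Unix-style listing (the file names are made up for illustration):

```python
# Unix convention: names beginning with "." are hidden by default,
# which is why root/.workspace/.garbage wouldn't appear in a normal listing.

def visible(names):
    """Filter a directory listing the way a default `ls` would."""
    return [n for n in names if not n.startswith(".")]

listing = ["report.txt", ".workspace", ".garbage", "data"]
print(visible(listing))   # ['report.txt', 'data']
```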

We don’t see how Joey enters this into the file browser but there is no reason it should be difficult. Macintosh Finder windows have a clickable text search box, and both the Ubuntu Desktop Shell and Microsoft Windows start screen will automatically start searching for files and folders that match any typed text.

Joey downloads the worm, this time all of it. The combined malware attacks crash the Gibson. Unfortunately the Secret Service agents arrive just in time to arrest them, but all ends well with The Plague being exposed and arrested and our hacker protagonists released.

Tagged: 3D rendering, animation, architecture, big screens, blue, bright is more, call to action, color cue, command and control, control room, crisis, cyan, dark, defense, flashing, flowing, flying, glow, hacking, industrial espionage, keyboard, mission, motion cue, navigating, nerdsourcing, personal computer, red, red is warning, search, status indicator, threshold alert, translucency, translucent display, trap, trash, wall mounted, yellow

Analysis

How believable is the interface?

The City of Text has two key differences from the other 3D file browsers we’ve seen so far.

  1. It must operate over a network connection, specifically over a phone modem connection, which in the 1990s would be much slower than any Ethernet LAN.
  2. This 3D view is being rendered on personal computers, not specialised 3D workstations.

Despite these constraints, the City of Text remains reasonably plausible.

Would the City of Text require more bandwidth than was available? What effect would we expect from a slow network connection? It’s a problem when copying files, upload or download, but much less so for browsing a file system. The information being passed from the Gibson to the 3D file browser is just a list of names in each directory and a minimal set of attributes for each, not the file contents. In 1995, 2D file browsers on personal computers were already showing icons, small raster images, for each file, which took up more memory than the file names. The City of Text doesn’t, so the listing data would certainly fit in the bandwidth available.
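A back-of-envelope check supports this, using assumed (purely illustrative) numbers for a fast 1995 consumer modem and a generously large directory:

```python
# Back-of-envelope: how long would a 1995 modem take to fetch one
# tower's directory listing? All figures below are assumptions.

MODEM_BPS = 28_800            # a fast consumer modem in 1995
FILES_PER_TOWER = 1_000       # assumed: a very full directory
BYTES_PER_ENTRY = 32          # assumed: name plus a few attribute bytes

listing_bytes = FILES_PER_TOWER * BYTES_PER_ENTRY
seconds = listing_bytes * 8 / MODEM_BPS   # bits over bits-per-second
print(f"{listing_bytes} bytes -> {seconds:.1f} s")   # about 9 seconds
```

Seconds per tower, not minutes: slow, but usable for browsing.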

The flying viewpoint doesn’t require much bandwidth either. There is no avatar or other representation of the user, just an abstract viewpoint. Only 9 numbers are needed to describe where you are and what you’re looking at in 3D space, and predictive techniques developed for games and simulations can reduce the network bandwidth required even more.
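Those 9 numbers are typically a camera position, a look-at point, and an up vector, 3 each. A sketch of how small that packet is, assuming 32-bit floats (the packing format is my assumption, not anything specified in the film):

```python
# The "only 9 numbers" claim: position, look-at point, and up vector,
# packed as 32-bit floats. A sketch, assuming little-endian floats.
import struct

def pack_viewpoint(position, look_at, up):
    """Serialize a flying viewpoint as nine little-endian floats."""
    return struct.pack("<9f", *position, *look_at, *up)

packet = pack_viewpoint((10, 2, 5), (0, 0, 0), (0, 1, 0))
print(len(packet))   # 36 bytes -- trivial even for a modem
```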

Networked file systems and file browsers already existed in 1995, for example FTP and Gopher, although with pure text interfaces rather than 3D or even 2D graphics. The only missing component would be the 3D viewpoint coordinates.

PCs in the 1990s, especially laptops, rarely had any kind of 3D graphics acceleration and would not have been able to run the Jurassic Park or Disclosure 3D file browsers. The City of Text, though, is much less technically demanding even though it displays many more file and folder names.

Notice that there is no hidden surface removal, where the front sides of a 3D object hide those that are further away. There’s no lighting, with everything rendered in flat colors that don’t depend on the direction of the sun or other light sources, and no shadows. There are no images or textures, just straight lines and plain text. And finally everything is laid out on an axis-aligned grid, meaning all the graphics run straight up/down, left/right, or forwards/back, and all the towers and text are the same size. Similar shortcuts were used in 1990s PC games and demo scene animations, such as the original Doom, in which players could look from side to side but not up or down.
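For the curious, the core of such stripped-down rendering is a single perspective divide per point, with no lighting or texture math at all. A hypothetical sketch, with an assumed focal length and a 640×480 screen:

```python
# The cheapest possible 3D: perspective-project a camera-space point
# with one divide -- no lighting, no textures, no hidden-surface test.
# Focal length and screen size are assumptions for illustration.

def project(x, y, z, focal=256, cx=320, cy=240):
    """Map a camera-space point (z > 0, looking down +z) to screen coords."""
    return (cx + focal * x / z, cy - focal * y / z)

print(project(0, 0, 10))   # a point dead ahead lands at screen center
```

A mid-90s CPU could comfortably do this for every tower corner each frame, which is roughly all the City of Text demands.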

I’m not saying that the City of Text on a 1990s PC or laptop would be easy, especially on Joey’s Macintosh LC, but it is plausible.

Alas the worm animation shown when that particular file is selected is not possible. We see fractal graphics and mathematical notation in 3D, and it’s a full screen image rather than a simple file icon. Whether it’s a pre-rendered animation or being generated on the fly, there’s far too much to push through a modem connection, even though at the time “full screen” meant far fewer pixels than it does in the 21st century.

The physical towers were also not possible. Three metre high flat screen displays didn’t exist in 1995, and I don’t see how that many projectors could be installed in the ceiling without interfering with each other.

How well does the interface inform the narrative of the story?

Hackers is a film all about computers and the people who work with them, and therefore must solve the problem (which still exists today) of making what is happening visible and understandable to a non-technical audience. Director Iain Softley said he wanted a metaphorical representation of how the characters perceived the digital world, not a realistic one. Some scenes use stylised 2D graphics and compositing to create a psychedelic look, while the 3D file browser is meant to be a virtual equivalent to the physical city of New York where Hackers is set. At least for some viewers, myself included, it works.

The worm animation also works well. Joey is looking for an interesting file, a trophy, and the animation makes it clear that this is indeed an extraordinary file without needing to show the code.

The physical towers, though, are rather silly. The City of Text is meant to be metaphorical, a mental landscape created by hackers, so we don’t need a physical version.

How well does the interface equip the character to achieve their goals?

The City of Text is very well suited to the character goals, because they are exploring the digital world. Looking cool and having fun are what’s important, not being efficient.

Now if you’ll excuse me, I have a rollerblading lesson before the next review…

Sci-fi Spacesuits: Moving around

Whatever it is, it ain’t going to construct, observe, or repair itself. In addition to protection and provision, suits must facilitate the reason the wearer has dared to go out into space in the first place.

One of the most basic tasks of extravehicular activity (EVA) is controlling where the wearer is positioned in space. The survey shows several types of mechanisms for this. First, if your EVA never needs you to leave the surface of the spaceship, you can go with mountaineering gear or sticky feet. (Or sticky hands.) We can think of maneuvering through space as similar to piloting a craft, but the outputs and interfaces have to be made wearable, like wearable control panels. We might also expect to see some tunnel in the sky displays to help with navigation. We’d also want to see some AI safeguard features, to return the spacewalker to safety when things go awry. (Narrator: We don’t.)

Mountaineering gear

In Stowaway (2021) astronauts undertake unplanned EVAs with carabiners and gear akin to what mountaineers use. This makes some sense, though even this equipment would need to be modified for use with astronauts’ thick gloves.

Stowaway (2021) Drs Kim and Levinson prepare to scale to the propellant tank.

Sticky feet (and hands)

Though it’s not extravehicular, I have to give a shout out to 2001: A Space Odyssey (1968), where we see a flight attendant manage their position in microgravity with special shoes that adhere to the floor. It’s a lovely example of a competent Hand Wave. We don’t need to know how it works because it says, right there, “Grip shoes.” Done. Though props to the actress Heather Downham, who had to make up a funny walk to illustrate that it still isn’t like walking on earth.

2001: A Space Odyssey (1968)
Pan Am: “Thank god we invented the…you know, whatever shoes.”

With magnetic boots, seen in Destination Moon, the wearer simply walks around and manages the slight awkwardness of having to pull a foot up with extra force, and have it snap back down on its own.

Battlestar Galactica added magnetic handgrips to augment the control provided by magnetized boots. With them, Sergeant Mathias is able to crawl around the outside of an enemy vessel, inspecting it. While crawling, she holds grip bars mounted to circles that contain the magnets. A mechanism for turning the magnet off is not seen, but like these portable electric grabbers, it could be as simple as a thumb button.

Iron Man also had his Mark 50 suit form stabilizing suction cups before cutting a hole in the hull of the Q-Ship.

Avengers: Infinity War (2018)

In the electromagnetic version of boots, seen in Star Trek: First Contact, the wearer turns the magnets on with a control strapped to their thigh. Once on, the magnetization seems to be sensitive to the wearer’s walk, automatically lessening when the boot is lifted off. This gives the wearer something of a natural gait. The magnetism can be turned off again to be able to make microgravity maneuvers, such as dramatically leaping away from Borg minions.

Star Trek: Discovery also included this technology, but with what appears to be gestural activation and cool glowing red dots on the sides and back of the heel. The back of each heel has a stack of red lights that count down to when they turn off, as, I guess, a warning to anyone around them that they’re about to be “air” borne.

Quick “gotcha” aside: neither Destination Moon nor Star Trek: First Contact bothers to explain how characters are meant to be able to kneel while wearing magnetized boots. Yet this very thing happens in both films.

Destination Moon (1950): Kneeling on the surface of the spaceship.
Star Trek: First Contact (1996): Worf rises from operating the maglock to defend himself.

Controlled Propellant

If your extravehicular task has you leaving the surface of the ship and moving around space, you likely need a controlled propellant. This is seen only a few times in the survey.

In the film Mission to Mars, the manned maneuvering unit, or MMU, is based loosely on NASA’s MMU. A nice thing about the device is that, unlike the other controlled propellant interfaces, we can actually see some of the interaction and not just the effect. The interfaces are subtly different in that the Mission to Mars spacewalkers travel forward and backward by angling the handgrips forward and backward rather than with a joystick on an armrest. This seems like a closer mapping, but also seems more prone to error from accidental touches or bumping into something.

The plus side is an interface that is much more cinegenic, where the audience is more clearly able to see the cause and effect of the spacewalker’s interactions with the device.

If you have propellant in a Mohs 4 or 5 film, you might need to acknowledge that propellant is a limited resource. Over the course of the same (heartbreaking) scene shown above, we see an interface where one spacewalker monitors his fuel, and another where a spacewalker realizes that she has traveled as far as she can with her MMU and still return to safety.

Mission to Mars (2000): Woody sees that he’s out of fuel.

For those wondering, Michael Burnham’s flight to the mysterious signal in that pilot uses propellant, but is managed and monitored by controllers on Discovery, so it makes sense that we don’t see any maneuvering interfaces for her. We could dive in and review the interfaces the bridge crew uses (and try to map that onto a spacesuit), but we only get snippets of these screens and see no controls.

Iron Man’s suits employ some Phlebotinum propellant that lasts forever, fits inside his tailored suit, and is powerful enough to achieve escape velocity.

Avengers: Infinity War (2018)

All-in-all, though sci-fi seems to understand the need for characters to move around in spacesuits, very little attention is given to the interfaces that enable it. The Mission to Mars MMU is the only one with explicit attention paid to it, and that’s quite derived from NASA models. It’s an opportunity for filmmakers, should the needs of the plot allow, to give this topic some attention.

Sci-fi Spacesuits: Biological needs

Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons to not show astronauts their own biometric data while on stressful extravehicular missions, but there is the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.

Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.

There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.

The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.

Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.

Remote monitoring of people in spacesuits is common enough to be a trope, and has been discussed already in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.

Crowe’s medical monitor in Aliens (1986).

There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit functioning.

Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.

Emergency deployment

One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Star-Lord’s helmet.

If such tech were available, you’d imagine that it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don’t see them. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.

What do we see in the real world?

Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.

The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides the life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.

The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.

Hey, isn’t the text on this thing backwards? Yes, because astronauts can’t look down from inside their helmets, and must view these controls via a wrist mirror. More on this later.

The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held closely to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water with exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the amount to which his body is cooled, with the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.

The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.


Back to sci-fi

So, we do see temperature and oxygen controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.

Panther Glove Guns

As a rule I don’t review lethal weapons on scifiinterfaces.com. The Panther Glove Guns appear to be remote-bludgeoning beams, so this kind of sneaks by. Also, I’ll confess in advance that there’s not a lot that affords critique.

We first see the glove guns in the 3D printer output with the kimoyo beads for Agent Ross and the Dora Milaje outfit for Nakia. They are thick weapons that fit over Shuri’s hands and wrists. I imagine they would be very useful to block blades and even disarm an opponent in melee combat, but we don’t see them in use this way.

The next time we see them, Shuri is activating them, though we don’t see how. The panther heads thrust forward, their mouths open wide, and the “neck” glows a hot blue. When the door before her opens, she immediately raises them at the guards (who are loyal to the usurper Killmonger) and fires.

A light-blue beam shoots out of the mouths of the weapons, knocking the guards off the platform. Interestingly, one guard is lifted up and thrown to his 4 o’clock. The other is lifted up and thrown to his 7 o’clock. It’s not clear how Shuri instructs the weapons to have different and particular knock-down effects. But we’ve seen all over Black Panther that brain-computer interfaces (BCI) are a thing, so it’s diegetically possible she’s simply imagining where she wants them to be thrown, and then pulling a trigger or clenching her fist around a rod or just thinking “BAM!” to activate. The force-bolt strikes them right where it needs to so that, like a billiard ball, they get knocked in the desired direction. As with all(?) brain-computer interfaces, there is not an interaction to critique.

After she dispatches the two guards, still wearing the gloves, she throws a control bead onto the Talon. The scene is fast and blurry, and it’s unclear how she holds and releases the bead from the glove. Was it in the panther’s jaw the whole time? Could be another BCI, of course. She just thought about where she wanted it, flung her arm, and let the AI decide when to release it for perfect targeting. The Talon is large and she doesn’t seem to need a great deal of accuracy with the bead, but for more precise operations, the AI targeting would make more sense than, say, letting the panther heads disintegrate on command so she would have freedom of her hands.

Later, after Killmonger dispatches the Dora Milaje, Shuri and Nakia confront him by themselves. Nakia gets in a few good hits, but is thrown from the walkway. Shuri throws some more bolts his way, though he doesn’t appear to even notice. I note that the panther gloves would be very difficult to aim since there’s no continuous beam providing feedback, and she doesn’t have a gun sight to help her. So, again—and I’m sorry because it feels like cheating—I have to fall back to an AI assist here. Otherwise it doesn’t make sense.

Then Shuri switches from one blast at a time to a continuous beam. It seems to be working, as Killmonger kneels from the onslaught.

This is working! How can I eff it up?

But then for some reason she—with a projectile weapon that is actively subduing the enemy and keeping her safe at a distance—decides to close the distance, allowing Killmonger to knock the glove guns with a spear tip, free himself, and destroy the gloves with a clutch of his Panther claws. I mean, I get that she was furious, but I expected better tactics from the chief nerd of Wakanda. Thereafter, the gloves spark when she tries to fire them. So ends this print of the Panther Glove Guns.

As with all combat gear, it looks cool for it to glow, but we don’t want coolness to help an enemy target the weapon. So if it was possible to suppress the glow, that would be advisable. It might be glowing just for the intimidation factor, but for a projectile weapon that seems strange.

The panther head shapes remind an opponent that she is royalty (note no other Wakandan combatants have ranged weapons) and fighting in Bast’s name, which, if you’re in the business of theocratic warfare, is fine, I guess.

It’s worked so well in the past. More on this aspect later.

So, if you buy the brain-computer interface interpretation, AI targeting assist, and theocratic design, these are fine, with the cinegenic exception of the attention-drawing glow.


Black History Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

When The Watchmen series opened with the Tulsa Race Massacre, many people were shocked to learn that this event was not fiction, reminding us just how much of black history is erased and whitewashed for the comfort of white supremacy (and fuck that). Today marks the beginning of Black History Month, and it’s a good opportunity to look back and (re)learn of the heroic figures and stories of both terror and triumph that fill black struggles to have their citizenship and lives fully recognized.

Library of Congress, American National Red Cross Photograph Collection

There are lots of events across the month. The African American History Month site is a collaboration of several government organizations (and it feels so much safer to share such a thing now that the explicitly racist administration is out of office and facing a second impeachment):

  • The Library of Congress
  • National Archives and Records Administration
  • National Endowment for the Humanities
  • National Gallery of Art
  • National Park Service
  • Smithsonian Institution
  • United States Holocaust Memorial Museum

The site, https://www.africanamericanhistorymonth.gov/, has a number of resources for you, including images, video, and a calendar of events.

Today we can take a moment to remember and honor the Greensboro Four.

On this day, February 1, 1960: Through careful planning and enlisting the help of a local white businessman named Ralph Johns, four Black college students—Ezell A. Blair, Jr., Franklin E. McCain, Joseph A. McNeil, David L. Richmond—sat down at a segregated lunch counter at Woolworth’s in Greensboro, North Carolina, and politely asked for service. Their request was refused. When asked to leave, they remained in their seats.

Police arrived on the scene, but were unable to take action due to the lack of provocation. By that time, Ralph Johns had already alerted the local media, who had arrived in full force to cover the events on television. The Greensboro Four stayed put until the store closed, then returned the next day with more students from local colleges.

Their passive resistance and peaceful sit-down demand helped ignite a youth-led movement to challenge racial inequality throughout the South.

A last bit of amazing news to share today is that Black Lives Matter has been nominated for the Nobel Peace Prize! The movement was co-founded by Alicia Garza, Patrisse Cullors, and Opal Tometi in response to the acquittal of Trayvon Martin’s killer, got a major boost with the outrage following the murder of George Floyd, and has grown to a global movement working to improve the lives of the entire black diaspora. May it win!

Wakandan Med Table

When Agent Ross is shot in the back during Klaue’s escape from the Busan field office, T’Challa stuffs a kimoyo bead into the wound to staunch the bleeding, but the wounds are still serious enough that the team must bring him back to Wakanda for healing. They float him to Shuri’s lab on a hover-stretcher.

Here Shuri gets to say the juicy line, “Great. Another white boy for us to fix. This is going to be fun.”
Sorry about the blurry screen shot, but this is the most complete view of the bay.

The hover-stretcher gets locked into place inside a bay. The bay is a small room in the center of Shuri’s lab, open on two sides. The walls are covered in a gray pattern suggesting a honeycomb. A bas-relief volumetric projection displays some medical information about the patient like vital signs and a subtle fundus image of the optic nerve.

Shuri holds her hand flat and raises it above the patient’s chest. A volumetric display of 9 of his thoracic vertebrae rises up in response. One of the vertebrae is highlighted in bright red. A section of the wall display shows the same information in 2D, cyan with orange highlights. That display section slides out from the wall to draw observers’ attention. Hexagonal tiles flip behind the display for some reason, but produce no change in the display.

Shuri reaches her hands up to the volumetric vertebrae, pinches her forefingers and thumbs together, and pulls them apart. In response, the space between the vertebrae expands, allowing her to see the top and bottom of the body of each vertebra.

She then turns to the wall display and, reading something there, tells the others that he’ll live. Her attention is pulled away with the arrival of W’Kabi, bringing news of Killmonger. We do not see her initiate a treatment in the scene. We have to presume that she did it between cuts. (There would have to be a LOT of confidence in an AI’s ability to diagnose and determine treatment before they would let Griot do that without human input.)

We’ll look more closely at the hover-stretcher display in a moment, but for now let’s pause and talk about the displays and the interaction of this beat.

A lab is not a recovery room

This doesn’t feel like a smart environment to hold a patient. We can bypass a lot of the usual hospital concerns of sterilization (it’s a clean room) or readily-available equipment (since they are surrounded by programmable vibranium dust controlled by an AGI) or even risk of contamination (something something AI). I’m mostly thinking about the patient having an environment that promotes healing: Natural light, quiet or soothing music, plants, furnishings, and serene interiors. Having him there certainly means that Shuri’s team can keep an eye on him, and provide some noise that may act as a stimulus, but don’t they have actual hospital rooms in Wakanda? 

Why does she need to lift it?

The VP starts in his chest, but why? If it had started out as a “translucent skin” illusion, like we saw in Lost in Space (1998, see below), then that might make sense. She would want to lift it to see it in isolation from the distracting details of the body. But it doesn’t start this way; it starts embedded within him?!

The “translucent skin” display from Lost in Space (1998)

It’s a good idea to have a representation close to the referent, to make for easy comparison between them. But to start the VP within his opaque chest just doesn’t make sense.

This is probably the wrong gesture

In the gestural interfaces chapter of Make It So, I described a pidgin that has been emerging in sci-fi, consisting of seven “words.” The last of these is “Pinch and Spread to Scale.” Now, there is nothing sacred about this gestural language, but it has echoes in the real world as well. For one example, Google’s VR painting app Tilt Brush uses “spread to scale.” So as an increasingly common norm, it should only be violated with good reason. In Black Panther, Shuri uses spread to mean “spread these out,” even though she starts the gesture near the center of the display and pulls out at a 45° angle. This speaks much more to scaling than to spreading. It’s a mismatch, and I can’t see a good reason for it. Even if it’s “what works for her,” gestural idiolects hinder communities of practice, and so should be avoided.

Better would have been pinching on one end of the spine and hooking her other index finger to spread it apart without scaling. The pinch is quite literal for “hold” and the hook quite literal for “pull.” This would let scale be scale, and “hook-pull” to mean “spread components along an axis.”

Model from https://justsketch.me/

If we were stuck with the footage of Shuri doing the scale gesture, then it would have made more sense to scale the display, and fade the white vertebrae away so she could focus on the enlarged, damaged one. She could then turn it with her hand to any arbitrary orientation to examine it.

An object highlight is insufficient

It’s quite helpful for an interface that can detect anomalies to help focus a user’s attention there. The red highlight for the damaged vertebra certainly helps draw attention. Where’s the problem? Ah, yes. There’s the problem. But it’s more helpful for the healthcare worker to know the nature of the damage, what the diagnosis is, to monitor the performance of the related systems, and to know how the intervention is going. (I covered these in the medical interfaces chapter of Make It So, if you want to read more.) So yes, we can see which vertebra is damaged, but what is the nature of that damage? A slipped disc should look different than a bone spur, which should look different than one that’s been cracked or shattered from a bullet. The simple red highlight helps for an instant read in the scene, but fails on close inspection and would be insufficient in the real world.

This is not directly relevant to the critique, but interesting that spinal VPs have been around since 1992. Star Trek: The Next Generation, “Ethics” (Season 5, Episode 16).

Put critical information near the user’s locus of attention

Why does Shuri have to turn and look at the wall display at all? Why not augment the volumetric projection with the data that she needs? You might worry that it could obscure the patient (and thereby hinder direct observations) but with an AGI running the show, it could easily position those elements to not occlude her view.

Compare this display, which puts a waveform directly adjacent to the brain VP. Firefly, “Ariel” (Episode 9, 2002).

Note that Shuri is not the only person in the room interested in knowing the state of things, so a wall display isn’t bad, but it shouldn’t be the only augmentation.

Lastly, why does she need to tell the others that Ross will live? If there were significant risk of his death, there should be unavoidable environmental signals: klaxons or medical alerts. So unless we are to believe T’Challa has never encountered a single medical emergency before (even in media), this is a strange thing for her to have to say. Of course we understand she’s really telling us in the audience that we don’t need to wonder about this plot development any more, but it would be better, diegetically, if she had confirmed the time-to-heal, like, “He should be fine in a few hours.”

Alternatively, it would be hilarious turnabout if the AI Griot had simply not been “trained” on data that included white people, and “could not see him,” which is why she had to manually manage the diagnosis and intervention, but that would have massive impact on the remote piloting and other scenes, so isn’t worth it. Probably.

Thoughts toward a redesign

So, all told, this interface and interaction could be much better fit-to-purpose. Clarify the gestural language. Lose the pointless flipping hexagons. Simplify the wall display for observers to show vitals, diagnosis and intervention, as well as progress toward the goal. Augment the physician’s projection with detailed, contextual data. And though I didn’t mention it above, of course the bone isn’t the only thing damaged, so show some of the other damaged tissues, and some flowing, glowing patterns to show where healing is being done along with a predicted time-to-completion.

Stretcher display

Later, when Ross is fully healed and wakes up, we see a shot of the med table from above. Lots of cyan and orange, and *typography shudder* stacked type. Orange outlines seem to indicate controls, though they bear symbols rather than full labels, which we know are better for learnability and infrequent use. (Linguist nerds: Yes, Wakandan is alphabetic rather than logographic.)

These feel mostly like FUIgetry, with the exception of a subtle respiration monitor on Ross’ left. But it shows current state rather than tracked over time, so still isn’t as helpful as it could be.

Then when Ross lifts his head, the hexagons begin to flip over, disabling the display. What? Does this thing only work when the patient’s head is in the exact right space? What happens when they’re coughing, or convulsing? Wouldn’t a healthcare worker still be interested in the last-recorded state of things? This “instant-off” makes no sense. Better would have been just to let the displays fade to a gray to indicate that it is no longer live data, and to have delayed the fade until he’s actually sitting up.
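To make the critique concrete, here is a minimal sketch of that “stale, not gone” policy: keep the last-recorded readings visible and only mark them stale after a grace period, rather than blanking the instant the patient moves. The class name, grace period, and readings below are all invented for illustration; nothing here comes from the film.

```python
import time

class VitalsDisplay:
    """Sketch of a 'stale, not gone' medical display policy.

    Instead of blanking the moment the patient moves out of sensor
    range, keep the last-recorded readings visible, and only mark
    them stale (e.g., fade to gray) after a grace period.
    """
    GRACE_SECONDS = 5.0  # assumed delay before marking data stale

    def __init__(self):
        self.last_reading = None
        self.last_reading_at = None

    def update(self, reading=None, now=None):
        """Ingest a new reading (or None if the sensor lost contact)."""
        now = time.monotonic() if now is None else now
        if reading is not None:
            self.last_reading = reading
            self.last_reading_at = now
        return self.render(now)

    def render(self, now):
        """Return (state, reading): 'empty', 'live', or 'stale'."""
        if self.last_reading is None:
            return ("empty", None)
        stale = (now - self.last_reading_at) > self.GRACE_SECONDS
        # A real display would fade color here; we just tag the state.
        return ("stale" if stale else "live", self.last_reading)
```

The key property is that a healthcare worker glancing over always sees the last-known numbers, visually flagged as no longer live, instead of a blank panel.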

All told, the Wakandan medical interfaces are the worst of the ones seen in the film. Lovely, and good for quick narrative hit, but bad models for real-world design, or even close inspection within the world of Wakanda.


MLK Day Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Today is Martin Luther King Day. Normally there would be huge gatherings and public speeches about his legacy and the current state of civil rights. But the pandemic is still raging, and with the Capitol in Washington, D.C. having seen just last week an armed insurrection by supporters of outgoing and pouty loser Donald Trump (in case that WP article hasn’t been moved yet, here’s the post under its watered-down title), there are worries about additional racist terrorism and violence.

So today we celebrate virtually, by staying at home, re-experiencing his speeches and letters, and listening to the words of black leaders and prominent thinkers all around us, reminding us of the arc of the moral universe, and all the work it takes to bend it toward justice.

With the Biden team taking the reins on Wednesday, and Kamala Harris as our first female Vice President of color, things are looking brighter than they have in 4 long, terrible years. But Trump would have gotten nowhere if there hadn’t been a voting bloc and party willing to indulge his racist fascism. There’s still much more to do to dismantle systemic racism in the country and around the world. Let’s read, reflect, and use whatever platforms and resources we are privileged to have, and act.

Okoye’s grip shoes

Like so much of the tech in Black Panther, this wearable battle gear is quite subtle, but critical to the scene, and much more than it seems at first. When Okoye and Nakia are chasing Klaue through the streets of Busan, South Korea, she realizes she would be better positioned on top of their car than within it.

She holds one of her spears out of the window, stabs it into the roof, and uses it to pull herself out on top of the swerving, speeding car. Once there, she places her feet into position, and the moment the sole of her foot touches the roof, it glows cyan for a moment.

She then holds onto the stuck spear to stabilize herself, rears back with her other spear, and throws it forward through the rear window and windshield of some minions’ car, where it sticks in the road before them. Their car strikes the spear and gets crushed. It’s a kickass moment in a film of kickass moments. But by all means let’s talk about the footwear.

Now, the effect the shoe has in the world of the story is not made explicit. But we can guess, given the context, that we are meant to believe the shoes grip the car roof, giving her a firm enough anchor to stay on top of the car and not tumble off when it swerves.

She can’t just be stuck

I have never thrown a javelin or a hyper-technological vibranium spear. But Mike Barber, PhD scholar in Biomechanics at Victoria University and Australian Institute of Sport, wrote this article about the mechanics of javelin throwing, and it seems that throwing force comes not just from the sheer strength of the rotator cuff. Rather, the thrower builds force across their entire body and whips the momentum around their shoulder joint.

 Ilgar Jafarov, CC BY-SA 4.0, via Wikimedia Commons

Okoye is a world-class warrior, but doesn’t have superpowers, so…while I understand she does not want the car to yank itself from underneath her with a swerve, it seems that being anchored in place, like some Wakandan air tube dancer, will not help her with her mighty spear throwing. She needs to move.

It can’t just be manual

Imagine being on a mechanical bull jerking side to side—being stuck might help you stay upright. But imagine it jerking forward suddenly, and you’d wind up on your butt. If it jerked backwards, you’d be thrown forward, and it might be much worse. All are possibilities in the car chase scenario.

If those jerking motions happened to Okoye faster than she could react and release her shoes, it could be disastrous. So it can’t be a thing she needs to manually control. Which means it needs to be some blend of manual, agentive, and assistant. Autonomic, maybe, to borrow the term from physiology?

So…

To really be of help, it has to…

  • monitor the car’s motion
  • monitor her center of balance
  • monitor her intentions
  • predict the future motions of the cars
  • handle all the cybernetics math (in the Norbert Wiener sense, not the sci-fi sense)
  • know when it should just hold her feet in place, and when it should signal for her to take action
  • know what action she should ideally take, so it knows what to nudge her to do

These are no mean feats, especially in real-time. So, I don’t see any explanation except…

An A.I. did it.

AGI is in the Wakandan arsenal (cf. Griot helping Ross), so this is credible given the diegesis, but I did not expect to find it in shoes.

An interesting design question is how it might deliver warning signals about predicted motions. Is it tangible, like vibration? Or a mild electrical buzz? Or a writing-to-the-brain urge to move? The movie gives us no clues, but if you’re up for a design challenge, give it a speculative design pass.
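If you do take up that challenge, the hold/release/nudge decision at the heart of the loop can be sketched as a per-tick control function. This is purely speculative: every signal name and threshold below is invented, and a real controller would likely be a trained model rather than three if-statements.

```python
# Toy sketch of an "autonomic" grip controller: every tick it fuses
# vehicle motion and the wearer's balance, predicts the next jolt,
# and decides whether to hold the feet or cue the wearer to move.
# All thresholds and signal names are invented for illustration.

HOLD, RELEASE, NUDGE = "hold", "release", "nudge"

def grip_decision(lateral_accel, fore_aft_accel, balance_offset,
                  predicted_jolt, reaction_window=0.15):
    """Decide the shoe's action for one control tick.

    lateral_accel / fore_aft_accel: car acceleration in g.
    balance_offset: wearer's center-of-mass drift, 0 = centered.
    predicted_jolt: seconds until the next predicted large jolt.
    """
    # A fore/aft jerk the wearer can't react to in time: nudge her
    # to ride it rather than staying bolted at the ankles.
    if abs(fore_aft_accel) > 0.5 and predicted_jolt < reaction_window:
        return NUDGE
    # Sideways swerve with the wearer centered: just hold fast.
    if abs(lateral_accel) > 0.3 and abs(balance_offset) < 0.2:
        return HOLD
    # Wearer deliberately shifting weight (e.g., winding up a throw):
    # let the foot go.
    if abs(balance_offset) >= 0.2:
        return RELEASE
    return HOLD
```

Even this toy version makes the point: the decision has to be re-made many times per second, on predictions as well as measurements, which is why it can’t be left to manual control.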

Wearable heuristics

As part of my 2014 series about wearable technologies in sci-fi, I identified a set of heuristics we can use to evaluate such things. A quick check against those shows that the shoes fare well. They are quite sartorial, and look like shoes, so they are social as well. As a brain interface, they are supremely easy to access and use. Two of the heuristics raise questions, though.

  1. Wearables must be designed so they are difficult to accidentally activate. It would have been very inconvenient for Okoye to find herself stuck to the surface of Wakanda while trying to chase Killmonger later in the film, for example. It would be safer to ensure deliberateness with some mode-confirming physical gesture, but there’s no evidence of it in the movie.
  2. Wearables should have apposite I/O. The soles glow. Okoye doesn’t need that information. I’d say in a combat situation it’s genuinely bad design to require her to look down to confirm any modes of the shoes. They’re worn. She will immediately feel whether her shoes are fixed in place. I can’t name exactly how an enemy might use the knowledge of whether she is stuck in place, but on general principle, the less information we give to the enemy, the safer she’ll be. So if this were real-world, we would seek to eliminate the glow. That said, we know that undetectable interactions are not cinegenic in the slightest, so for the film this is a nice “throwaway” addition to the cache of amazing Wakandan technology.

Black Georgia Matters and Today is the Day

Each post in the Black Panther review is followed by actions that you can take to support black lives.

Today is the last day in the Georgia runoff elections. It’s hard to overstate how important this is. If Ossoff and Warnock win, the future of the country has a much better likelihood of taking Black Lives Matter (and lots of other issues) more seriously. Actual progress might be made. Without it, the obstructionist and increasingly-frankly-racist Republican party (and Moscow Mitch) will hold much of the Biden-Harris administration back. If you know of any Georgians, please check with them today to see if they voted in the runoff election. If not—and they’re going to vote Democrat—see what encouragement and help you can give them.

Some ideas…

  • Pay for a ride there and back remotely.
  • Buy a meal to be delivered for their family.
  • Make sure they are protected and well-masked.
  • Encourage them to check their absentee ballot, if they cast one, here. https://georgia.ballottrax.net/voter/
  • If their absentee ballot has not been registered, they can go to the polls and tell the workers there that they want to cancel their absentee ballot and vote in person. Help them know their poll at My Voter Page: https://www.mvp.sos.ga.gov/MVP/mvp.do

This vote matters, matters, matters.

Who did it better? Santa Claus edition

I presume my readership are adults. I honestly cannot imagine this site has much to offer the 3-to-8-year-old. That said, if you are less than 8.8 years old, be aware that reading this will land you FIRMLY on the naughty list. Leave before it’s too late. Oooh, look! Here’s something interesting for you.


For those who celebrate Yule (and the very hybridized version of the holiday that I’ll call Santa-Christmas to distinguish it from Jesus-Christmas or Horus-Christmas), it’s that one time of year where we watch holiday movies. Santa features in no small number of them, working against the odds to save Christmas and Christmas spirit from something that threatens it. Santa accomplishes all that he does by dint of holiday magic, but increasingly, he has magic-powered technology to help him. These technologies are different for each movie in which they appear, with different sci-fi interfaces, which raises the question: Who did it better?

Unraveling this stands to be even more complicated than usual sci-fi fare.

  • These shows are largely aimed at young children, who haven’t developed the critical thinking skills to doubt the core premise, so the makers don’t have much pressure to present wholly-believable worlds. The makers also enjoy putting in some jokes for adults that are non-diegetic and confound analysis.
  • Despite the fact that these magical technologies are speculative just as in sci-fi, makers cannot presume that their audience are sci-fi fans who are familiar with those tropes. And things can’t seem too technical.
  • The sci in this fi is magical, which allows makers to do all-sorts of hand-wavey things about how it’s doing what it’s doing.
  • Many of the choices are whimsical and serve to reinforce core tenets of the Santa Claus mythos rather than any particular story or worldbuilding purpose.

But complicated-ness has rarely cowed this blog’s investigations before, so why let a little thing like holiday magic do it now?

Ho-Ho-hubris!

A Primer on Santa

I have readers from all over the world. If you’re from a place that does not celebrate the Jolly Old Elf, a primer should help. And if you’re from a non-USA country, your Saint Nick mythos will be similar but not the same one that these movies are based on, so a clarification should help. To that end, here’s what I would consider the core of it.

Santa Claus is a magical, jolly, heavyset old man with white hair, mustache, and beard who lives at the North Pole with his wife, Mrs. Claus. The two are almost always caucasian. He can alternately be called Kris Kringle, Saint Nick, Father Christmas, or Klaus. The Clement Clarke Moore poem calls him a “jolly old elf.”

He is aware of the behavior of children, and tallies their good and bad behavior over the year, ultimately landing them on the “naughty” or “nice” list. Santa brings the nice ones presents. (The naughty ones are canonically supposed to get coal in their stockings, though in all my years I have never heard of any kids actually getting coal in lieu of presents.) Children also hang special stockings, often on a mantle, to be filled with treats or smaller presents. Adults encourage children to be good in the fall to ensure they get presents. As December approaches, children write letters to Santa telling him what presents they hope for.

Santa and his elves read the letters and make all the requested toys by hand in a workshop. Then, the evening of 24 DEC, he puts all the toys in a large sack, and loads it into a sleigh led by 8 flying reindeer. Most of the time there is a ninth reindeer named Rudolph up front, with a glowing red nose. Santa dresses in a warm red suit fringed with white fur, big black boots, thick black belt, and a stocking hat with a furry ball at the end. Over the evening, as children sleep, he delivers the presents to their homes, where he places them beneath the Christmas tree for them to discover in the morning. Families often leave out cookies and milk for Santa to snack on, and sometimes carrots for the reindeer. Santa often tries to avoid detection for reasons that are diegetically vague.

There is no single source of truth for this mythos, though the current core text might be the 1823 C.E. poem, “A Visit from St. Nicholas” by Clement Clarke Moore. Visually, Santa’s modern look is often traced back to the depictions by Civil War cartoonist Thomas Nast, which the Coca-Cola Corporation built upon for their holiday advertisements in 1931.

Both these illustrations are by Nast.

There are all sorts of cultural conversations to have about normalizing a magical panopticon, what effect hiding the actual supply chain has, and what perpetuating this myth trains children for; but for now let’s stick to evaluating the interfaces in terms of Santa’s goals.

Santa’s goals

Given all of the above, we can say that the following are Santa’s goals.

  • Sort kids by behavior as naughty or nice
    • Many tellings have him observing actions directly
    • Manage the lists of names, usually on separate lists
  • Manage letters
    • Reading letters
    • Sending toy requests to the workshop
    • Storing letters
  • Make presents
  • Travel to kids’ homes
    • Find the most-efficient way there
    • Control the reindeer
    • Maintain air safety
      • Avoid air obstacles
    • Find a way inside and to the tree
    • Enjoy the cookies / milk
  • Deliver all presents before sunrise
  • For each child:
    • Know whether they are naughty or nice
    • If nice, match the right toy to the child
    • Stage presents beneath the tree
  • Avoid being seen

We’ll use these goals to contextualize the Santa interfaces.

This is the Worst Santa, but the image is illustrative of the weather challenges.

Typical Challenges

Nearly every story tells of Santa working with other characters to save Christmas. (The metaphor that we have to work together to make Christmas happen is appreciated.) The challenges in the stories can be almost anything, but often include…

  • Inclement weather (usually winter, but Santa is a global phenomenon)
  • Air safety
    • Air obstacles (Planes, helicopters, skyscrapers)
  • Ingress/egress into homes
  • Home security systems / guard dogs

The Contenders

Imdb.com lists 847 films tagged with the keyword “santa claus,” which is far too many to review. So I looked through “best of” lists (two are linked below) and watched those films for interfaces. There weren’t many. I even had to blend CGI and live action shows, which I’m normally hesitant to do. As always, if you know of any additional shows that should be considered, please mention them in the comments.

https://editorial.rottentomatoes.com/guide/best-christmas-movies/
https://screenrant.com/best-santa-claus-holiday-movies-ranked/

After reviewing these films, the ones with Santa interfaces came down to four, presented below in chronological order.

The Santa Clause (1994)

This movie deals with the lead character, Scott Calvin, inadvertently taking on the “job” of Santa Claus. (If you’ve read Piers Anthony’s Incarnations of Immortality series, this plot will feel quite familiar.)

The sleigh he inherits has a number of displays that are largely unexplained, but little Charlie figures out that the center console includes a hot chocolate and cookie dispenser. There is also a radar, and far away from it, push buttons for fog, planes, rain, and lightning. There are several controls with Christmas bell icons associated with them, but the meaning of these is unclear.

Santa’s hat in this story has headphones and the ball has a microphone for communicating with elves back in the workshop.

This is the oldest of the candidates. Its interfaces are quite sterile and “tacked on” compared to the others, but were novel for their time.

The Santa Clause on imdb.com

Fred Claus (2007)

This movie tells the story of Santa’s ne’er-do-well brother Fred, who has to work in the workshop for one season to work off bail money. While there, he winds up helping forestall a foreclosure engineered by an underhanded supernatural efficiency expert, and un-estranging himself from his family. A really nice bit in this critically-panned film is that Fred helps Santa understand that there are no bad kids, just kids in bad circumstances.

Fred is taken to the North Pole in a sled with switches that are very reminiscent of the ones in The Santa Clause. A funny touch is the “fasten your seatbelt” sign like you might see in a commercial airliner. The use of Lombardic Capitals font is a very nice touch given that much of modern Western Santa Claus myth (and really, many of our traditions) come from Germany.

The workshop has an extensive pneumatic tube system for getting letters to the right crafts-elf.

This chamber is where Santa is able to keep an eye on children. (Seriously panopticony. They have no idea they’re being surveilled.) Merely by reading the name and address of a child, Santa summons a volumetric display of that child within the giant snowglobe. The naughtiest children’s names are displayed on a digital split-flap display, including their greatest offenses. (The nicest are as well, but we don’t get a close up of it.)

The final tally is put into a large book that one of the elves manages from the sleigh while Santa does the actual gift-distribution. The text in the book looks like it was printed from a computer.

Fred Claus on imdb.com

Arthur Christmas (2011)

In this telling, the Santa job is passed down patrilineally. The oldest Santa, GrandSanta, is retired. The dad, Malcolm, is the current acting Santa, and he has two sons. One is Steve, a by-the-numbers type into military efficiency and modern technology. The other son, Arthur, is an awkward fellow who has a semi-disposable job responding to letters. Malcolm currently pilots a massive mile-wide spaceship from which ninja elves do the gift distribution. They have a lot of tech to help them do their job. The plot involves Arthur working with Grandsanta, using his old sleigh to get a last forgotten gift to a young girl before the sun rises.

To help manage loud pets in the home who might wake up sleeping people, this gun has a dial for common pets that delivers a treat to distract them.

Elves have face scanners which determine each kid’s naughty/nice percentage. The elf then enters this into a stocking-filling gun, which affects the contents in some unseen way. A sweet touch is when one elf scans a kid who is read as quite naughty, the elf scans his own face to get a nice reading instead.

The S-1 is the name of the spaceship sleigh at the beginning (at the end it is renamed after Grandsanta’s sleigh). Its bridge is loaded with controls, volumetric displays, and even a Little Tree air freshener. It has a cloaking display on its underside which is strikingly similar to the MCU S.H.I.E.L.D. helicarrier cloaking. (And this came out the year before The Avengers, I’m just sayin’.)

The north pole houses the command-and-control center, which Steve manages. Thousands of elves manage workstations here, and there is a huge shared display for focusing and informing the team at once when necessary. Smaller displays help elf teams manage certain geographies. Its interfaces fall to comedy and trope, mostly, but are germane to the story beats.

One of the crisis scenarios that this system helps manage is for a “waker,” a child who has awoken and is at risk of spying Santa.

Grandsanta’s outmoded sleigh is named Eve. Its technology is much more from the early 20th century, with switches and dials, buttons and levers. It’s a bit janky and overly complex, but gets the job done.

One notable control on S-1 is this trackball with dark representations of the continents. It appears to be a destination selector, but we do not see it in use. It is remarkable because it is very similar to one of the main interface components in the next candidate movie, The Christmas Chronicles.

Arthur Christmas on imdb.com

The Christmas Chronicles (2018)

The Christmas Chronicles follows two kids who stow away on Santa’s sleigh on Christmas Eve. His surprise when they reveal themselves causes him to lose his magical hat and wreck his sleigh. They help him recover the items, finish his deliveries, and (well, of course) save Christmas just in time.

Santa’s sleigh enables him to teleport to any place on earth. The main control is a trackball location selector. Once he spins it and confirms that the city readout looks correct, he can press the “GO” button for a portal to open in the air just ahead of the sleigh. After traveling in an aurora borealis realm filled with famous landmarks for a bit, another portal appears. They pass through this and appear at the selected location. A small magnifying glass above the selection point helps with precision.

Santa wears a watch that measures not time, but Christmas spirit, which ranges from 0 to 100. In the bottom half, chapter rings and a magnifying window seem designed to show the date, with 12 and 31 sequential numbers, respectively. It’s not clear why it shows mid-May. A hemisphere in the middle of the face looks like it’s almost a globe, which might be a nice way to display and change time zone, but that may be wishful thinking on my part.

Santa also has a tracking device for finding his sack of toys. (Apparently this has happened enough times to warrant such a thing.) It is an intricate filigree over cool green and blue glass. A light within blinks faster the closer the sphere is to the sack.

Since he must finish delivering toys before Christmas morning, the dashboard has a countdown clock with Nixie tube numbers showing hours, minutes, and milliseconds. They ordinarily glow cyan, but when time runs out, they turn red and blink.

This Santa also manages his list in a large book with lovely handwritten calligraphy. The kids whose gifts remain undelivered glow golden to draw his attention.

The Christmas Chronicles on imdb.com

So…who did it better?

The hard problem here is that there are a lot of apples-to-oranges comparisons to do. Even though the mythos seems pretty locked down, each movie takes liberties with one or two aspects. As a result, not all these Santas are created equal. Calvin’s elves know he is completely new to his job and will need support. Christmas Chronicles Santa has perfect memory, magical abilities, and handles nearly all the delivery duties himself, unless he’s enacting a clever scheme to impart Christmas wisdom. Arthur Christmas has intergenerational technology and Santas who may not be magic at all, but fully know their duty from their youths and rely on a huge army of shock-troop elves to make things happen. So it’s hard to name just one. But absent a point-by-point detailed analysis, there are two that really stand out to me.

The weathered surface of this camouflage button is delightful (Arthur Christmas).

Coverage of goals

Arthur Christmas has, by far, the most interfaces of any of the candidates, and the most coverage of the Santa-family’s goals. Managing noisy pets? Check. Dealing with wakers? Check. Navigating the globe? Check. As far as thinking through speculative technology that assists its Santa, this film has the most.

Keeping the holiday spirit

I’ll confess, though, that extradiegetically, one of the purposes of annual holidays is to mark the passage of time. By trying to adhere to traditions as much as we can, time and our memory are marked by those things that we cannot control (like, say, a pandemic keeping everyone at home and hanging with friends and family virtually). So for my money, the thoroughly modern interfaces that flood Arthur Christmas don’t work that well. They’re so modern they’re not…Christmassy. Grandsanta’s sleigh Eve points to an older tradition, but it’s also clearly framed as outdated in the context of the story.

Gorgeous steampunkish binocular HUD from The Christmas Chronicles 2, which was not otherwise included in this post.

Compare this to The Christmas Chronicles, with its gorgeous steampunk-y interfaces that combine a sense of magic and mechanics. These are things that a centuries-old Santa would have built and used. They feel rooted in tradition while still helping Santa accomplish as many of his goals as he needs (in the context of his Christmas adventure for the stowaway kids). These interfaces evoke a sense of wonder, add significantly to the worldbuilding, and are what I’d rather have as a model for magical interfaces in the real world.

Of course it’s a personal call, given the differences, but The Christmas Chronicles wins in my book.

Ho, Ho, HEH.

For those that celebrate Santa-Christmas, I hope it’s a happy one, given the strange, strange state of the world. May you be on the nice list.


For more Who Did it Better, see the tag.

Deckard’s Photo Inspector

Back to Blade Runner. I mean, the pandemic is still pandemicking, but maybe this will be a nice distraction while you shelter in place. Because you’re smart, sheltering in place as much as you can, and not injecting disinfectants. And, like so many other technologies in this film, this will take a while to deconstruct, critique, and reimagine.

Description

Doing his detective work, Deckard retrieves a set of snapshots from Leon’s hotel room, and he brings them home. Something in the one pictured above catches his eye, and he wants to investigate it in greater detail. He takes the photograph and inserts it in a black device he keeps in his living room.

Note: I’ll try to describe this interaction in text, but it is much easier to conceptualize after viewing it. Owing to copyright restrictions, I cannot upload this length of video with the original audio, so I have added pre-rendered closed captions to it, below. All dialogue in the clip is Deckard.

Deckard does digital forensics, looking for a lead.

He inserts the snapshot into a horizontal slit and turns the machine on. A thin, horizontal orange line glows on the left side of the front panel. A series of seemingly random-length orange lines begin to chase one another in a single-row space that stretches across the remainder of the panel and continue to do so throughout Deckard’s use of it. (Imagine a news ticker, running backwards, where the “headlines” are glowing amber lines.) This seems like an absolutely pointless distraction for Deckard, putting high-contrast motion in his peripheral vision, which fights for attention with the actual, interesting content down below.

If this is distracting you from reading, YOU SEE MY POINT.

After a second, the screen reveals a blue grid, behind which the scan of the snapshot appears. He stares at the image in the grid for a moment, and speaks a set of instructions, “Enhance 224 to 176.”

In response, three data points appear overlaying the image at the bottom of the screen. Each has a two-letter label and a four-digit number, e.g. “ZM 0000 NS 0000 EW 0000.” The NS and EW—presumably North-South and East-West coordinates, respectively—immediately update to read, “ZM 0000 NS 0197 EW 0334.” After updating the numbers, the screen displays a crosshairs, which target a single rectangle in the grid.

A new rectangle then zooms in from the edges to match the targeted rectangle, as the ZM number—presumably zoom, or magnification—increases. When the animated rectangle reaches the targeted rectangle, its outline blinks yellow a few times. Then the contents of the rectangle are enlarged to fill the screen, in a series of steps which are punctuated with sounds similar to a mechanical camera aperture. The enlargement is perfectly resolved. The overlay disappears until the next set of spoken commands. The system response time between Deckard’s issuing the command and the device’s showing the final enlarged image is about 11 seconds.

Deckard studies the new image for a while before issuing another command. This time he says, “Enhance.” The image enlarges in similar clacking steps until he tells it, “Stop.”

Other instructions he is heard to give include “move in, pull out, track right, center in, pull back, center, and pan right.” Some include discrete instructions, such as, “Track 45 right” while others are relative commands that the system obeys until told to stop, such as “Go right.”

Using such commands he isolates part of the image that reveals an important clue, and he speaks the instruction, “Give me a hard copy right there.” The machine prints the image, which Deckard uses to help find the replicant pictured.

This image helps lead him to Zhora.

I’d like to point out one bit of sophistication before the critique. Deckard can issue a command with or without a parameter, and the inspector knows what to do. For example, “Track 45 right” and “Track right.” Without the parameter, it will just do the thing repeatedly until told to stop. That helps Deckard issue the same basic command both when he knows exactly where he wants to look and when he doesn’t know exactly what he’s looking for. That’s a nice feature of the language design.
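To make the pattern concrete, here’s a minimal sketch of that command language. The grammar and field names are my own invention (nothing in the film specifies how the inspector parses speech); the point is just how one verb can be discrete with a parameter and continuous without one.

```python
def parse_command(utterance: str):
    """Parse inspector commands like 'Track 45 right', 'Track right', 'Stop'."""
    tokens = utterance.lower().split()
    if not tokens:
        return None
    verb = tokens[0]
    if verb == "stop":
        return {"verb": "stop"}
    amount = None
    direction = None
    for tok in tokens[1:]:
        if tok.isdigit():
            amount = int(tok)
        elif tok in ("left", "right", "in", "out", "up", "down", "back"):
            direction = tok
    # With a numeric parameter the command is discrete; without one it
    # runs continuously until Deckard says "stop".
    return {"verb": verb, "direction": direction,
            "amount": amount, "continuous": amount is None}

parse_command("Track 45 right")
# → {'verb': 'track', 'direction': 'right', 'amount': 45, 'continuous': False}
parse_command("Track right")
# → {'verb': 'track', 'direction': 'right', 'amount': None, 'continuous': True}
```

The nice property, as noted above, is that omitting the parameter doesn’t produce an error; it just flips the command into its open-ended mode.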

But still, asking him to provide step-by-step instructions in this clunky way feels like some high-tech Big Trak. (I tried to find a reference that was as old as the film.) And that’s not all…

Some critiques, as it is

  • Can I go back and mention that amber distracto-light? Because it’s distracting. And pointless. I’m not mad. I’m just disappointed.
  • It sure would be nice if any of the numbers on screen made sense, and had any bearing on the numbers Deckard speaks, at any time during the interaction. For instance, the initial zoom (I checked in Photoshop) is around 304%, which is neither the 224 nor the 176 that Deckard speaks.
  • It might be that each square has a number, and he simply has to name the two squares at the extents of the zoom he wants, letting the machine find the extents, but where is the labeling? Did he have to memorize an address for each pixel? How does that work at arbitrary levels of zoom?
  • And if he’s memorized it, why show the overlay at all?
  • Why the seizure-inducing flashing in the transition sequences? Sure, I get that lots of technologies have unfortunate effects when constrained by mechanics, but this is digital.
  • Why is the printed picture so unlike the still image where he asks for a hard copy?
  • Gaze at the reflection in Ford’s hazel, hazel eyes, and it’s clear he’s playing Missile Command, rather than paying attention to this interface at all. (OK, that’s the filmmaker’s issue, not a part of the interface, but still, come on.)
The photo inspector: My interface is up HERE, Rick.

How might it be improved for 1982?

So if 1982 Ridley Scott was telling me in post that we couldn’t reshoot Harrison Ford, and we had to make it just work with what we had, here’s what I’d do…

Squash the grid so the cells match the 4:3 ratio of the NTSC screen. Overlay the address of each cell, while highlighting column and row identifiers at the edges. Have the first cell’s outline illuminate as he speaks it, and have the outline expand to encompass the second named cell. Then zoom, removing the cell labels during the transition. When at anything other than full view, display a map across four cells that shows the zoom visually in the context of the whole.
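Under that redesign, “Enhance 203 to 608” would name two cells in the labeled grid, and the machine would compute the region enclosing both. Here’s a rough sketch of that logic; the grid dimensions and row-major numbering scheme are my assumptions, not anything established on screen.

```python
GRID_COLS, GRID_ROWS = 40, 30   # square cells matching the 4:3 frame
FRAME_W, FRAME_H = 640, 480     # NTSC-ish pixel dimensions

def cell_rect(cell: int):
    """Pixel rectangle (x, y, w, h) of a cell, numbered row-major from 0."""
    col, row = cell % GRID_COLS, cell // GRID_COLS
    w, h = FRAME_W // GRID_COLS, FRAME_H // GRID_ROWS
    return (col * w, row * h, w, h)

def zoom_extents(cell_a: int, cell_b: int):
    """Bounding box enclosing both named cells -- the region to enhance."""
    xa, ya, w, h = cell_rect(cell_a)
    xb, yb, _, _ = cell_rect(cell_b)
    x0, y0 = min(xa, xb), min(ya, yb)
    x1 = max(xa, xb) + w
    y1 = max(ya, yb) + h
    return (x0, y0, x1 - x0, y1 - y0)
```

With highlighted row and column identifiers at the grid edges, Deckard never has to memorize pixel addresses; he just reads the two corner cells off the screen and the machine does the arithmetic.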

Rendered in glorious 4:3 NTSC dimensions.

With this interface, the structure of the existing conversation makes more sense. When Deckard said, “Enhance 203 to 608” the thing would zoom in on the mirror, and the small map would confirm.

The numbers wouldn’t match up, but it’s pretty obvious from the final cut that Scott didn’t care about that (or, more charitably, ran out of time). Anyway I would be doing this under protest, because I would argue this interaction needs to be fixed in the script.

How might it be improved for 2020?

What’s really nifty about this technology is that it’s not just a photograph. Look close in the scene, and Deckard isn’t just doing CSI Enhance! commands (or, to be less mocking, AI upscaling). He’s using the photo inspector to look around corners and at objects that are reconstructed from the smallest reflections. So we can think of the interaction like he’s controlling a drone through a 3D still life, looking for a lead to help him further the case.

With that in mind, let’s talk about the display.

Display

To redesign it, we have to decide at a foundational level how we think this works, because it will color what the display looks like. Is this all data that’s captured from some crazy 3D camera and available in the image? Or is it being inferred from details in the two-dimensional image? Let’s call the first the 3D capture, and the second the 3D inference.

If we decide this is a 3D capture, then all the data that he observes through the machine has the same degree of confidence. If, however, we decide this is a 3D inferrer, Deckard needs to treat the inferred data with more skepticism than the data the camera directly captured. The 3D inferrer is the harder problem, and raises some issues that we must deal with in modern AI, so let’s just say that’s the way this speculative technology works.

The first thing the display should do is make it clear what is observed and what is inferred. How you do this is partly a matter of visual design and style, but partly a matter of diegetic logic. The first pass would be to render everything in the camera frustum photo-realistically, and then render everything outside of that in a way that signals its confidence level. The comp below illustrates one way this might be done.

Modification of a pair of images found on Evermotion
  • In the comp, Deckard has turned the “drone” from the “actual photo,” seen off to the right, toward the inferred space on the left. The monochrome color treatment provides that first high-confidence signal.
  • In the scene, the primary inference would come from reading the reflections in the disco ball overhead lamp, maybe augmented with plans for the apartment that could be found online, or maybe purchase receipts for appliances, etc. Everything it can reconstruct from the reflection and high-confidence sources has solid black lines, a second-level signal.
  • The smaller knickknacks that are out of the reflection of the disco ball, and implied from other, less reflective surfaces, are rendered without the black lines and blurred. This provides a signal that the algorithm has a very low confidence in its inference.

This is just one (not very visually interesting) way to handle it, but should illustrate that, to be believable, the photo inspector shouldn’t have a single rendering style outside the frustum. It would need something akin to these levels to help Deckard instantly recognize how much he should trust what he’s seeing.
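Those levels amount to a simple lookup from inference confidence to visual treatment. A sketch of that mapping follows; the thresholds and style names are entirely my own invention, standing in for whatever the renderer’s actual parameters would be.

```python
def render_style(source: str, confidence: float) -> dict:
    """Pick a visual treatment for a reconstructed surface.

    source: "captured" for geometry inside the camera frustum,
            "inferred" for everything reconstructed from reflections etc.
    confidence: the algorithm's confidence in an inference, 0.0 to 1.0.
    """
    if source == "captured":
        # Inside the frustum: the actual photo, rendered photo-realistically.
        return {"color": "full", "outline": False, "blur": False}
    if confidence >= 0.7:
        # High-confidence inference (e.g. read from the disco-ball
        # reflection): monochrome with solid black outlines.
        return {"color": "mono", "outline": True, "blur": False}
    # Low-confidence inference (knickknacks implied from dull surfaces):
    # monochrome, no outlines, blurred.
    return {"color": "mono", "outline": False, "blur": True}
```

Three buckets is the minimum that makes the trust levels legible at a glance; a production version might interpolate continuously instead.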

Flat screen or volumetric projection?

Modern CGI loves big volumetric projections. (e.g. it was the central novum of last year’s Fritz winner, Spider-Man: Far From Home.) And it would be a wonderful juxtaposition to see Deckard in a holodeck-like recreation of Leon’s apartment, with all the visual treatments described above.

But…

Also seriously who wants a lamp embedded in a headrest?

…that would kind of spoil the mood of the scene. This isn’t just about Deckard’s finding a clue; we also see a little about who he is and what his life is like. We see the smoky apartment. We see the drab couch. We see the stack of old detective machines. We see the neon lights and annoying advertising lights swinging back and forth across his windows. Immersing him in a big volumetric projection would lose all this atmospheric stuff, so I’d recommend keeping it either a small contained VP, like we saw in Minority Report, or just a small flat screen.


OK, now that we have an idea about how the display should (and shouldn’t) look, let’s move on to talk about the inputs.

Inputs

To talk about inputs, then, we have to return to a favorite topic of mine, and that is the level of agency we want for the interaction. In short, we need to decide how much work the machine is doing. Is the machine just a manual tool that Deckard has to manipulate to get it to do anything? Or does it actively assist him? Or, lastly, can it even do the job while his attention is on something else—that is, can it act as an agent on his behalf? Sophisticated tools can be a blend of these modes, but for now, let’s look at them individually.

Manual Tool

This is how the photo inspector works in Blade Runner. It can do things, but Deckard has to tell it exactly what to do. But we can still improve it in this mode.

We could give him well-mapped physical controls, like a remote control for this conceptual drone. Flight controls wind up being a recurring topic on this blog (and even came up already in the Blade Runner reviews with the Spinners) so I could go on about how best to do that, but I think that a handheld controller would ruin the feel of this scene, like Deckard was sitting down to play a video game rather than do off-hours detective work.

Special edition made possible by our sponsor, Tom Nook.
(I hope we can pay this loan back.)

Similarly, we could talk about a gestural interface, using some of the synecdochic techniques we’ve seen before in Ghost in the Shell. But again, this would spoil the feel of the scene, having him look more like John Anderton in front of a tiny-TV version of Minority Report’s famous crime scrubber.

One of the things that gives this scene its emotional texture is that Deckard is drinking a glass of whiskey while doing his detective homework. It shows how low he feels. Throwing one back is clearly part of his evening routine, so much a habit that he does it despite being preoccupied about Leon’s case. How can we keep him on the couch, with his hand on the lead crystal whiskey glass, and still investigating the photo? Can he use it to investigate the photo?

Here I recommend a bit of ad-hoc tangible user interface. I first backworlded this for The Star Wars Holiday Special, but I think it could work here, too. Imagine that the photo inspector has a high-resolution camera on it, and the interface allows Deckard to declare any object that he wants as a control object. After the declaration, the camera tracks the object against a surface, using the changes to that object to control the virtual camera.

In the scene, Deckard can declare the whiskey glass as his control object, and the arm of his couch as the control surface. Of course the virtual space he’s in is bigger than the couch arm, but it could work like a mouse and a mousepad. He can just pick it up and set it back down again to extend motion.

This scheme takes into account all movement except vertical lift and drop. This could be a gesture or a spoken command (see below).
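The core of this scheme is just relative tracking: the inspector watches the declared control object against the declared control surface and applies its pose deltas to the virtual camera, ignoring frames where the object is lifted, mouse-style. A minimal sketch, with all names, the gain constant, and the pose format being my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class CameraState:
    x: float = 0.0      # position in the reconstructed apartment
    y: float = 0.0
    yaw: float = 0.0    # degrees; twisting the glass turns the "drone"
    pitch: float = 0.0  # degrees; tipping the glass tilts the camera

GAIN = 20.0  # virtual units per unit of glass movement (mouse sensitivity)

def update_camera(cam, prev_pose, pose):
    """Apply the change in the control object's tracked pose to the camera.

    A pose is (x, y, twist_deg, tilt_deg, on_surface). When the object
    leaves the surface, tracking pauses -- like lifting a mouse to
    reposition it on the pad.
    """
    if not (prev_pose[4] and pose[4]):
        return cam
    cam.x += (pose[0] - prev_pose[0]) * GAIN
    cam.y += (pose[1] - prev_pose[1]) * GAIN
    cam.yaw += pose[2] - prev_pose[2]
    cam.pitch += pose[3] - prev_pose[3]
    return cam
```

So sliding the glass a tenth of a unit along the couch arm moves the camera two virtual units, and picking the glass up to drink leaves the view exactly where it was.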

Going with this interaction model means Deckard can use the whiskey glass, allowing the scene to keep its texture and feel. He can still drink and get his detective on.

Tipping the virtual drone to the right.

Assistant Tool

Indirect manipulation is helpful for when Deckard doesn’t know what he’s looking for. He can look around, and get close to things to inspect them. But when he knows what he’s looking for, he shouldn’t have to go find it. He should be able to just ask for it, and have the photo inspector show it to him. This requires that we presume some AI. And even though Blade Runner clearly includes General AI, let’s presume that that kind of AI has to be housed in a human-like replicant, and can’t be squeezed into this device. Instead, let’s just extend the capabilities of Narrow AI.

Some of this will be navigational and specific, “Zoom to that mirror in the background,” for instance, or, “Reset the orientation.” Some will be more abstract and content-specific, e.g. “Head to the kitchen” or “Get close to that red thing.” If it had gaze detection, he could even indicate a location by looking at it. “Get close to that red thing there,” for example, while looking at the red thing. Given the 3D inferrer nature of this speculative device, he might also want to trace the provenance of an inference, as in, “How do we know this chair is here?” This implies natural language generation as well as understanding.

There’s nothing stopping him from using the same general commands heard in the movie, but I doubt anyone would want to use those when they have commands like this and the object-on-hand controller available.

Ideally Deckard would have some general search capabilities as well, to ask questions and test ideas. “Where were these things purchased?” or subsequently, “Is there video footage from the stores where he purchased them?” or even, “What does that look like to you?” (The correct answer would be, “Well that looks like the mirror from the Arnolfini portrait, Ridley…I mean…Rick*”) It can do pattern recognition and provide as much extra information as it has access to, just like Google Lens or IBM Watson image recognition does.

*Left: The convex mirror in Leon’s 21st century apartment.
Right: The convex mirror in Arnolfini’s 15th century apartment

Finally, he should be able to ask after simple facts to see if the inspector knows or can find it. For example, “How many people are in the scene?”

All of this still requires that Deckard initiate the action, and we can augment it further with a little agentive thinking.

Agentive Tool

To think in terms of agents is to ask, “What can the system do for the user, but not requiring the user’s attention?” (I wrote a book about it if you want to know more.) Here, the AI should be working alongside Deckard. Not just building the inferences and cataloguing observations, but doing anomaly detection on the whole scene as it goes. Some of it is going to be pointless, like “Be aware the butter knife is from IKEA, while the rest of the flatware is Christofle Lagerfeld. Something’s not right, here.” But some of it Deckard will find useful. It would probably be up to Deckard to review summaries and decide which were worth further investigation.

It should also be able to help him with his goals. For example, the police had Zhora’s picture on file. (And her portrait even rotates in the dossier we see at the beginning, so it knows what she looks like in 3D for very sophisticated pattern matching.) The moment the agent—while it was reverse ray tracing the scene and reconstructing the inferred space—detects any faces, it should run the face through a most wanted list, and specifically Deckard’s case files. It shouldn’t wait for him to find it. That again poses some challenges to the script. How do we keep Deckard the hero when the tech can and should have found Zhora seconds after being shown the image? It’s a new challenge for writers, but it’s becoming increasingly important for believability.
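The agentive loop here is simple to state: every face the reconstruction surfaces gets run against the case files, and anything over a confidence threshold gets volunteered without being asked. A sketch of that behavior, with the API, threshold value, and match function all hypothetical (the threshold echoes the 66% figure the rewritten scene below plays with):

```python
THRESHOLD = 0.66  # matches at or above this are volunteered to Deckard

def review_faces(detected_faces, case_files, match_fn):
    """Return (face, record, score) for every pairing above THRESHOLD.

    match_fn(face, record) -> confidence in [0, 1]; it stands in for
    whatever pattern matcher the dossier's rotating 3D portraits feed.
    """
    alerts = []
    for face in detected_faces:
        for record in case_files:
            score = match_fn(face, record)
            if score >= THRESHOLD:
                alerts.append((face, record, score))
    return alerts
```

Note the failure mode this bakes in: a 63% match against Zhora’s file produces no alert at all, which is exactly the kind of near-miss a writer can exploit to keep the human detective in the loop.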

I’ve never figured out why she has a snake tattoo here (and it seems really important to the plot) when, by the time Deckard finally meets her, it has disappeared.

Scene

  • Interior. Deckard’s apartment. Night.
  • Deckard grabs a bottle of whiskey, a glass, and the photo from Leon’s apartment. He sits on his couch and places the photo on the coffee table.
  • Deckard
  • Photo inspector.
  • The machine on top of a cluttered end table comes to life.
  • Deckard
  • Let’s look at this.
  • He points to the photo. A thin line of light sweeps across the image. The scanned image appears on the screen, pulled in a bit from the edges. A label reads, “Extending scene,” and we see wireframe representations of the apartment outside the frame begin to take shape. A small list of anomalies begins to appear to the left. Deckard pours a few fingers of whiskey into the glass. He takes a drink before putting the glass on the arm of his couch. Small projected graphics appear on the arm facing the inspector.
  • Deckard
  • OK. Anyone hiding? Moving?
  • Photo inspector
  • No and no.
  • Deckard
  • Zoom to that arm and pin to the face.
  • He turns the glass on the couch arm counterclockwise, and the “drone” revolves around to show Leon’s face, with the shadowy parts rendered in blue.
  • Deckard
  • What’s the confidence?
  • Photo inspector
  • 95.
  • On the side of the screen the inspector overlays Leon’s police profile.
  • Deckard
  • Unpin.
  • Deckard lifts his glass to take a drink. He moves from the couch to the floor to stare more intently and places his drink on the coffee table.
  • Deckard
  • New surface.
  • He turns the glass clockwise. The camera turns and he sees into a bedroom.
  • Deckard
  • How do we have this much inference?
  • Photo inspector
  • The convex mirror in the hall…
  • Deckard
  • Wait. Is that a foot? You said no one was hiding.
  • Photo inspector
  • The individual is not hiding. They appear to be sleeping.
  • Deckard rolls his eyes.
  • Deckard
  • Zoom to the face and pin.
  • The view zooms to the face, but the camera is level with her chin, making it hard to make out the face. Deckard tips the glass forward and the camera rises up to focus on a blue, wireframed face.
  • Deckard
  • That look like Zhora to you?
  • The inspector overlays her police file.
  • Photo inspector
  • 63% of it does.
  • Deckard
  • Why didn’t you say so?
  • Photo inspector
  • My threshold is set to 66%.
  • Deckard
  • Give me a hard copy right there.
  • He raises his glass and finishes his drink.

This scene keeps the texture and tone of the original, and leans on the limitations of Narrow AI to let Deckard be the hero. And doesn’t have him programming a virtual Big Trak.