Comparing Sci-Fi HUDs in 2024 Movies

As in previous years, in preparation for awarding the Fritzes, I watched as many sci-fi movies as I could find across 2024. One thing that stuck out to me was the number of heads-up displays (HUDs) across these movies. There were a lot of them. So in advance of the awards, let's look at and compare them. (Note that the movies included here are not necessarily nominees for a Fritz award.)

I usually introduce the plot of every movie before I talk about it, since it provides some context for understanding the interface. That will happen in the final Fritzes post, though, so I'm going to skip it here. Still, it's only fair to say there will be some spoilers as I describe these.

If you read Chapter 8 of Make It So: Interaction Lessons from Science Fiction, you’ll recall that I’d identified four categories of augmentation.

  1. Sensor displays
  2. Location awareness
  3. Context awareness (objects, people)
  4. Goal awareness

These four categories are presented in increasing level of sophistication. Let’s use these to investigate and compare five primary examples from 2024, in order of their functional sophistication.

Dune 2

Lady Margot Fenring looks through augmented opera glasses at Feyd-Rautha in the arena. Dune 2 (2024).

True to the minimalism that permeates much of the film's interfaces, the AR of this device has a rounded-rectangle frame from which hangs a measure of angular degrees to the right. There are a few ticks across the center of this screen (not visible in this particular screenshot). There is a row of blue characters across the bottom center. I can't read Harkonnen, and though the characters change, I can't quite decipher what most of them mean. But it does seem the leftmost character indicates the azimuth of the glasses and the rightmost their angular altitude. Given the authoritarian nature of this House, it would make sense to have some augmentation naming the royal figures in view, but I think it's a sensor display, which leaves the user with a lot of work to figure out how to use that information.

You might think this indicates some failing of the writers' or FUI designers' imagination. However, an important part of the history of Dune is a catastrophic conflict known as the Butlerian Jihad. This conflict involved devastating, large-scale wars against intelligent machines. As a result, machines with any degree of intelligence are considered sacrilege. So it's not an oversight, but it does mean we can't look to this as a model for how we might handle more sophisticated augmentations.

Alien: Romulus

Tyler teaches Rain how to operate a weapon aboard the Renaissance. Alien: Romulus (2024)

A little past halfway through the movie, the protagonists finally get their hands on some weapons. In a fan-service scene similar to one between Ripley and Hicks from Aliens (1986), Tyler shows Rain how to hold an FAA44 pulse rifle. He also teaches her how to operate it. The “AA” stands for “aiming assist”, a kind of object awareness. (Tyler asserts this is what the colonial marines used, which kind of retroactively saps their badassery, but let’s move on.) Tyler taps a small display on the user-facing rear sight, and a white-on-red display illuminates. It shows a low-res video of motion happening before it. A square reticle with crosshairs shows where the weapon will hit. A label at the top indicates distance. A radar sweep at the bottom indicates movement in 360° plan view, a sensor display.

When Rain pulls the trigger halfway, the weapon quickly swings to aim at the target. There is no indication of how it would differentiate between multiple targets. It's also unclear how Rain told it that the object in the crosshairs earlier is what she wants it to track now, or how she might identify a friendly to avoid. Red is a smart choice for low-light situations, since it doesn't interfere with night vision. It's also elegantly free of flourishes and fuigetry.

I'm not sure the halfway-trigger is the right activation mechanism. Yes, it allows the shooter to maintain a proper hold and remain ready with the weapon, and lets them gain its assistance without having to look at the display, but it also requires a calm, stable circumstance that allows for fine motor control. Does this mean that in very urgent, chaotic situations, users are just left to their own devices? Seems questionable.

Alien: Romulus is beholden to the handful of movies in the franchise that preceded it. Part of the challenge for its designers is to stay recognizably a part of the body of work that was established in 1979 while offering us something new. This weapon HUD stays visually simple, like the interfaces from the original two movies. It narratively explains how a civilian colonist with no weapons training can successfully defend herself against a full-frontal assault by a dozen of this universe’s most aggressive and effective killers. However, it leaves enough unexplained that it doesn’t really serve as a useful model.

The Wild Robot

Roz examines an abandoned egg she finds. The Wild Robot (2024)

HUD displays of artificially intelligent robots are always difficult to analyze. It's hard to determine what's an augmentation (here loosely defined as an overlay on some datastream created for a user's benefit but explicitly not by that user), as opposed to a visualization of the AI's own thoughts as they are happening. I'd much rather analyze these as augmentation provided for Roz, but it just doesn't hold up to scrutiny that way. What we see in this film are visualizations of Roz' thoughts.

In the HUD, there is an unchanging frame around the outside. Static cyan circuit lines extend to the edge. (In the main image above, the screen-green is an anomaly.) A sphere rotates in the upper left, unconnected to anything. A hexagonal grid on the left has hexes that illuminate and blink, and the grid itself moves, all unrelated to anything. These are fuigetry, conveying no information and providing no utility.

Inside that frame, we see Roz’ visualized thinking across many scenes.

  • Locus of attention—Many times we see a reticle indicating where she’s focused, oftentimes with additional callout details written in robot-script.
  • “Customer” recognition—(pictured) Since it happens early in the film, you might think this is a goofy error. The potential customer she has recognized is a crab. But later in the film, Roz learns the language common to the animals of the island. All the animals display a human-like intelligence, so it's completely within the realm of possibility that this little blue crustacean could be her customer. Though why that customer needed a volumetric wireframe augmentation is very unclear.
  • X-ray vision—While looking around for a customer, she happens upon an egg. The edge detection indicates her attention. Then she performs scans that reveal the growing chick inside and a vital signs display.
  • Damage report—After being attacked by a bear, Roz does an internal damage check and she notes the damage on screen.
  • Escape alert—(pictured) When a big wave approaches the shore on which she is standing, Roz estimates the height of the wave to be five times her height. Her panic expresses itself in a red tint around the outside edge.
  • Project management—Roz adopts Brightbill and undertakes the mission to mother him—specifically to teach him to eat, swim, and fly. As she successfully teaches him each of these things, she checks it off by updating one of three graphics that represent the topics.
  • Language acquisition—(pictured) Of all the AR in this movie, this scene frustrates me the most. There is a sequence in which Roz goes torpid to focus on learning the animal language. Her eyes are open the entire time she captures samples and analyzes them. The AR shows word bubbles associated with individual animal utterances. At first those bubbles are filled with cyan-colored robo-ese script. Over the course of processing a year's worth of samples, individual characters are slowly replaced in the utterances with bold, green, Latin characters. This display kind of conveys the story beat of “she's figuring out the language,” but befits cryptography much more than acquisition of a new language.

If these were augmented reality, I’d have a lot of questions about why it wasn’t helping her more than it does. It might seem odd to think an AI might have another AI helping it, but humans have loads of systems that operate without explicit conscious thought, like preattentive processing, all the functions of our autonomic nervous system, sensory filtering, and recall, just to name a few. So I can imagine it would be a fine model for AI-supporting-AI.

Since it’s not augmented reality, it doesn’t really act as a model for real world designs except perhaps for its visual styling.

Borderlands

Claptrap is a little one-wheel robot that accompanies Lilith through her adventures on and around Pandora. We see things through his POV several times.

Claptrap sizes up Lilith from afar. Borderlands (2024).

When Claptrap first sees Lilith, we see it through his HUD. Like Roz' POV display in The Wild Robot, the outside edge of this view has a fixed set of lines and greebles that don't change, not even for a sensor display. I wish those lines had some relationship to his viewport, but that's just a round lens and the lines are vaguely like the edges of a gear.

Scrolling up from the bottom left is an impressive set of textual data. It shows that a DNA match has been made (remotely‽ What kind of resolution is Claptrap’s CCD?) and some data about Lilith from what I presume is a criminal justice data feed: Name and brief physical description. It’s person awareness.

Below that are readouts for programmed directive and possible directive tasks. They're funny if you know the character. Tasks include “Supply a never-ending stream of hilarious jokes and one-liners to lighten the mood in tense situations” and “Distract enemies during combat. Prepare the Claptrap dance of confusion!” I also really like the last one, “Take the bullets while others focus on being heroic.” It both foreshadows a later scene and touches on the problem raised with Dr. Strange's Cloak of Levitation: How do our assistants let us be heroes?

At the bottom is the label “HYPERION 09 U1.2,” which I think might be location awareness? The suffix changes once they get near the vault. Hyperion is a faction in the game. I'm not certain what it means in this context.

When driving in a chase sequence, his HUD gives him a warning about a column he should avoid. It’s not a great signal. It draws his attention but then essentially says “Good luck with that.” He has to figure out what object it refers to. (The motion tracking, admittedly, is a big clue.) But the label is not under the icon. It’s at the bottom left. If this were for a human, it would add a saccade to what needs to be a near-instantaneous feedback loop. Shouldn’t it be an outline or color overlay to make it wildly clear what and where the obstacle is? And maybe some augmentation on how to avoid it, like an arrow pointing right? As we see in a later scene (below) the HUD does have object detection and object highlighting. There it’s used to find a plot-critical clue. It’s just oddly not used here, you know, when the passengers’ lives are at risk.

When the group goes underground in search of the key to the Vault, Claptrap finds himself face to face with a gang of Psychos. The augmentation includes little animated red icons above the Psychos. Big Red Text summarizes “DANGER LEVEL: HIGH” across the middle, so you might think it’s demonstrating goal and context awareness. But Claptrap happens to be nigh-invulnerable, as we see moments later when he takes a thousand Psycho bullets without a scratch. In context, there’s no real danger. So,…holup. Who’s this interface for, then? Is it really aware of context?

When they visit Lilith’s childhood home, Claptrap finds a scrap of paper with a plot-critical drawing on it. The HUD shows a green outline around the paper. Text in the lower right tracks a “GARBAGE CATALOG” of objects in view with comments, “A PSYCHO WOULDN’T TOUCH THAT”, “LIFE-CHOICE QUESTIONING TRASH”, “VAULT HUNTER THROWBACK TRASH”. This interface gives a bit of comedy and leads to the Big Clue, but raises questions about consistency. It seems the HUDs in this film are narrativist.

In the movie, there are other HUDs like this one, for the Crimson Lance villains. They fly their hover-vehicles using them, but we don't get nearly enough time to tease the parts apart.

Atlas

The HUD in Atlas appears when the titular character Atlas is strapped into an ARC9 mech suit, which has its own AGI named Smith. Some of the augmentations are communications between Smith and Atlas, but most are augmentations of the view before her. The viewport from the pilot's seat is wide, and the augmentations appear there.

Atlas asks Smith to display the user manuals. Atlas (2024)

On the way to evil android Harlan’s base, we see the frame of the HUD has azimuth and altitude indicators near the edge. There are a few functionless flourishes, like arcs at the left and right edges. Later we see object and person recognition (in this case, an android terrorist, Casca Decius). When Smith confirms they are hostile, the square reticles go from cyan to red, demonstrating context awareness.

Over the course of the movie, Atlas resists Smith's call to “sync” with him. At Harlan's base, she is separated from the ARC9 unit for a while. But once she admits her past connection to Harlan, she and Smith become fully synced. She is reunited with the ARC9 unit and its features fully unlock.

As they tear through the base to stop the launch of some humanity-destroying warheads, they meet resistance from Harlan’s android army. This time the HUD wholly color codes the scene, making it extremely clear where the combatants are amongst the architecture.

Overlays indicate the highest priority combatants that, I suppose, might impede progress. A dashed arrow stretches through the scene indicating the route they must take to get to their goal. It focuses Atlas on their goal and obstacles, helping her decision-making around prioritization. It’s got rich goal awareness and works hard to proactively assist its user.

Despite being contrasting colors, the overlays are well controlled so they don't vibrate. You might think that the luminance of the combatants and architecture should be flipped, but the ARC9 is bulletproof, so there's no real danger from the gunfire. (Contrast Claptrap's fake danger warning, above.) Saving humanity is the higher priority. So the brightest color (yellow) means “do this,” the second brightest (cyan) means “through this,” and the darkest (red) means “there will be some nuisances en route.” The luminance is where it should be.
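That luminance ordering is easy to check with the relative-luminance formula from WCAG (the RGB values below are my assumptions; we're never shown exact values on screen):

```python
def rel_luminance(r, g, b):
    """WCAG relative luminance of an 8-bit sRGB color, from 0.0 to 1.0."""
    def lin(c):
        # Undo sRGB gamma to get a linear channel value
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

# Assumed palette for the three overlay roles in the scene
palette = {"yellow": (255, 255, 0), "cyan": (0, 255, 255), "red": (255, 0, 0)}
ranked = sorted(palette, key=lambda k: rel_luminance(*palette[k]), reverse=True)
# Brightest to darkest: yellow ("do this"), cyan ("through this"), red (nuisances)
```

Pure yellow lands around 0.93, cyan around 0.79, and red around 0.21, so the priority-by-brightness scheme holds up even before any artistic judgment.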

In the climactic fight with Harlan, the HUD even displays a predictive augmentation, illustrating where the fast-moving villain is likely to be when Atlas’ attacks land. This crucial augmentation helps her defeat the villain and save the day. I don’t think I’ve seen predictive augmentation outside of video games before.


If I were giving out an award for best HUD of 2024, Atlas would get it. It is the most fully imagined HUD assistance across the year, and consistently, engagingly styled. If you are involved with modern design or the design of sci-fi interfaces, I highly recommend you check it out.

Stay tuned for the full Fritz awards, coming later this year.

Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car's heads-up display shows him the car's speed with a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road becomes blocked by a flaming car rolled onto the road by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.


It commands attention effectively

Props to this attention-commanding signal. Neuroscience tells us that symmetrical expansion like this triggers something called a startle response.  (I first learned this in the awesome and highly recommended book Mind Hacks.) Any time we see symmetrical expansion in our field of vision, within milliseconds our sympathetic nervous system takes over, fixes our attention to that spot, and prompts us to avoid the thing that our brains believe is coming right at us. It all happens way before conscious processing, and that’s a good thing. It’s evolutionarily designed to keep us safe from falling rocks, flying fists, and pouncing tigers, and scenarios like that don’t have time for the relatively slow conscious processes.

Well visualized

The startle response varies in strength depending on several things.

  • The anxiety of the person (an anxious person will react to a slighter signal)
  • The driver’s habituation to the signal
  • The strength of the signal, in this case…
    • Contrast of the shape against its background
    • The speed of the expansion
  • The presence of a prepulse stimulus

We want the signal to be strong enough to grab the attention of a possibly-distracted driver, but not so strong that it causes them to overreact and risk losing control of the car. While anything this critical to safety needs to be thoroughly tested, the size of the IMPACT triangle seems to sit in the golden mean between these two.

And while the effect is strongest in the lab with a dark shape expanding over a light background, I suspect given habituation to the moving background of the roadscape and a comparatively static HUD, the sympathetic nervous system would have no problem processing this light-on-dark shape.

Well placed

We only see it in action once, so we don’t know if the placement is dynamic. But it appears to be positioned on the HUD such that it draws Luke’s attention directly to the point in his field of vision where the flaming car is. (It looks offset to us because the camera is positioned in the middle of the back seat rather than the driver’s seat.) This dynamic positioning is great since it saves the driver critical bits of time. If the signal was fixed, then the driver would have his attention pulled between the IMPACT triangle and the actual thing. Much better to have the display say, “LOOK HERE!”

Readers of the book will recall this nuance from the lesson from Chapter 8, Augment the Periphery of Vision: “Objects should be placed at the edge of the user’s view when they are not needed, and adjacent to the locus of attention when they are.”
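For what it's worth, the placement math itself is simple: a pinhole projection from the driver's eye onto the HUD plane. This sketch assumes driver-centric coordinates and a made-up HUD distance; nothing on screen tells us how the car actually computes it.

```python
def project_to_hud(obstacle_xyz, eye_xyz, hud_dist_m=0.6):
    """Project a 3D obstacle point (meters; x right, y up, z forward
    from the driver's eye) onto a HUD plane hud_dist_m ahead."""
    dx = obstacle_xyz[0] - eye_xyz[0]
    dy = obstacle_xyz[1] - eye_xyz[1]
    dz = obstacle_xyz[2] - eye_xyz[2]
    if dz <= 0:
        return None  # behind the driver: nothing to draw
    scale = hud_dist_m / dz
    return (dx * scale, dy * scale)  # offsets on the HUD plane, in meters

# A burning car 2 m to the right and 20 m ahead lands 6 cm right of center
spot = project_to_hud((2.0, 0.0, 20.0), (0.0, 0.0, 0.0))
```

Re-running this every frame as the car and obstacle move is all it takes to keep the triangle pinned over the hazard in the driver's view.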

Improvements

There are a few improvements that could be made.

  • It could synchronize the audio to the visual. The dinging is dissociated from the motion of the triangle, and even sounds a bit like a seat belt warning rather than something trying to warn you of a possible, life-threatening collision. Having the sound and visual in sync would strengthen the signal. It could even increase volume with the probability and severity of impact.
  • It could increase the strength of the audio signal by suppressing competing audio, by pausing any audio entertainment and even canceling ambient sounds.
  • It could predict farther into the future. The triangle only appears once the flaming car actually stops in the road a few meters ahead. But there is clearly a burning car rolling down to the road for seconds before that. We see it. The passengers see it. Better sensors and prediction models would have drawn Luke’s attention to the problem earlier and helped him react sooner.
  • It could also know when the driver is actually focused on the problem and then fade the signal to the periphery so that it does not cover up any vital visual information. It can then fade completely when the risk has passed.
  • An even smarter system might be able to adjust the strength of the signal based on real-time variables, like the anxiety of the driver, his or her current level of distraction, ambient noise and light, and of course the degree of risk (a tumbleweed vs. a small child on the road).
  • It could of course go full agentive and apply the brakes or swerve if the driver fails to take appropriate action in time.
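On the prediction point, perception research gives us a handy shortcut. Time-to-contact (tau) can be estimated from looming alone, as tau = θ / (dθ/dt), where θ is the object's angular size; that's exactly the expansion signal the startle response keys on. A sketch with invented numbers:

```python
import math

def angular_size(width_m, distance_m):
    """Angular size (radians) of an object of given width at a distance."""
    return 2 * math.atan(width_m / (2 * distance_m))

def time_to_contact(theta_prev, theta_now, dt):
    """Tau estimate from looming alone: theta / (d theta / dt).
    Returns infinity if the object isn't expanding (no collision course)."""
    d_theta = (theta_now - theta_prev) / dt
    if d_theta <= 0:
        return math.inf
    return theta_now / d_theta

# A 2 m-wide obstacle, 40 m out, closing at 20 m/s, sampled 0.1 s apart
tau = time_to_contact(angular_size(2, 40), angular_size(2, 38), 0.1)
```

With no range sensor at all, tau comes out near the true 1.9 seconds, so even a camera-only HUD could have warned Luke well before the car settled in the road.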

Despite these improvements, I believe Luke's HUD is a well-designed signal that gets underplayed in the drama and disorientation of the scene.


Talking Technology

We’ve seen four interfaces with voice output through speakers so far.

  1. The message centre in the New Darwin hotel room, which repeated the onscreen text
  2. The MemDoubler, which provided most information to Johnny through voice alone
  3. The bathroom tap in the Beijing hotel which told Johnny the temperature of the water
  4. The Newark airport security system


Later, in the brain hacking scene, we’ll hear two more sentences spoken.

Completionists: There’s also extensive use of voice output during a cyberspace search sequence, but there Johnny is wearing a headset so he is the only one who can hear it. That is sufficiently different to be left out of this discussion.

Voice is public

Sonic output in general and voice in particular have the advantage of being omnidirectional, so the user does not need to pay visual attention to the device, and, depending on volume and ambient noise, can be understood at much greater distances than a screen can be read. These same qualities are not so desirable if the user would prefer to keep the message or information private. We can’t tell whether these systems can detect the presence or absence of people, but the hotel message centre only spoke when Johnny was alone. Later in the film we will see two medical systems that don’t talk at all. This is most likely deliberate because few patients would appreciate their symptoms being broadcast to all and sundry.

Unless you’re the only one in the room


The bathroom tap is interesting because the temperature message was in English. This is a Beijing hotel, and the scientists who booked the suite are Vietnamese, so why? It’s not because we the audience need to know this particular detail. But we do have one clue: Johnny cursed rather loudly once he was inside the bathroom. I suggest that there is a hotel computer monitoring the languages being used by guests within the room and adjusting voice outputs to match. Current day word processors, web browsers, and search engines can recognise the language of typed text input and load the matching spellcheck dictionaries, so it’s a fair bet that by 2021 our computers will be able to do the same for speech.
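A crude version of that monitoring is just stopword counting over whatever speech gets transcribed. This toy sketch is entirely my own framing (tiny stopword lists, no real speech recognition); a production system would use a proper language-identification model:

```python
# Hypothetical stopword lists; a real system would have thousands per language
STOPWORDS = {
    "en": {"the", "is", "and", "of", "to", "it", "too", "damn"},
    "zh": {"的", "是", "了", "在", "我", "不"},
    "vi": {"của", "là", "và", "không", "tôi", "được"},
}

def guess_language(utterance):
    """Pick the language whose stopwords appear most often in the
    utterance; return 'unknown' when nothing matches at all."""
    tokens = utterance.lower().split()
    scores = {lang: sum(t in words for t in tokens)
              for lang, words in STOPWORDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

An overheard “damn it the water is too hot” would score highest for English, and the tap's voice output could switch accordingly.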

Iron Man HUD: 1st person view

In the prior post we catalogued the functions in the Iron HUD. Today we examine the 1st-person display.

When we first see the HUD, Tony is donning the Iron Man mask. Tony asks JARVIS, “You there?” To which JARVIS replies, “At your service, sir.” Tony tells him to “Engage the heads-up display,” and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony's eye. Most are small dashboard-like gauges that remain small and in Tony's peripheral vision while the information is not needed, and become larger and more central when needed. These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony's face overlaid against a dark background with floating graphics.

This post is about that first-person view. Specifically it’s about the visual design and the four awarenesses it displays.


In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:

  1. Sensor display
  2. Location awareness
  3. Context awareness
  4. Goal awareness

The Iron Man HUD illustrates all four and is a useful framework for describing and critiquing the 1st-person view.

Sensor display

When looking through the HUD “ourselves,” we can see that the HUD provides some airplane-like heads up instruments: Across the top is a horizontal compass with a thin white line for a needle. Below and to its left is a speed indicator, presented in terms of MACH. On the left side of the screen is a two-part altimeter with overlays indicating public, commercial, military, and aerospace layers of atmosphere, with a small blue tick mark indicating Tony’s current altitude.

There are just-in-time status indicators, like that cyan text box on the right with its randomized rule line. The content within is all “N -8 W -97 RNG EL,” so it's hard to tell what it means, but Tony's a maker working with a prototype. It's no surprise he takes some shortcuts in the interface since it's not a commercial device. But we should note that it would reduce his cognitive load to not have to remember what those cryptic letters meant.

You can just see the tops of these gauges at the bottom of this screen.

The exact sensor shown depends on the context and goal at hand.

Periphery and attention

A quick sidenote about peripheral vision and the detail of these gauges. Looking at them, it's notable that they are small and quite detailed. That makes sense when he's looking right at them, but when he's not, those little gauges have to compete with the big, swirling graphics he's got vying for his attention in the main display. And when it comes to your peripheral vision, localized detail and motion is not enough, owing to the limits of our foveal extent. (Props to @pixelio for the heads-up on this one.)

You see, your brain tricks you into thinking that you can see really well across your entire field of vision. In fact, you can only see really well across a few dozen degrees of that perceptual sphere, corresponding to the tiny area at the back of your eye called the fovea, where all the really good photoreceptors concentrate. As your eyes dart around the scene before you, your brain puts all the snippets of detailed information together so it feels like a cohesive, well-detailed whole, but it's ultimately just a hack.


So, having those teeny little gauges dancing around as a signal of troubles ahead won't really get Tony's attention. He could develop a habit of glancing at these things, but that's a weak strategy, since this data is so mission-critical. If he misses it and forgets to check the gauges, he's Iron Toast. Fortunately, JARVIS is once again our deus ex machina (in so many senses) because he is able to track where Tony is looking, and if he's not looking at the wiggling gauge, JARVIS can choose to escalate the signal: hide the air traffic data temporarily and show the problem in the main screen. Here, as in other mission-critical systems, attention management is crisis management. Now, for those of us working with pre-JARVIS tech, it's rare today for a system to be able to

  • Track perceptual details of its users
  • Monitor a model of the user’s attention
  • Make the right call amongst competing priorities to escalate the right one

But if you could, it would be the smart and humane way to handle it.
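The escalation decision itself is trivial once the hard parts (gaze tracking and an attention model) exist. A minimal sketch, with an invented gauge rectangle and grace period:

```python
def should_escalate(alert_age_s, gaze_xy, gauge_rect, grace_s=1.5):
    """Escalate a peripheral gauge alert to the main display if the user
    hasn't looked at the gauge within the grace period after it fired."""
    x, y = gaze_xy
    x0, y0, x1, y1 = gauge_rect
    looking_at_gauge = x0 <= x <= x1 and y0 <= y <= y1
    return (not looking_at_gauge) and alert_age_s > grace_s

# Alert 2 s old, gaze off in the main view: move it to center screen
escalate = should_escalate(2.0, (500, 300), (0, 0, 100, 100))
```

The grace period is doing the humane work here: it gives the habitual glance a chance to catch the problem before the system interrupts.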

Location Awareness

As Tony prepares for his first flight, JARVIS gives him a bit of x-ray vision, displaying a wireframe view of the Santa Monica coastline with live air traffic control icons of aircraft in the vicinity. The overhead map, of course, updates in real time.

If my Google Earth sleuthing is right, his view means he lives in the Malibu RV Park and this view is due East.

Context Awareness

Very quickly after we meet the HUD, it shows its object recognition capabilities. As Tony sweeps his glance across his garage, complex reticles jump to each car. A split-second afterwards, the car's outline is overlaid and some adjunct information about it is presented.


This holds true as he’s in flight as well. When Tony passes by the Santa Monica pier, not only is the Pacific Wheel identified (as the Santa Monica Ferriswheel), but the interface shows him a Wikipedia-esque article for the thing as well.



While JARVIS might be tapping into location databases for both the car and the ferris wheel recognition, it’s more than that. In one scene we see him getting information on the Iron Patriot as it rockets away, and its location wouldn’t be on any real-time record for him to access.

Optical zoom

Too much detail

While this level of object detail is deeply impressive, it’s about as useful as reading Wikipedia pages hard-printed to transparencies while driving. The text is too small, too multilayered, and just pointless considering that JARVIS can tell him whatever he needs to know without even asking. Maybe he could indulge in pop-up pamphlets if he was on a long-haul flight from, say, Europe back home to the Malibu RV Park (see above), but wouldn’t Tony rather watch a movie while on Autopilot instead?

Goal awareness

Of course JARVIS is aware of Tony’s goals, and provides graphics customized to the task, whether that task is navigating flight through complex obstacle courses…

3D wayfinding

…taking down a bad guy with the next hit…

Suggested target points

…saving innocent bystanders who are freefalling from a plane…

Biometric analysis, target acquisition

…or instantly analyzing problems in an observed (and complicated) piece of machinery…

3D schematics of observed machinery with damage highlights

…JARVIS is there with the graphics to help illustrate, if not solve, the problem at hand. Most impressive, perhaps, is JARVIS' ability to juggle all of these graphics and modes seamlessly to present just the right thing at the right time in real time. Tony never asks for a particular display; it just happens. If you needed no other proof of its strong artificial intelligence, this would be it.

Next up in the Iron HUD series: Compare and contrast the 2nd-person view.