The Fritzes honor the best interfaces in full-length motion pictures from the past year. Interfaces play a special role in our movie-going experience, and they are a craft all their own that does not otherwise receive focused recognition.
Today we’ll be covering Best Narrative. These movies’ interfaces blow us away with evocative visuals and the richness of their future vision. They engross us in the story world by being spectacular.
The 2026 Award goes to: Elio
Pixar consistently puts great thought into their animated interfaces, and Elio is no different. The little wearable personal devices that help the different intergalactic species all share a space are so simple, and provide both a bit of worldbuilding and moments of comedy. The incomprehensibility of the alien spaceship controls is a plot-critical, candy-colored glowing hoot (and reminiscent of the Pixar short Lifted). I loved the lemniscate-shaped AI encyclopedia that Elio consults when preparing for his negotiations. We should be able to talk to Wikipedia and not just its articles. (Though I wish the entries were more than just text and an image.) Also, this film has the only example I’ve seen where one character acts as an environmental suit for another character (not pictured, but you know the scene).
Also check out: Mickey 17
It’s a dark world where the hoarding class has made the working class so desperate that some people have to agree to be cloned for critical tasks that are likely death sentences. The interfaces in Mickey 17 help sell that very world, and even the ways that some folks use that same tech to eke out a little naughty joy amongst the drudgery. (With echoes of a similarly flirty interface from Starship Troopers.)
Also check out: Fantastic Four: First Steps
Marvel was once a mainstay for interfaces to study, but they’ve pointed their camera increasingly away from interfaces of late. So I was delighted to see Fantastic Four: First Steps bring to life interfaces from Jack Kirby’s Silver Age Fantastic Four. I don’t know if it was CGI, but I swear the giant, spherically curved quadrilateral screens are actual giant CRTs, right down to the blurriness and chromatic aberration. If that’s CGI, it’s great attention to detail from the reference material. All the spherical displays!
The “big” award in the Fritzes is Best Interface, but to amp up the anticipation, let’s look at some of the idiosyncratic awards from 2025 first.
In Johnny Mnemonic we see two different types of binoculars with augmented reality overlays and other enhancements: Yakuz-oculars and LoTek-oculars.
Yakuz-oculars
The Yakuza binoculars are the last to be seen in the film, but also the simpler of the two. They look just like a pair of current-day binoculars, but this is the view when the leader surveys the LoTek bridge.
I assume that the characters here are Japanese? Anyone?
In the centre is a fixed-size green reticule. At the bottom right is what looks like the magnification factor. At the top left and bottom left are numbers, using Western digits, that change as the binoculars move. Without knowing what the labels are I can only guess that they could be azimuth and elevation angles, or distance and height to the centre of the reticule. (The latter implies some sort of rangefinder.)
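If those numbers are range and height, the underlying math is trivial for the binoculars to do. A minimal sketch, assuming a laser rangefinder reporting slant range and an elevation sensor (both assumptions; the film never labels the readouts):

```python
import math

def target_height(slant_range_m: float, elevation_deg: float) -> float:
    """Height of the reticule's target above the viewer, computed from
    slant range and elevation angle. Hypothetical: the overlay's labels
    are never explained in the film."""
    return slant_range_m * math.sin(math.radians(elevation_deg))

# A target 350 m away and 12 degrees above the horizon is about 73 m up.
print(round(target_height(350, 12), 1))  # 72.8
```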
So far, this is a simple uncluttered display. But why is there a brightly glowing Pharmakom logo at the top right? It blocks part of the view, and probably doesn’t help anyone trying to keep their eyes adapted for night vision.
LoTek-oculars
The LoTeks, despite their name, have more impressive binoculars. They’re first used when Johnny gets out of his airport taxi.
There’s a third tube above the optics, a rectangular inlet, and an antenna.
In these binoculars, the augmented reality overlay is much more dynamic. Instead of a fixed circle, green lines converge in a bounding box around the image of Johnny. Text slides onto the display from left to right, the last line turning yellow.
Zoomrect
The animated transition of the bounding box resembles the “zoomrects” that classic Mac OS programmers of the 1990s used to show windows opening or closing. It’s a very effective technique for drawing attention to a particular area of an image.
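The technique is simple enough to sketch. Here’s a minimal, hypothetical version (assuming plain linear interpolation; whatever easing the film’s animators used is unknown) that converges a rectangle from the full frame down to a target’s bounding box:

```python
from dataclasses import dataclass

@dataclass
class Rect:
    x: float
    y: float
    w: float
    h: float

def lerp(a: float, b: float, t: float) -> float:
    return a + (b - a) * t

def zoomrect_frames(start: Rect, target: Rect, steps: int) -> list:
    """Rectangles to draw, one per animation frame, shrinking from
    start (the full frame) onto target (the bounding box)."""
    return [
        Rect(
            lerp(start.x, target.x, i / (steps - 1)),
            lerp(start.y, target.y, i / (steps - 1)),
            lerp(start.w, target.w, i / (steps - 1)),
            lerp(start.h, target.h, i / (steps - 1)),
        )
        for i in range(steps)
    ]

# Converge from a 640x480 frame onto Johnny over 12 frames.
for r in zoomrect_frames(Rect(0, 0, 640, 480), Rect(280, 120, 80, 200), 12):
    print(r)
```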
Animated text
Text appearing character by character is ubiquitous in film interfaces. In the 1960s and 1970s, mainframe and minicomputer terminals really did display text incrementally, as the characters arrived one by one over slow serial links. On any more recent computer it actually takes extra programming to achieve this effect, since the normal display of text is so fast that we would perceive it as instantaneous. But people like to see incremental text, or have been conditioned by film to expect it, so why not?
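That “extra programming” amounts to very little. A minimal sketch of the typewriter effect, assuming a fixed per-character delay (30 characters per second roughly matches a 300-baud serial terminal):

```python
import sys
import time

def typewriter(text: str, cps: float = 30.0) -> None:
    """Print text one character at a time, mimicking a slow serial link."""
    for ch in text:
        sys.stdout.write(ch)
        sys.stdout.flush()       # defeat buffering so each character shows
        time.sleep(1.0 / cps)
    sys.stdout.write("\n")

typewriter("SCANNING... CRANIAL IMPLANT DETECTED")
```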
Bioscanning
The binoculars detect Johnny’s implant. It might just be possible to detect this passively from infrared or electronic signals, but more likely the binoculars include a high-resolution microwave radar as well. If there had been more than one person in view, the bounding box would indicate which one the text refers to. And note that the last line of text is a different color. What that means is unclear here, but it becomes clear (and I’ll discuss it) later.
The second time we see the LoTek binoculars is when a lookout spots Street Preacher, a very bad guy and another who wants to remove Johnny’s head. Once again the binoculars have performed more than just a visual scan.
The binocular view and overlay are being relayed to another character, the LoTek leader J-Bone, who watches on a monitor. Here the film anticipates the WiFi webcam.
The overlay text now changes.
Narrow AI?
This is interesting, because the binoculars can not only detect implants and other cyborg modifications, but are apparently able to evaluate them and offer advice. It appears that the green text is used for factual (more or less) information about what has been detected, while yellow text is uncertain or speculative.
Does this imply a general artificial intelligence? Not necessarily. This warning could be based solely on the detected signature, in the same way that current-day military passive sonars and radar warning receivers can identify threats based on identifying characteristics of a received signal. In the world of Johnny Mnemonic it would make sense to assume that anyone with full custom biomechanics is extremely dangerous. Or, since Street Preacher is a resident rather than a stranger and already feared by others, his appearance and the warning could have been entered into a LoTek facial recognition database that the binocular system uses as a reference.
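Either explanation reduces to a lookup rather than reasoning. A hedged sketch of the idea (the signature names, facts, and warning text here are all invented for illustration):

```python
# Hypothetical threat library: detected signature -> (facts, warning).
# Green lines report what the sensor measured; the yellow line is the
# database's canned assessment, hence "speculative."
THREAT_DB = {
    "implant/storage": (["CRANIAL IMPLANT", "WET-WIRED STORAGE"], None),
    "biomech/full":    (["FULL CUSTOM BIOMECHANICS"], "EXTREME DANGER"),
}

def classify(signature: str) -> list:
    """Return (color, text) overlay lines for a detected signature."""
    facts, warning = THREAT_DB.get(signature, (["UNKNOWN SIGNATURE"], None))
    lines = [("green", fact) for fact in facts]
    if warning:
        lines.append(("yellow", warning))
    return lines

for color, text in classify("biomech/full"):
    print(f"[{color}] {text}")
```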
These textual overlays are an excellent interface, not interfering with normal vision and providing a fast and easy-to-understand analysis. But the user must have faith that the computer analysis is accurate. There’s no reason given as to why any of the text is displayed. If Johnny were carrying an implant in his pocket instead of in his brain, would the computer know the difference?
An alternative approach would be some kind of sensor fusion or false-spectrum display, with the raw infrared or radar image overlaid on the visuals and the viewer responsible for interpreting the data. The problem with such systems is that our visual system didn’t evolve to interpret such imagery, so a lot of training and practice is required to be both fast and accurate. And the overlay itself interferes with our normal visual recognition and processing. If the computer can do a better job of deciphering the meaning of non-visual data, it should do so and summarise for the human viewer.
Further advantages of this interface are that even a novice sentry will benefit from the built-in scanning and threat analysis, and the wireless transmission ensures that the information is shared rather than being limited to the person on watch.
In the last post we went over the Iron HUD components. There is a great deal to say about the interactions and interface, but let’s just take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can very easily move between exosuits.
Gauges
Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.
For the most part they persist as tiny icons, and are thereby hard to read, but when the suit reboots in a high-altitude freefall, we get to see giant versions of them and can finally read what they are.
Tony can, at a glance or request, summon more detail for any of the gauges.
Even different visualizations of similar information.
Object Recognition
In the 1st-person view we see that the HUD has a separate map in the lower-left, and object recognition/awareness.
In the 2nd-person view, we see even more layers of information about the identified objects, floating closer to Tony’s point of view.
Situational
Most of the HUD functions we see, though, are situational, brought up for Tony’s attention when JARVIS believes they are needed, or when Tony requests them. Following are screenshots that illustrate a moment when the situational function appeared.
Iron Man
Iron Man 2
Iron Man 3
The Avengers
Some of these illustrate why I argue that JARVIS is the superhero, and Tony just the onboard manager, but rather than reverse engineering any particular function, for this post it is enough to document them and note that only the optical zoom seems to be an interactive function. This raises the questions of how he initiates and exits the mode, but since we don’t see the mechanisms of control, it’s entirely arguable that JARVIS is just being his usual helpful self again.
Cut to the bottom of the Hudson River, where some electrical “transmission lines” rest. Tony, in his Iron Man supersuit, has his palm-mounted repulsor rays configured such that they create a focused beam capable of cutting through an iron pipe to reveal power cables within. Once the pipe casing is removed, he slides a circular device onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, up past the southern tip of Manhattan to Stark Tower.
It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?
Of course not. You know better from this blog.
The Lines
In case you were planning on living out some elaborate cosplay fantasy by remaking this scene, be aware that subsea cables don’t just sit like that inside an air-filled pipe, waiting for a leak to short-circuit the power grid. The conductive cabling is surrounded by thick plastic insulation. Pipes underwater might have been a thing years ago, but modern subsea cables are steel-armored and embedded in that same insulation. But whatever, we don’t need that for the story.
Cuffing
The cuff interaction is awesome. All Tony has to do is slide it roughly into position, and the thing does the rest: its shape and actuators take care of the precise placement. That might be overengineered for a single-use device, but whatever, he’s Tony Stark. He might have engineered it over breakfast, and he might already have made a handful for his other buildings.
Cuff
The cuff itself is less awesome. If you were a high-profile billionaire inventor superhero putting a device in a place that’s difficult to monitor, and connected directly to the electrical systems of your headquarters, would you let it glow? Sure, that makes it easy to find later, but that means it’s easy for any supervillain to find, too. Much better to camouflage it as part of the original pipe and keep its location secret from malefactors. Bad Tony. No glow.
Welding
The welding looks problematic for a couple of reasons. Does the “arc” have a fixed focal length? If so, how does he know what that length is and when he’s straying from it? We can presume it auto-adjusts using some welding subroutine of his on-board artificial intelligence, JARVIS. Then it becomes an issue of aiming. Try this: tape a three-inch pencil perpendicular to your palm, and then try to fill out a crossword puzzle. Not exactly precise.
But using a bit of apologetics, let’s presume that he’s not using a tool so much as positioning the tool that JARVIS uses. JARVIS can use cameras situated around the suit and perfect 3D modeling to continually adjust the focus and positioning of the repulsor beam to target wherever it is that Tony is looking. Perfectly reasonable given what we know about the technology in the Marvel Cinematic Universe, and moreover, the way it ought to work for its user. He uses the thing his body is expert in, i.e. looking, to guide an agent that takes care of all the details he’s not good at, i.e. controlling the repulsor beam to cut into the pipe at a precise depth. Now it’s not problematic. It’s awesome.
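Put as a control loop, the division of labor is clean: the human supplies the gaze target every frame, and the agent solves aim and focus. A speculative sketch, with every name invented (nothing in the film exposes JARVIS’s internals):

```python
import math

def aim_beam(palm_pos, gaze_pt, cut_depth):
    """Return a unit aim vector and a focal length placing the beam's
    focus cut_depth metres past the surface the user is looking at.
    Speculative: stands in for whatever JARVIS actually does."""
    dx, dy, dz = (gaze_pt[i] - palm_pos[i] for i in range(3))
    dist = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / dist, dy / dist, dz / dist), dist + cut_depth

# One frame: palm at the origin, gaze on a pipe 2 m ahead, cutting 5 mm deep.
direction, focus = aim_beam((0.0, 0.0, 0.0), (0.3, -0.1, 2.0), 0.005)
print(direction, round(focus, 3))
```

Run per frame against the suit’s 3D scene model, this keeps the cut at a precise depth no matter how imprecisely the palm itself is held.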
Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens to the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barreled rifles mounted to posts. In these helmets they have full audio access to a command and control center that gives orders and receives confirmations.
The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.
These helmets also provide the snipers an augmented reality display that grants high powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of "gyroscopic stabilization" and a red dot that appears in the crosshairs when the target has been held for a full second. The reticles do not provide any "layman" information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
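That red dot is a dwell-to-confirm signal, and the logic behind it is tiny. A minimal sketch, assuming the film’s stated one-second hold (the frame rate and the on_target test are my own stand-ins):

```python
HOLD_SECONDS = 1.0  # the film says the target must be held a full second

class LockIndicator:
    """Show the red dot only after the crosshairs have held the target
    continuously for HOLD_SECONDS; any miss resets the clock."""
    def __init__(self) -> None:
        self.held = 0.0

    def update(self, on_target: bool, dt: float) -> bool:
        self.held = self.held + dt if on_target else 0.0
        return self.held >= HOLD_SECONDS

lock = LockIndicator()
for frame in range(40):                     # ~1.3 s of frames at 30 fps
    if lock.update(on_target=True, dt=1 / 30):
        print(f"red dot appears at frame {frame}")
        break
```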
These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with high (though not full) freedom and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.
This could be used for general surveillance, of course, but the collection of technologies that we see here raises the question: If Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?
Of course you want a human to make the choice to pull a trigger/activate a weapon, because we should not leave such a terrible and deadly ethical decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.
This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: How are heroes heroic if the machines can do the hard work? This interface bypasses the conflict by retreating to simple augmentation rather than an agentive solution. Real-world designers will have to answer the question more directly.
Section 6 stations a spider tank, hidden under thermoptic camouflage, to guard Project 2501. When Kusanagi confronts the tank, we see a glimpse of the video feed from its creepy, metal, recessed eye. This view is a screen-green image, overlaid with two reticles. The larger one, with radial ticks, shows where the weapon is pointing, while the smaller one tracks the target.
I have often used the discrepancy between a weapon reticle and a target reticle to point out how far behind Hollywood is on the notion of agentive systems in the real world, but for the spider tank it’s very appropriate. The image processing is likely to be much faster than the actuators controlling the tank’s position and orientation, and the two reticles illustrate what the tank’s AI is working on. That said, I cannot work out why there is only one weapon reticle when the tank has two barrels that move independently.
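The gap between the two reticles is just control lag made visible. A sketch of the idea, assuming a first-order servo chasing an instantly updated vision track (the gain and tick count are invented):

```python
def step_weapon(weapon_xy, target_xy, gain=0.15):
    """One servo tick: move the weapon reticle a fraction of the way
    toward the target reticle -- a first-order lag."""
    wx, wy = weapon_xy
    tx, ty = target_xy
    return (wx + gain * (tx - wx), wy + gain * (ty - wy))

weapon, target = (0.0, 0.0), (100.0, 40.0)  # the target leaps to a new spot
for _ in range(20):
    weapon = step_weapon(weapon, target)
print(weapon)  # still short of (100, 40): the visible gap between reticles
```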
When the spider tank expends all of its ammunition, Kusanagi activates her thermoptic camouflage, and the tank begins to search for her. It switches from its protected white camera to a big-lens blue camera. On its processing screen, the targeting reticle disappears, and a smaller reticle appears with concentric, blinking white arcs. As Kusanagi strains to wrench open plating on the tank, her camouflage is compromised, allowing the tank to focus on her (though curiously, not to do anything like try to shake her off or slam her into a wall). As its confidence grows, more arcs appear, thicken, and circle the center.
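The arc display maps cleanly onto a single confidence score. A hedged sketch (the thresholds, arc counts, and blink rule are invented; the film only shows arcs accumulating and thickening):

```python
def arc_display(confidence: float, max_arcs: int = 6) -> list:
    """Map tracker confidence (0..1) to the concentric-arc display:
    more confidence means more, thicker arcs; uncertain arcs blink."""
    n = max(1, round(confidence * max_arcs))
    return [
        {"radius": 10 + 6 * i,             # concentric rings
         "thickness": 1 + confidence * 3,  # all thicken together
         "blink": confidence < 0.5}        # blink while uncertain
        for i in range(n)
    ]

# Camouflage intact -> compromised -> found.
for c in (0.2, 0.6, 0.95):
    print(c, len(arc_display(c)), "arcs")
```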
The amount of information on the augmentation layer is arbitrary, since it’s a machine using it and there are certainly other processes going on than what is visualized. If this were for a human user, there might be more or less augmentation necessary, depending on the amount of training they have and the goal awareness of the system. Certainly an actual crosshair in the weapon reticle would help aim it very precisely.