Fritzes 2026 bonus award: Best Comedy-Horror Interface

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition.

In this post, I award the best comedy-horror interface of 2025, then realize it is a special category of thing, gather multiple examples, and propose a name for it. It’s going to be a long one. Buckle in.

This post contains major spoilers (central twist) and a major digression.

A stylized graphic featuring a jellyfish-like creature against a dark background with the text 'MASSIVE SPOILERS AHEAD' in bold yellow lettering.

The movie is Bugonia. It is an English-language remake of the 2003 South Korean film Save the Green Planet! by Jang Joon-hwan. (Which is not streaming anywhere as far as I can tell, so I haven’t seen it yet.)

IMDB: https://www.imdb.com/title/tt0354668/

The plot

Bugonia centers on Teddy, a paranoid beekeeper, and his impressionable cousin Donny, who together kidnap Michelle Fuller. She is CEO of the pharmaceutical conglomerate Auxolith. The pair are convinced she is an extraterrestrial from the Andromeda galaxy, intent on destroying humanity. Their belief is drawn from conspiracy podcasts, fringe online sources, and Teddy’s own experimentation. Having abducted her, they chain her in their basement, shave her head, torture her, and subject her to an extended interrogation in which they hope to get her to agree to arrange a parley with the Andromedan emperor, in turn to negotiate for the withdrawal of Andromedans from Earth.

Michelle tries several tactics to escape, including reason, denial, and bargaining. While Teddy is out of the basement dealing with an investigating sheriff, Donny confesses to Michelle that it has all gone too far and shoots himself. When Teddy returns, Michelle tries absurdist escalation: she agrees that she is an alien and convinces Teddy to inject his hospitalized mother with an "alien cure" stashed in her car's trunk (actually antifreeze). He does so, killing her. Infuriated, he returns to confront Michelle, but she escalates further, claiming that she is in fact alien royalty and that he must do what she says to save humanity. He agrees to take her to her office, where she says a teleporter is hidden in the coat closet. He steps in, but the explosives he has strapped to his body detonate, killing him and freeing Michelle from the ordeal.

The spoiler

There are lots of hints along the way that Teddy and Donny don’t have a solid grasp on reality. But the sequence at the very end of the movie reframes everything that came before it, showing that Teddy’s conspiracy theories were right all along. (That in and of itself seems like a dangerous thing to put into the world, given current kayfabe fascist politics and their psychotic supporters, but it’s kind of played for comedy, so…sure, I guess?) Michelle really is queen of an alien species.

It means the long story she delivers in the basement is probably diegetically true, rather than a bid to out-conspiracy Teddy, as the audience is led to believe. In this monologue she explains (it’s long, so I’m augmenting with emoji): The Andromedans’ 75th emperor discovered Earth 🛸👑🌎 when it was ruled by dinosaurs. 🦕🦖 After his species accidentally introduced a fatal virus 🦠 that wiped out all life there, he repopulated the planet with beings modeled on the Andromedans. 👽 These early humans eventually flourished into a civilization—Atlantis—that worshipped the Andromedans as gods. 🕉️

That harmony unravelled when some Atlantean humans began engineering 🧬 stronger, more aggressive variants of themselves, triggering a war ⚔️ that ended in thermonuclear catastrophe. 💥 The few survivors drifted at sea for a century. 🌊🚣‍♂️⏳ When they returned to land, their leaders were dead, ☠️ leaving only degraded remnants from which the apes 🦍 and eventually modern humans 🧑‍🤝‍🧑 descended. The new species proved no better. They were driven by war, ⚔️ ecological destruction, 🌲➡️🪵 and self-poisoning, 🍶☠️ incapable of changing course even when confronted with evidence of their own ruin. 📉 [Which, you know, fair enough.]

The Andromedans 👽 determined the flaw was genetic, 🧬 inherited from those ancient engineered ancestors and growing stronger with each generation. Their stated mission became eliminating this suicidal gene. 🔬💉 This would save both humanity and the Earth. 🧑‍🤝‍🧑🌏 For the experiments, including those conducted on Teddy’s mother, 👩 they chose subjects selected for their weakness and brokenness, 💔 on the theory that if the most damaged humans could be corrected, all of humanity might be. 🌍✨

Whew. 😮‍💨

So, after Teddy accidentally kills himself, Michelle teleports back to her ship where she meets with her court, dons her royal regalia, and confers with them on strategy. The hive agrees that humanity is beyond saving, and to enact this decision, she approaches a circular table with a map of the earth on top. Specifically it is a Lambert azimuthal equal-area projection centered on the North Pole. (I’m a sucker for nonstandard projections, as you may recall.)
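For fellow projection nerds: since the post names the projection, here is a quick sketch of the standard polar-aspect formula for the Lambert azimuthal equal-area projection (the function name is mine; this is just an illustrative implementation, not anything from the film's production):

```python
import math

def lambert_azimuthal_polar(lat_deg, lon_deg, R=1.0):
    """Polar aspect (centered on the North Pole) of the Lambert
    azimuthal equal-area projection. Returns planar (x, y)."""
    lat = math.radians(lat_deg)
    lon = math.radians(lon_deg)
    # Radial distance from the map's center; the halved angular
    # distance inside the sine is what preserves area.
    rho = 2 * R * math.sin((math.pi / 2 - lat) / 2)
    return rho * math.sin(lon), -rho * math.cos(lon)

# The North Pole maps to the origin; the equator maps to a circle
# of radius R * sqrt(2), which bounds the hemisphere on the table.
```

The equal-area property is why it suits a "who lives, who dies" display: no region of Earth is exaggerated relative to another.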

A surreal and eerie underground environment with a circular arrangement of stone-like sculptures, surrounded by red terrain and mist, featuring a small figure in a tattered cloak standing near a central basin.

Encasing this map is a shimmering dome of translucent hexagons. (Like a beehive. I see what you did there.)

A close-up view of a decorative bowl filled with blue liquid, resembling an abstract earth or water scene, surrounded by soft, flowing material in warm colors.

She stares at it for a while.

Close-up portrait of a person with a detailed, artistic headdress, showcasing a serious expression against a dark background.

She presses the tip of a large thorn-like object into the dome. It gives and resists for half a second, but then it pops, leaving tiny clouds above the map that quickly dissipate. And that’s it. All done. She looks down with a hint of sadness. Such a loss.

There follows a 3-minute sequence of eerily still scenes from around the world, showing the 8 billion humans who have been cut down instantly as a result of that interface, while extradiegetically we hear Marlene Dietrich’s "Where Have All the Flowers Gone." Nightclubs and factories. Bedrooms and saunas. Beaches and museums. Everyone’s lying there, dead.

IMDB: https://www.imdb.com/title/tt12300742/

It’s a shockingly simple interface that wildly contrasts with the horror of the mass extermination it causes. There is no second-hand safety switch. No pair of keys that need simultaneous turning. No equivalent of an “are you sure?” confirmation dialog. No big, surging hum from a giant planet-exploding laser powering up. It is just press…pop…death. The need to hold the thorn and keep pressing is a tiny, negligible safety measure, which, again, adds to the horror for being so mismatched to its effects. For a horror movie this thing is bzzz bzzz bzzz (bee’s kiss) perfection.

We do see a few animals, like birds, moving amongst the corpses, so we know the whole biosphere isn’t affected. (Well, at least until the 500 million metric tons of corpses begin to decay and so on.) At first I thought I would have liked to see some interface preceding the pop where Queen Michelle selects our one species from amongst the 8.7 million on the planet (maybe from an interactive Hillis Plot of the Tree of Life?), but when I imagined it, I thought better of it. It would have lost the horror of its utter simplicity. As it is, it conveys that Homo sapiens sapiens was the singular problem under consideration, and this interface was just about them. Well. Killing them, anyway.

But otherwise, I don’t think the pop-interface itself makes much sense.

  • Why would it need a detailed map when it’s just a giant, momentary mass-murder button? Certainly we want labels, but this one doesn’t really explain what the button does, so it’s insufficient.
  • The dome is misleading, since it’s not depicting some atmospheric protection. The air swirls, as a display, are misleading too, because the Terran atmosphere doesn’t actually dissipate. (Sure, you can’t un-pop a bubble, and this extinction-action is irreversible, so that’s fitting.)
  • It seems prone to accidental activation. The Andromedans are managing a planetary, 66-million-year cover-their-ass project. Its end would involve…more.

So I suspect something else is going on here. I don’t think we’re seeing something literal in this sequence.

But to explain that in any depth I have to veer into some super heady film-critique stuff. If you’re just here for the interfaces, nope-out now. See you next time for Best Robots. But for the rest of you, let’s talk about…

Similar sequences

It’s one of my favorite kinds of sequences in sci-fi: you suspect the diegetic reality is unfilmable or even incomprehensible to the human mind, but the filmmaker has to show something, so they shift into a close-enough representation.

In these types of sequences, the shift from a more literal depiction to some close-enough stand-in is not marked or explained. You just have to feel that things are uncanny, decide that you’re seeing things in a different narrative register, and interpret from there.

Bugonia is not the first time we see something like this.

Other examples | 2001: A Space Odyssey (1968)

I think the first and biggest example in the survey is the white bedroom sequence at the end of 2001. Bowman’s mind is being shown something beyond his (and our) capabilities to comprehend. Kind of like a monkey mind being blown because tools. So Kubrick uses streaky lights, Louis-XVI-style bedroom furniture, illuminated floor grids, and multiple overlapping reflections of Bowman at different ages staring at each other, and you have to try to figure it out.

Other examples | Under the Skin (2013)

The Female (sorry, that’s the character name on imdb.com) looks like a seductrix, but functions more like the lure on an anglerfish. In the midnight zone where the anglerfish hunts, little fishes just see a pretty blue light and follow it, unable to perceive (or conceive?) the imminent danger of the giant, unseen, terrifying anglerfish controlling it. Similarly, The Female lures female-attracted men through a regular-looking door in a city. Once through the door, things quickly become uncanny, but the victims are so entranced by The Female, they just keep going. They walk deeper and deeper into a pool of inky blackness following her, while she walks on top of it. Once submerged in the weird liquid/not-liquid, after an elongated, spooky beat, they are suddenly flayed and the slurry of their remains goes…somewhere.

The movie, if you haven’t seen it, takes the whole thing several steps further, interrogating the existential crisis and ego death of The Female as she realizes she is just a lure, and more than that, one that is decaying and being replaced by another. I highly recommend it; even though you’ve just read massive spoilers, it’s still fantastic and worth watching and contemplating.

Other examples | Interstellar (2014)

This movie features a tesseract, a four-and-a-half-dimensional hypercube structure built by post-human beings inside the supermassive black hole Gargantua. Astronaut Cooper gets trapped within it. In this space, the film represents time as a physical, navigable dimension: an Escher-esque library with bookshelves running every which way; repeating, stretching, and infused with scenes from the life of Cooper’s daughter, Murph. From this vantage he’s able to push books from the shelves and manipulate gravity across the universe, ultimately sending Murph quantum data that is crucial for saving humanity from itself.

We poor suckers in the audience live constrained to three and a half dimensions: we can move in the X, Y, and Z directions, but are passive recipients of the half bit, i.e. time. The tesseract allows time to function like one of those navigable dimensions, which we just aren’t equipped to comprehend, so, OK, a library of books is as good a visualization as any.

Other examples | Legion (2017–2019)

(Thanks to Jonathan Korman for this last example.) In the Season 2 opener of Legion, we see a choreographed dance-off between Professor X’s psychic son David Haller, psychic parasite Amahl Farouk (posing as Oliver Bird), and fellow Clockworks patient Lenny Busker. It is a mental battle that we can’t possibly imagine, visualized as a dance battle that we can.


In each of these examples, the rest of the movie or TV show works with a standard-issue camera that shows what you might see if you were a fly on the wall in the room. But in these scenes, we’re seeing a weird in-between: an impression of the actual events as they unfold, less literal than the rest of the show but not completely abstract either. Which takes us to this next not-quite-an-example.

A slightly different example | The End of Evangelion (1997)

The Third Impact sequence from The End of Evangelion is similar, but not quite the same. In it, humanity is being unified into a single consciousness, and things shift from standard anime into a wholly abstract sequence of still images, text cards, multiple characters overlapped on the same screen from multiple people’s memories, and bits of animation that are just fill color with no lines, plus kids’ illustrations, hand drawings, abstract paint, &c.

Contrast this chaos with the examples above. In those it feels like the art direction may have gotten stranger, but third-person narrative is still happening. Bowman is trying to figure out what he’s seeing. Victims are being eaten. Cooper is sending messages. David is fighting for control.

In The End of Evangelion, we’re seeing the chaos of 8 million individuals’ memories and perceptions dissolving and fusing into a new thing. It’s more of a narrative-less, 8-million-person POV impression. Maybe I’m hair-splitting, but it does feel different.

Now that I’ve corralled those examples and that one near-example, I want to name it.

Naming it

I did a lot of web searching and I couldn’t find a fitting, extant descriptor in film theory for this kind of thing. Important caveat: I have never explicitly studied film theory, so I don’t have the benefit of a community of practice from whom I might have learned of one. But I can use Google and skip past the enshittified results to find some real ones. There were maybe half a dozen candidates. But none of them fit. So I have to coin something. I propose calling this a…

Text graphic displaying the phrase 'NARRATIVE PROXY SEQUENCE' in a stylized black font.
Admittedly setting the damned thing in Churchward Roundsquare does nothing to make it more accessible, but it’s the movie typeface, so…

(If that image didn’t load, know that it read, “narrative proxy sequence.”)

It’s a sequence because it’s unlike the rest of the narrative. It’s special. It’s a “narrative proxy” because while it’s still describing things that happen in the story, it’s using stand-ins for otherwise-unrenderable diegetic elements.

  • We can’t experience the cosmic mind-expansion that Bowman is experiencing, but we can deal with an antique bedroom set on an illuminated grid.
  • We can’t face the man-hunting anglerfish, but we can deal with a beautiful woman and an inky floor.
  • We can’t conceive a tesseract, but we can deal with a twisty library.
  • We can’t perceive a mental battle between omega-level telepaths, but we can go with a dance battle.
  • We can’t face whatever an Andromedan and their evil human-extinction interface is, but we can deal with a bubble map and a pop.

There’s one aspect that I failed to capture in the phrase “narrative proxy sequence”. In these examples, the “grand imagier” behind the film has decided that we couldn’t cope with a literal depiction of the diegetic events, or that it’s futile even to try, so get in, loser, we’re going with this instead. Compare the trope of flashbacks: they’re not happening at the moment they’re remembered, but they’re shown as if the imagier’s camera had been there, then. That’s different.

To capture this extra sense, I thought of prepending “mind-sparing”, “cognizable”, “renderable”, “semidiegetic”, or “perceptualized”, but each of them was either too wan or academic or misleading, so I left the intent part out to be inferred from context. Plus it just made the phrase too long. “Perceptualized narrative proxy sequence”, while more precise, is almost double the length. It’s just too much. So let’s go with the shorter phrase.

OK. What does this mean for sci-fi interfaces?

What’s important to us for this blog’s purposes is: When discussing an interface in a narrative proxy sequence, we don’t have access to any of the usual tools. What are the outputs? (We’re not sure.) What are the controls and how do you manipulate them? (We only have a guess.) Does it all fit together? (We can’t say.)

All of these questions are much more answerable when we’ve got a literal depiction of a speculative interface. And so, though my usual art-criticism stance is to push through and presume the interface is exactly as it appears, that analysis becomes prohibitively convoluted when we’re looking at a narrative proxy. We have to admit that it’s unavailable to the close-read analysis this blog does.

It doesn’t make it any less awesome, though. So I’m giving it this award.

If you know of other sci-fi examples of this niche trope, feel free to comment. And thank you, Bugonia, for giving us something to think about and giving us this marvelous, funny, terrifying moment of interface horror.

*pop*

The word 'BUGONIA' is displayed in a stylized font featuring various geometric shapes, set against a black background.


Next up: The best robots of 2025 (currently scheduled for 24 Apr 2026)

Fritzes 2026: Best Narrative


Today we’ll be covering Best Narrative. These movies’ interfaces blow us away with evocative visuals and the richness of their future vision. They engross us in the story world by being spectacular.

The 2026 Award goes to: Elio

Pixar consistently puts great thought into their animated interfaces, and Elio is no different. The little wearable personal devices that help the different intergalactic species share a space are so simple, and they provide both a bit of worldbuilding and moments of comedy. The incomprehensibility of the alien spaceship controls is a plot-critical, candy-colored, glowing hoot (and reminiscent of the Pixar short Lifted). I loved the lemniscate-shaped AI encyclopedia that Elio consults when preparing for his negotiations. We should be able to talk to Wikipedia and not just its articles. (Though I wish the entries were more than just text and an image.) Also, this film has the only example I’ve seen of one character acting as an environmental suit for another character (not pictured, but you know the scene).

Also check out: Mickey 17

It’s a dark world where the hoarding class has made the working class so desperate that some people have to agree to be cloned for critical tasks that are likely death sentences. The interfaces in Mickey 17 help sell that very world, and even the ways that some folks use that same tech to eke out a little naughty joy amongst the drudgery. (With echoes of a similarly flirty interface from Starship Troopers.)

Also check out: Fantastic Four: First Steps

Marvel was once a mainstay for interfaces to study, but they’ve pointed their camera increasingly away from interfaces of late. So I was delighted to see Fantastic Four: First Steps bring to life interfaces from Jack Kirby’s Silver Age Fantastic Four. I don’t know if it was CGI, but I swear the giant, spherical-quadrilateral screens are actual giant CRTs, right down to the blurriness and chromatic aberration. If that’s CGI, it’s great attention to detail from the reference material. All the spherical displays!

The “big” award in the Fritzes is Best Interface, but to amp up the anticipation, let’s look at some of the idiosyncratic awards from 2025 first.

Next up: The best comedy-horror interface

Fritzes 2026: Best Believable


Today we’ll be covering Best Believable. These movies’ interfaces adhere to solid computer-human-interaction principles and believable interactions. They engage us in the story world by being convincing.

The 2026 Award goes to: The Running Man

This second adaptation of Stephen King’s novel knocks it out of the park for the plot-central interfaces: The runner cuff and R-Cam box, the hideous sousveillance phone app for “fans”, the service design of the “free-v” show, and the in-home snitch interfaces. They lean towards narrative (missing a few things real-world counterparts would need), but all help articulate this dystopian world and the circumstances that drive the action. Moreover, I feel quite certain not making good real-world models of these horrible things is the right thing to do, especially given *gestures vaguely at the kakistocracy*.

On top of that it also has lots of awesome everyday interfaces, and it takes a level of commitment on the part of the filmmakers to go that deep in the worldbuilding. There’s a videophone interface with shades of Blade Runner. There’s a mailbox that signals its readiness and lifts off immediately after receiving a letter. (Though I would have flipped those red and green colors, so red meant “don’t put mail in here” and green meant “ready to receive”, but my invitation was lost in the mail.) The fare interfaces in the taxi. The self-driving interface of the citizen car. The piloting interfaces aboard the network plane. It’s all uncluttered, straightforward, and believable. Really well done, really well presented, and that’s hard to do in intense-action movies.

Also check out: War of the Worlds (2025) 

It got universally panned. Fair enough: neither ubiquitous government surveillance nor the current DHS bears valorization, and the virus-but-it’s-digital twist was already done. But I am impressed that this take on the classic Wells story is told almost entirely through interfaces, and that each of them is detailed and mostly realistic. The editing around the interfaces can be dizzying, and I wondered why William Radford had to do so much digital hunting at the beginning when an assistant should have been guiding his attention. But it’s impressive to bring that tale to life mostly through this unsung medium.

Also check out: Companion

With soft echoes of the interfaces in Westworld (2016), the interfaces in Companion control android and gynoid companions. (Yes, that term is deliberately coy.) They are clean and simple, which underscores the robots’ horror that they are under that much control by their owners.

My hackles are raised by “Intelligence” being a single slider. Intelligence is much more complicated than that, and the notion that it’s a single scalar variable has done a lot of damage over time. Even a little expando control would have pointed at the idea that we’re looking at a simplification. Also, I wish they’d provided a live preview of the eye color, because even with its intended use (an owner controlling their companion’s eye color) this control has them glancing up to see the effect and then back down again to adjust, which is not a satisfying feedback loop. I use this very control as an example of a “plan” assistant in my new book. Hey, all of Hollywood: buy it!

Next up: The Best Narrative interfaces from 2025

Fritzes 2026, an intro

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition. (Looking at you, Academy.) Awards are given for Best Believable, Best Narrative, and Best Interfaces (overall). Some years I give awards and shout-outs to other interesting trends or interfaces I spot along the way. This year I’ll do that, too.

History (still) unfolding note: Here in my home country we are still in the throes of Epstein-class fascism that amounts to a crimes-against-humanity, cartoonishly-incompetent, distraction-war. We are obligated to root out and overcome these forces. But we can’t be “on” 24/7, and sometimes the best thing we can do in these circumstances is resist and thrive, so despite the daily horrors, for when you’re done protesting and voting and resisting, I present this minor distraction with the full knowledge that there are other things with orders of magnitude more importance going on. It is not meant to normalize the kakistocracy.

Last year surprised me with the number of quality interfaces in sci-fi. I keep a long note on my phone across the year as I see shows, yet despite that very concrete memory anchor, when I started thinking through the complete set for 2025, I had a vague sense that there weren’t that many. But when I started looking, I was wrong. There are a lot, and some really good ones. I’ll save further comments on the whole year for the wrap-up post.

MASSIVE SPOILERS AHEAD

Major spoilers in the days and weeks ahead, as I’ll be posting these in parts. Today, a pre-award shout-out to interfaces from long-format shows.

Pre-award shout out: Series!

Long-form formats like TV shows require a lot more of me to give those interfaces their due. More watching, more capturing, more analysis. But I do watch some shows, and there’s some great, great stuff happening. Maybe I should start an Emmy-esque award series, but that takes time I do not have. But as a simple shout-out, let me name a few you might want to check out.

Check out Alien Earth!

It works within the palette of the existing movies and the genre while bringing something new to the franchise.

Check out Murderbot!

Check out their beautifully controlled palette (light gray and orange as keystone colors are just gorgeous), and what look like deeply considered interfaces throughout.

Check out Pluribus!

It’s much more of an abstract conversation, but the show is quite smart about the interfaces between the Unum (my term for the hive mind) and the free-willed. (Though come on, surely they could shorten that voice mail message after her first couple of calls.)

There are certainly some shows I’ve missed because I don’t have so much time to survey all the TV shows, much less in their entirety. Sorry if I missed your favorites, but give a comment below if there’s a series with great interfaces. As noted, though, the Fritzes are about movies, so I’ll say so long to TV for now.

Previous awards: [2021] [2022] [2023] [2024] [2025]

Next up: We’ll move on to movies and the Best Believable interfaces from 2025

Fritzes 2025 Winners

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition. (Looking at you, Academy.) Awards are given for Best Believable, Best Narrative, and Best Interfaces (overall). Sometimes I like to call out other things I spotted in my survey.

History unfolding note: On the one hand, it feels trivial and pointless to be focusing any attention on niche aspects of the film industry while my country is undergoing an oligarchic dismantling by an unelected white nationalist billionaire president and his rapist felon puppet. On the other, the best thing we can try to do in these circumstances is resist and thrive, so despite it all, I present this minor distraction with the full knowledge that there are other things with orders of magnitude more importance going on. It is not meant to normalize the coup.

Oh and hey, I managed to post this on the same day as the Oscars, for whatever that’s worth.

Best Believable

These movies’ interfaces adhere to solid CHI principles and believable interactions. They engage us in the story world by being convincing. The nominees for Best Believable were Alien: Romulus, Mars Express, and Spaceman.

Various screen caps from Alien: Romulus (2024).

Various screen caps from Spaceman (2024).

The winner of the Best Believable award for 2025 is Mars Express. Sharp-eyed readers will raise an eyebrow to object that the film was released theatrically in 2023, not 2024. But I follow the Oscars’ rules, which use the North American release dates. In this case, GKIDS acquired the rights and released it only in 2024.

Mars Express

In 2200, Aline Ruby is a private detective working with Carlos Rivera, an android backup of her partner, who had died years before. Their investigation into an android-rights activist leads them to the underbelly of Noctis, a Martian enclave. Over the course of events, they uncover more and more evidence of a movement larger and more consequential than either of them could have guessed.

Various screen caps from Mars Express (2024).

From the first unzip of a robotic cat’s skin (for washing), I knew this would be something special. The interfaces throughout are thoroughly considered and artfully executed. The microinteractions, choice of gestures and displays are—even when describing mundane things in the world like a crosswalk—thrilling to see. Pay special attention to the civic infrastructure interfaces of the car crash scene, and the environmental supports of Ruby’s alcoholism recovery. Note that the film is violent at points and thematically not wholly new, but 100% worth the watch, paying close attention to the interfaces. To underscore my recommendation, let me note it was a close call as to whether this should have won Best Interfaces.

Catch the movie on Apple+. You can also find it on some billionaire-affiliated and fascist-suckup services, but see history unfolding note above, I don’t want to send you there if I can help it.


Best Narrative

These movies’ interfaces blow us away with evocative visuals and the richness of their future vision. They engross us in the story world by being spectacular. The nominees for Best Narrative were Borderlands, V/H/S Beyond, and The Wild Robot.

Various screen caps from Borderlands (2024).

Various screen caps from The Wild Robot (2024).

The winner of the Best Narrative award for 2025 is V/H/S Beyond.

V/H/S Beyond

V/H/S is a “found-footage” anthology franchise, and V/H/S Beyond focuses on sci-fi horror. In the last segment titled “Stowaway”, Haley is an amateur UFO hunter recording a video in the Mojave Desert. Following odd lights in the sky, she finds a real, crashed UFO and enters it. The door closes behind her and the spaceship takes off. Once inside she investigates amid a growing panic as she realizes what’s going on. She becomes wounded while interacting with the ship, and when healed by the onboard medical tech, it corrects her “broken” DNA, beginning a horrifying transformation.

Various screen caps from V/H/S Beyond (2024).

Note that the screen caps and compilation are not clear because all the sequences aboard the craft are unclear. This is apropos of its cinéma vérité style and of the spaceship’s being an environment optimized for something other than humans, much less human video-capture devices.

There are a few movies that really lean in on how…uh…alien it will be to experience non-human environments, and render that alienness to screen. No green-skinned, bodice-ripping, come-hither love interests; no human-coded computer viruses able to infect alien software networks, thank you. The very material of these interfaces harms Haley. The display may not even be perceptible to us. The interactions are meant for some physiology and psychology we can only imagine. Certainly not the squishy meat popsicles that humans are. If I had to lay odds, the experience of alien interfaces will much more closely resemble the terror we feel when watching this segment than whiz-bang holograms. It is a study in otherness, and even automation, that bears close attention.

Watch it on Apple TV+.


Displays

I have chosen to impose a limitation on myself in this blog and for these awards, and that’s that I review interactions, not merely displays. That means I need to see what users are doing with the speculative technology and tell how it’s effecting a state-change in the system. Even if it’s just a finger press to a button, or a gesture, or even a grunt, without that obvious input, I can’t really tell you if it’s a good interface supporting the interaction or not. But that constraint really hurt this year, because there were so many gorgeous displays where we didn’t see the interactions driving them. Before we get to the Best Interfaces award, let me take a moment to at least give a shout-out to some of these.

The Harkonnen sand table from Dune 2 (2024). The details are art, almost like elegant filigree; calm, floating, arcane sigils greatly contrasting the Harkonnen brutality they convey. No surprise it won Best Visual Effects at the Oscars this year.

The user manual from Atlas (2024). It’s overwhelming, funny, and maintains its clear visual hierarchy.

Mr. Paradox tells Deadpool that the Wolverine he has retrieved is the worst of them, in Deadpool & Wolverine (2024). The interfaces visually reinforce the central narrative conceit of the sacred timeline and telegraph the long-running history of the TVA.

Nice work to all the display designers out there. Y’all are doing some fine work. I just don’t have enough authority as an aesthete to offer awards based on the displays alone.


Best Interfaces

The movies nominated for Best Interfaces manage the extraordinary challenge of being believable and helping to paint a picture of the world of the story. They advance the state of the art in telling stories with speculative technology.

The winner of the Best Interfaces award for 2024 is Atlas.

Atlas

This movie tells the story of an AI-hating analyst named Atlas who finds herself on a remote planet as the lone survivor of a military expedition to take down a human-hating genocidal android named Harlan. Fortunately she has an ARC mech suit with all the military’s latest technology. Unfortunately it houses an artificial intelligence named Smith. As she slowly learns the ARC’s capabilities and uses it to hunt down Harlan, she also faces her own trauma and bonds with Smith. Will it be enough for her to finally “synch” with the suit to unlock its full potential, defeat Harlan’s android army, and prevent the interstellar assault on Earth?

Various screen caps from Atlas (2024).

A few scenes are over-the-top gee-whiz-ism, but almost all of the rest is well-thought-out, consistently designed, and fully in support of Atlas’ goals. Keep an eye out for the augmented reality escape HUD that bests the one seen in Warriors of Future from 2022. And as I described in the HUD comparison post, this is the first time I recall seeing predictive augmentation outside of video games. It’s deeply future-looking, quite germane to the prediction capabilities of AI, instantly understandable, critical to the plot, and full of climactic spectacle.

I will note that it’s written with the presupposition that Smith is a sympathetic character that we can trust, and it’s really Atlas’ hangups that are the problem. That’s a little unnerving because we know how charming and thereby manipulative the large language models of today can be. The more I study overreliance and underreliance, the more I want to see skepticism and literacy written onto the silver screen for audiences to internalize. We should keep AI at arm’s length as a society and as individuals—just as Atlas does—if, hopefully, not for the same reasons.

Catch Atlas and appreciate its awesome interfaces on Netflix.


Congratulations to all the candidates and the winners. Thank you for helping advance the art and craft of speculative interfaces in cinema.

Is there something utterly fantastic that I missed? It’s possible. Let me know in the comments, I’d love to see what you’ve got.

Comparing Sci-Fi HUDs in 2024 Movies

As in previous years, in preparation for awarding the Fritzes, I watched as many sci-fi movies as I could find across 2024. One thing that stuck out to me was the number of heads-up displays (HUDs) across these movies. There were a lot of them. So in advance of the awards, let’s look at and compare them. (Note the movies included here are not necessarily nominees for a Fritz award.)

I usually introduce the plot of every movie before I talk about it, since it provides some context for understanding the interfaces. However, that will happen in the final Fritzes post, so I’m going to skip it here. Still, it’s only fair to say there will be some spoilers as I describe these.

If you read Chapter 8 of Make It So: Interaction Lessons from Science Fiction, you’ll recall that I’d identified four categories of augmentation.

  1. Sensor displays
  2. Location awareness
  3. Context awareness (objects, people)
  4. Goal awareness

These four categories are presented in increasing level of sophistication. Let’s use these to investigate and compare five primary examples from 2024, in order of their functional sophistication.

Dune 2

Lady Margot Fenring looks through augmented opera glasses at Feyd-Rautha in the arena. Dune 2 (2024).

True to the minimalism that permeates much of the film’s interfaces, the AR of this device has a rounded-rectangle frame from which hangs a measure of angular degrees to the right. There are a few ticks across the center of this screen (not visible in this particular screen shot). There is a row of blue characters across the bottom center. I can’t read Harkonnen, and though the characters change, I can’t quite decipher what most of them mean. But it does seem the leftmost character indicates the azimuth and the rightmost the angular altitude of the glasses. Given the authoritarian nature of this House, it would make sense to have some augmentation naming the royal figures in view, but I think it’s a sensor display, which leaves the user with a lot of work to figure out how to use that information.

You might think this indicates some failing of the writer’s or FUI designers’ imagination. However, an important part of the history of Dune is a catastrophic conflict known as the Butlerian Jihad, which involved devastating, large-scale wars against intelligent machines. As a result, machines with any degree of intelligence are considered sacrilege. So it’s not an oversight, but it does mean we can’t look to this as a model for how we might handle more sophisticated augmentations.

Alien: Romulus

Tyler teaches Rain how to operate a weapon aboard the Renaissance. Alien: Romulus (2024)

A little past halfway through the movie, the protagonists finally get their hands on some weapons. In a fan-service scene similar to one between Ripley and Hicks from Aliens (1986), Tyler shows Rain how to hold an FAA44 pulse rifle. He also teaches her how to operate it. The “AA” stands for “aiming assist”, a kind of object awareness. (Tyler asserts this is what the colonial marines used, which kind of retroactively saps their badassery, but let’s move on.) Tyler taps a small display on the user-facing rear sight, and a white-on-red display illuminates. It shows a low-res video of motion happening before it. A square reticle with crosshairs shows where the weapon will hit. A label at the top indicates distance. A radar sweep at the bottom indicates movement in 360° plan view, a sensor display.

When Rain pulls the trigger halfway, the weapon quickly swings to aim at the target. There is no indication of how it would differentiate between multiple targets. It’s also unclear how Rain told it that the object in the crosshairs earlier is what she wants it to track now. Or how she might identify a friendly to avoid. Red is a smart choice for low-light situations, since red light is known not to interfere with night vision. The display is also elegantly free of flourishes and fuigetry.

I’m not sure the halfway-trigger is the right activation mechanism. Yes, it allows the shooter to maintain a proper hold and remain ready with the weapon, and allows them not to have to look at the display to gain its assistance, but it also requires them to be in a calm, stable circumstance that allows for fine motor control. Does this mean that in very urgent, chaotic situations, users are just left to their own devices? Seems questionable.

Alien: Romulus is beholden to the handful of movies in the franchise that preceded it. Part of the challenge for its designers is to stay recognizably a part of the body of work that was established in 1979 while offering us something new. This weapon HUD stays visually simple, like the interfaces from the original two movies. It narratively explains how a civilian colonist with no weapons training can successfully defend herself against a full-frontal assault by a dozen of this universe’s most aggressive and effective killers. However, it leaves enough unexplained that it doesn’t really serve as a useful model.

The Wild Robot

Roz examines an abandoned egg she finds. The Wild Robot (2024)

HUD displays of artificially intelligent robots are always difficult to analyze. It’s hard to determine what’s an augmentation (here loosely defined as an overlay on some datastream created for a user’s benefit but explicitly not by that user) as opposed to a visualization of the AI’s own thoughts as they happen. I’d much rather analyze these as augmentation provided for Roz, but it just doesn’t hold up to scrutiny that way. What we see in this film are visualizations of Roz’ thoughts.

In the HUD, there is an unchanging frame around the outside. Static cyan circuit lines extend to the edge. (In the main image above, the screen-green is an anomaly.) A sphere rotates in the upper left, unconnected to anything. A hexagonal grid on the left has some hexes that illuminate and blink, unconnected to anything. The grid moves, unrelated to anything. These are fuigetry, and they neither convey information nor provide utility.

Inside that frame, we see Roz’ visualized thinking across many scenes.

  • Locus of attention—Many times we see a reticle indicating where she’s focused, oftentimes with additional callout details written in robot-script.
  • “Customer” recognition—(pictured) Since it happens early in the film, you might think this is a goofy error. The potential customer she has recognized is a crab. But later in the film, Roz learns the language common to the animals of the island. All the animals display a human-like intelligence, so it’s completely within the realm of possibility that this little blue crustacean could be her customer. Though why that customer needed a volumetric wireframe augmentation is very unclear.
  • X-ray vision—While looking around for a customer, she happens upon an egg. The edge detection indicates her attention. Then she performs scans that reveal the growing chick inside and a vital signs display.
  • Damage report—After being attacked by a bear, Roz does an internal damage check and she notes the damage on screen.
  • Escape alert—(pictured) When a big wave approaches the shore on which she is standing, Roz estimates the height of the wave to be five times her height. Her panic expresses itself in a red tint around the outside edge.
  • Project management—Roz adopts Brightbill and undertakes the mission to mother him—specifically to teach him to eat, swim, and fly. As she successfully teaches him each of these things, she checks it off by updating one of three graphics that represent the topics.
  • Language acquisition—(pictured) Of all the AR in this movie, this scene frustrates me the most. There is a sequence in which Roz goes torpid to focus on learning the animal language. Her eyes are open the entire time she captures samples and analyzes them. The AR shows word bubbles associated with individual animal utterances. At first those bubbles are filled with cyan-colored robo-ese script. Over the course of processing a year’s worth of samples, individual characters in the utterances are slowly replaced with bold, green, Latin characters. This display kind of conveys the story beat of “she’s figuring out the language,” but befits cryptography much more than the acquisition of a new language.

If these were augmented reality, I’d have a lot of questions about why it wasn’t helping her more than it does. It might seem odd to think an AI might have another AI helping it, but humans have loads of systems that operate without explicit conscious thought, like preattentive processing, all the functions of our autonomic nervous system, sensory filtering, and recall, just to name a few. So I can imagine it would be a fine model for AI-supporting-AI.

Since it’s not augmented reality, it doesn’t really act as a model for real world designs except perhaps for its visual styling.

Borderlands

Claptrap is a little one-wheel robot that accompanies Lilith through her adventures on and around Pandora. We see things through his POV several times.

Claptrap sizes up Lilith from afar. Borderlands (2024).

When Claptrap first sees Lilith, it’s from his HUD. Like Roz’ POV display in The Wild Robot, the outside edge of this view has a fixed set of lines and greebles that don’t change, not even for a sensor display. I wish those lines had some relationship to his viewport, but that’s just a round lens and the lines are vaguely like the edges of a gear.

Scrolling up from the bottom left is an impressive set of textual data. It shows that a DNA match has been made (remotely‽ What kind of resolution is Claptrap’s CCD?) and some data about Lilith from what I presume is a criminal justice data feed: Name and brief physical description. It’s person awareness.

Below that are readouts for programmed directive and possible directive tasks. They’re funny if you know the character. Tasks include “Supply a never-ending stream of hilarious jokes and one-liners to lighten the mood in tense situations” and “Distract enemies during combat. Prepare the Claptrap dance of confusion!” I also really like the last one: “Take the bullets while others focus on being heroic.” It both foreshadows a later scene and touches on the problem raised with Dr. Strange’s Cloak of Levitation: How do our assistants let us be heroes?

At the bottom is the label “HYPERION 09 U1.2,” which I think might be location awareness? The suffix changes once they get near the vault. Hyperion is a faction in the game. I’m not certain what it means in this context.

When driving in a chase sequence, his HUD gives him a warning about a column he should avoid. It’s not a great signal. It draws his attention but then essentially says “Good luck with that.” He has to figure out what object it refers to. (The motion tracking, admittedly, is a big clue.) But the label is not under the icon. It’s at the bottom left. If this were for a human, it would add a saccade to what needs to be a near-instantaneous feedback loop. Shouldn’t it be an outline or color overlay to make it wildly clear what and where the obstacle is? And maybe some augmentation on how to avoid it, like an arrow pointing right? As we see in a later scene (below) the HUD does have object detection and object highlighting. There it’s used to find a plot-critical clue. It’s just oddly not used here, you know, when the passengers’ lives are at risk.

When the group goes underground in search of the key to the Vault, Claptrap finds himself face to face with a gang of Psychos. The augmentation includes little animated red icons above the Psychos. Big Red Text summarizes “DANGER LEVEL: HIGH” across the middle, so you might think it’s demonstrating goal and context awareness. But Claptrap happens to be nigh-invulnerable, as we see moments later when he takes a thousand Psycho bullets without a scratch. In context, there’s no real danger. So,…holup. Who’s this interface for, then? Is it really aware of context?

When they visit Lilith’s childhood home, Claptrap finds a scrap of paper with a plot-critical drawing on it. The HUD shows a green outline around the paper. Text in the lower right tracks a “GARBAGE CATALOG” of objects in view with comments, “A PSYCHO WOULDN’T TOUCH THAT”, “LIFE-CHOICE QUESTIONING TRASH”, “VAULT HUNTER THROWBACK TRASH”. This interface gives a bit of comedy and leads to the Big Clue, but raises questions about consistency. It seems the HUDs in this film are narrativist.

In the movie, there are other HUDs like this one, for the Crimson Lance villains. They fly their hover-vehicles using them, but we don’t get nearly enough time to tease the parts apart.

Atlas

The HUD in Atlas appears when the titular character Atlas is strapped into an ARC9 mech suit, which has its own AGI named Smith. Some of the augmentations are communications between Smith and Atlas, but most are augmentations of the view before her. The viewport from the pilot’s seat is wide, and the augmentations appear there.

Atlas asks Smith to display the user manuals. Atlas (2024)

On the way to evil android Harlan’s base, we see the frame of the HUD has azimuth and altitude indicators near the edge. There are a few functionless flourishes, like arcs at the left and right edges. Later we see object and person recognition (in this case, an android terrorist, Casca Decius). When Smith confirms they are hostile, the square reticles go from cyan to red, demonstrating context awareness.

Over the course of the movie Atlas has resisted Smith’s call to “sync” with him. At Harlan’s base, she is separated from the ARC9 unit for a while. But once she admits her past connection to Harlan, she and Smith become fully synched. She is reunited with the ARC9 unit and its features fully unlock.

As they tear through the base to stop the launch of some humanity-destroying warheads, they meet resistance from Harlan’s android army. This time the HUD wholly color codes the scene, making it extremely clear where the combatants are amongst the architecture.

Overlays indicate the highest priority combatants that, I suppose, might impede progress. A dashed arrow stretches through the scene indicating the route they must take to get to their goal. It focuses Atlas on their goal and obstacles, helping her decision-making around prioritization. It’s got rich goal awareness and works hard to proactively assist its user.

Despite being contrasting colors, the overlays are well-controlled so they do not vibrate. You might think that the luminance of the combatants and architecture should be flipped, but the ARC9 is bulletproof, so there’s no real danger from the gunfire. (Contrast Claptrap’s fake danger warning, above.) Saving humanity is the higher priority. So the brightest (yellow) means “do this,” the second brightest (cyan) means “through this,” and the darkest (red) means “there will be some nuisances en route.” The luminance is where it should be.

In the climactic fight with Harlan, the HUD even displays a predictive augmentation, illustrating where the fast-moving villain is likely to be when Atlas’ attacks land. This crucial augmentation helps her defeat the villain and save the day. I don’t think I’ve seen predictive augmentation outside of video games before.


If I were giving out an award for best HUD of 2024, Atlas would get it. It is the most fully-imagined HUD assistance across the year, and consistently, engagingly styled. If you are involved with modern design or the design of sci-fi interfaces, I highly recommend you check it out.

Stay tuned for the full Fritz awards, coming later this year.

You’re the only one who can stop him

Superhero shows are a weird subgenre of sci-fi. The super-powers, and how the superheroes use them in pursuit of their world-saving goals, are often the point, so these shows often skimp on the sci part of sci-fi. The Amazon original The Boys is no different, where the core novum is a chemical (compound V) that gives people superpowers.

I love the show. Though it’s definitely for adults with its violence and psychopathy and depravity, I think it’s closer to what would happen if humans had superhuman powers in a world of late-stage capitalism, enshittification of everything, and wannabe fascists. I’ve been a fan since it first aired. (And can’t wait to dive into the comics after the show wraps.)

Be forewarned—massive spoilers ahead. (The graphic shows the Millennium Falcon sporting a massive spoiler.)

It hasn’t really had many interfaces of note across the series. And the one I’m going to talk about in this post isn’t a “big” interface. But it was bad, so I’m coming out of my hiatus to talk about it, and then to make an appeal similar to what I did when I reviewed Idiocracy in 2019.


A screen shot from the scene with Grace leaning down to talk to Ryan while Butcher looks on in the background.

In the Season 4 finale—hastily renamed “Season 4 Finale” instead of “Assassination Run” after the alleged July 13 assassination attempt of Donald Trump—co-founders of The Boys, Grace Mallory and Butcher, invite the young supe Ryan to an underground bunker with three goals in mind.

  1. Give him some time with Butcher who, as a kind of stepfather to Ryan, wants to see him before he dies. (Butcher is dying from a “sentient tumor” that developed from his overuse of “Temp V”.)
  2. Convince Ryan to turn against his father, Homelander.
  3. Entrap Ryan if he refuses.

It’s this last goal that involves the interface, because sure enough, Ryan is highly conflicted at the idea of killing his father after Butcher explains “You’re the only one who can stop him.”

“You’re the only one who can stop him.” —Butcher

As Ryan tries to leave to think things through, Grace blocks his way, saying “You can’t leave.” Ryan uses his super vision to observe that the walls of the room they’re in are 6 feet thick. Grace tries to explain, “This is the CIA Hazlet Safehouse, designed to hold people like you. I could seal us in here, flood the room with halothane, and we’d all take a nice, long nap.” As Ryan gets more agitated and threatens to leave anyway, she reaches out to a big, red momentary button mounted to the concrete wall beside her, presumably to release the aerosolized anesthesia.

A screen shot from the scene showing Grace’s hand on the junction box on which the big button sits, her index finger reaching up towards it.
Let’s get this party started.

And that’s it. That’s the interface. Because in a show that is very compellingly written, this is bad design.

It’s obvious

Being a big, red panic button, it might as well have a spotlight on it and a neon sign blinking “Press here to suppress.” Any supe worth their salt will recognize it as a threat and seek to disable it. I trust it would have a Normally Closed circuit, so that ripping the button out of the wall or severing the conduit would trip it, but a supe with Ryan or Homelander’s x-ray vision could just follow the circuit back to discover the nature of the halothane system and work from there. Much better is a system that wouldn’t call attention to itself.

It’s hard to get to

It’s hard to tell the complete room layout from the scene. It looks half hospital recovery room, half storage room, and I suspect it’s a converted supe prison cell (with windows, though?). The button appears to be just inside…the bathroom? Out of sight of the main part of the room, sure, so kind of hidden unless the supe ever needs to pee, but also harder to get to. A single button at around elbow-height works when a near-average-height person is upright and able to reach out to press it. But if you’ve just been knocked down, or had your arm laser-severed, or, I don’t know, been body-slammed across the room away from that button, you’re screwed. Even a ceiling-to-floor crash bar doesn’t work because it still requires your being within arm’s reach of that one spot. Better is a system that does not depend on where anyone is in the room for activation.

It works at human response speed

This is a world with super-speed and mind-control supes. It doesn’t make sense to rely on human response times to activate it. Better is a semi-automated system that monitors everything and can respond in microseconds when data trends suspiciously.

Between its being obvious, hard to get to, and requiring manual activation, I think nearly every single supe in the show would find it trivial to stop that button from being pressed if they wanted to.

The scene could have been written more smartly—without sacrificing the efficiency of the beat—with something like this…

  • Grace
  • This is the CIA Hazlet Safehouse, designed to hold people like you. If you try to leave…
  • Cut to an arc shot of a supe-monitoring display. On the side, a live transcript of the conversation types out Grace’s words as she speaks them. In the center, infrared video of them in the room with overlays for each of them labeled SUPE or human, live vital signs, and a line showing their AI-predicted movements.
  • Grace (voiceover)
  • …or any of our vital signs crash…
  • Cut back to the actors
  • Grace
  • …the room is flooded with halothane and we all take a nice, long nap.
  • Zoom in to Ryan’s face as his eyes dart around and his breathing intensifies.
  • Cut to interface reading “escape prediction” and a number rising to 75, 80, 85. At 90 it turns red and a soft alarm goes off.
  • Cut to an extreme close up of Ryan’s ear to show he hears this alarm.

This isn’t obvious to the supe, works faster than a human could, and doesn’t rely on a human being in a specific spot.

Now instead of this, we could have Ryan brag about what a bad-ass he is and escape before the system can react, but this moment is constructed in the original to show that Ryan isn’t just an arrogant mini-Homelander. He’s a conflicted adolescent with an adolescent’s poor impulse control, and he panicked seeing her reach for the button. Having an alarm sets that same stage for him to panic. Note that I don’t think it’s good design for a system to tip its hand before it enacts control measures—as this does with the alarm—but it would be more forgivable than the dumb button, which just paints the CIA as incompetent and undermines the diegesis.


A screen shot from the episode, showing Homelander looking at a wad of his graying pubic hair in his hand, because he’s seriously fucked up.

OK, that said, this next bit goes out to my fellow Americans:

One of the reasons I have wanted to talk about this show is not just the fascism of the villains, but how it illustrates the corrupting effect of power, and that’s directly related to the coming American election.

With Biden dropping out of the race yesterday, and the Democratic National Convention a month away, I can’t yet formally lean on the merits of the Democratic candidate to make a case for weeks to come. (Though, go go go, Kamala!) But the case against the Republican party almost makes itself.

What we are facing as a nation with this election is existential. The Supreme Court has outrageously ruled that a president is unaccountable for his actions while in office. A dictator’s wet dream. And Trump has declared publicly that he will be a dictator “on day one,” but it’s easy to see that he means “as of day one”. What malignant narcissist willingly gives up power once he has it? His many ties to the wretched Heritage Foundation and its deeply, deeply disturbing Project 2025 (see this video and this one where he directly praises this group and their plan) tell us that if he is elected and his cronies have their way, we fall towards an extremist religious-nationalism that puts The Boys to shame and spells the end of the ideals and institutions that were the reason the United States was invented in the first place. The American Experiment is on the brink.

But to quote the ACLU, despair and resignation are not a strategy. We have to America-up and enact a strategy. Please, please…

Expose the Extremism

Get familiar with the extremist plans (the Christianization and militarization of public schools, cutting overtime protections for 4.3 million people, banning labor unions, privatizing Medicare, replacing a million experts with loyalist lackeys, putting the DOJ under presidential control, closing NOAA and ending free weather reports, categorizing LGBTQ+ folks as pederasts and instituting a death penalty for it, trying to pass a constitutional amendment to make abortion illegal, and much more) and share those often and loudly on your social media platforms of choice. Especially reach out to anyone on the fence, in a swing state (Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin), or who thinks they should just sit this one out because the (current) candidates are so old or not doing enough of what they want. We cannot afford “protest votes.”

Volunteer

If you don’t have money to spare (and with the current income inequality plaguing the nation, that’s likely to be most of us), you can donate time and effort. If you’re in a solidly-colored state, you can join texting and letter-writing campaigns to those in swing states. If you’re in a swing state (Arizona, Georgia, Michigan, Pennsylvania, and Wisconsin), you can help canvass voters who are still deciding. (How they’re still undecided is utterly alien to me, but here we are.) Here are just a few places you can opt to volunteer.

Donate

If you do have money to spare, spare it. Give to progressive and Democratic causes that will use that buying power to get ads, get the word out, and support the vote. Dig deep, because I know we’ve heard it before, but this one is critical.

Vote

Most importantly, have a plan to vote. Register if you’re not. If you are, double-check your voter registration status, because voter rolls are purged just before elections, often bumping Democrats for the most trivial of reasons. Vote by mail if you are overseas or if getting time off on the day might be a problem. Find your polling location. Make a plan with others to go vote together. Charge your phone and bring water in case there are long lines. (And many bastards have worked very hard to ensure there will be long lines.) Get calendar reminders for voting deadlines sent directly to you.

If everyone gets out there and activates the vote, we can avoid giving the absolutely wrong people the power they should not have. You’re the only one who can stop him.

Fritzes 2024 Winners

So I missed synchronizing the Fritzes with the Oscars. By like, a lot. A lot a lot. That hype curve has come and gone. (In my defense, it’s been an intensely busy year.) I don’t think providing nominees and then waiting to reveal winners makes sense now, so I’ll just talk about them. It was another year where there weren’t a lot of noteworthy speculative interfaces, from an interaction design point of view. This is true enough that I didn’t have enough candidates to fill out my usual three categories of Believable, Narrative, and Overall. So, I’m just going to do a round-up of some of the best interfaces as I saw them, and at the end, name an absolute favorite.

The Kitchen

In a dystopian London, the rich have eliminated all public housing but one last block, known as The Kitchen. Izi and Benji live there and are drawn together by the death of Benji’s mother, who turns out to be one of Izi’s romantic partners from the past. The film is full of technology, but the part that really struck me was the Life After Life service where Izi works and where Benji’s mom’s funeral happens. It’s reminiscent of the Soylent Green suicide service, but much better done, better conceived. The film has a sci-fi setting, but don’t expect easy answers or a Marvel-esque plot here. This film is about relationships amid struggle, and it ends quite ambiguously.

The funerary interfaces are mostly translucent cyans with pinstripe dividing lines to organize everything. In the non-funerary interfaces, the cyan is replaced with bits of saturated red. Everything, funerary and non-, feels as if it has the same art direction, which lends to reading the interfaces extradiegetically, but maybe that’s part of the point?

The Pod Generation

This dark movie considers what happens if we gestated babies in technological wombs called pods. The interactions with the pod are all some corporate version of intuitive, as if Apple had designed them. (Though the swipe-down to reveal is exactly backwards. Wouldn’t an eyelid or window-shade metaphor be more natural? Maybe they were going for an oven metaphor, as in bun in the oven? But then what about the cooking-a-child implications? No, it’s just wrong.)

The design is largely an exaggeration of Apple’s understated aesthetic, except for the insane, giant floral eyeball that is the AI therapist. I love how much it reads like a weirdcore titan and the characters are nonplussed, telegraphing how much the citizens of this world have normalized to inhumanity. I have to give a major ding to the iPad interface by which parents take care of their fetuses, as its art direction is a mismatch to everything else in the film and seems quite rudimentary, like a Flash app circa 1998.

Before I get to the best interfaces of the year, let’s take a moment to appreciate two trends I saw emerging in 2023. That of hyperminimalist interfaces and of interface-related comedy.

Hyperminimalist interfaces

This year I noticed that many movies are telling stories with very minimal interfaces. As in, you can barely call them designed, they’re so very minimalist. This feels like a deliberate contrast to the overwhelming spectacle that permeates, say, the MCU. They certainly reduce the interface down to just the cause and effect that are important to the story. Following are some examples that illustrate this hyperminimalism.

This could be a cost-saving tactic, but per the default New Criticism stance of this blog, we’ll take it as a design choice and note it’s trending.

Shout-out: Interface Comedy

I want to give a special shout-out to interface-related comedy over the past year.

Smoking Causes Coughing

The first comes from the French gonzo horror sci-fi Smoking Causes Coughing. In a nested story told by a barracuda being cooked on a grill, Tony is the harried manager of a log-processing plant whose day is ruined when her nephew somehow becomes stuck in an industrial wood shredder. Over the course of the scene she attempts to reverse the motor, failing each time, partly owing to the unlabeled interface and bad documentation. It’s admittedly not a sci-fi interface, just one in a sci-fi film, but it’s a very gory, very hilarious bit of interface humor in a schizoid film.

Guardians of the Galaxy 3

The second is Guardians of the Galaxy 3. About a fifth of the way into the movie, the team spacewalks from the Milano to the surface of Orgocorp to infiltrate it. Once on the surface, Peter, who still pines for alternate-timeline Gamora, tries to strike up a private conversation with her. The suits have a forearm interface featuring a single row of colored stay-state buttons that roughly match the colors of the spacesuits they’re wearing. Quill presses the blue one and tries in vain to rekindle the spark between him and Gamora in a private conversation. But then a minute into the conversation, Mantis cuts in…

  • Mantis
  • Peter you know this is an open line, right?
  • Peter
  • What?
  • Mantis
  • We’re listening to everything you’re saying.
  • Drax
  • And it is painful.
  • Peter
  • And you’re just telling me now‽
  • Nebula
  • We were hoping it would stop on its own.
  • Peter
  • But I switched it over to private!
  • Mantis
  • What color button did you push?
  • Peter
  • Blue! For the blue suit!
  • Drax
  • Oh no.
  • Nebula
  • Blue is the open line for everyone.
  • Mantis
  • Orange is for blue.
  • Peter
  • What‽
  • Mantis
  • Black is for orange. Yellow is for green. Green is for red. And red is for yellow.
  • Drax
  • No, yellow is for yellow. Green is for red. Red is for green.
  • Mantis
  • I don’t think so.
  • Drax
  • Try it then.
  • Mantis (screaming)
  • HELLO!
  • Peter writhes in pain
  • Mantis
  • You were right.
  • Peter
  • How the hell am I supposed to know all of that?
  • Drax
  • Seems intuitive.

The Marvels

A third comedy bit happens in The Marvels, when Kamala Khan is nerding out over Monica Rambeau’s translucent S.H.I.E.L.D. tablet. She says…

  • Khan
  • Is this the new iPad? I haven’t seen it yet.
  • Rambeau
  • I wish.
  • Khan
  • Wait, if this is all top secret information, why is it on a clear case?

Rambeau has no answer, but there are, in fact, some answers.

Anyway, I want to give a shout-out to the writers for demonstrating with these comedy bits some self-awareness and good-natured self-owning of tropes. I see you and appreciate you. You are so valid.

Best Interfaces of 2023

But my favorite interfaces of 2023 come from Spider-Man: Across the Spider-Verse. The interfaces throughout are highly stylized (so it might be tough to perform the detailed analysis that is this site’s bread-and-butter) but they play the plot points perfectly.

In Across the Spider-Verse, while dealing with difficulties in his home life and chasing down a new supervillain called The Spot, Miles Morales learns about The Society. The Society is a group of (thousands? tens of thousands of?) Spider-people of every stripe and sort from across the Multiverse, whose overriding mission is to protect “canon” events in each universe that, no matter how painful, they believe are necessary to keep the fabric of reality from unraveling. It’s full of awesome interfaces.

Lyla is the general artificial intelligence that has a persistent volumetric avatar. She’s sassy and disagreeable and stylish and never runs, just teleports.

The wrist interfaces—called the Multiversal Gizmo—worn by members of The Society all present highly-contextual information with most-likely actions presented as buttons, and, as needed, volumetric alerts. Also note that Miguel’s Gizmo is longer, signaling his higher status within The Society.

Of special note is the volumetric display that Spider-Gwen uses to reconstruct the events at the Alchemax laboratory. The interface is so smart, telegraphing its complex functioning quickly and effectively, and describing a use that builds on conceivable but far-future applications of inference. The little dial that pops up allowing her to control the time of the playback reminds me of the Eye of Agamotto (though sadly I didn’t see evidence of the important speculative time-control details I’d provided in that analysis). The in-situ volumetric reconstruction reminds me of some of the speculative interfaces I’d proposed in the review of Deckard’s photo inspector from Blade Runner, and so was a big thrill to see.

All of the interfaces have style, are believable for the diegesis, and contribute to the narrative with efficiency. Congratulations to the team crafting these interfaces, and if you haven’t seen it yet, what are you waiting for? Go see it. It’s in a lot of places and the interfaces are awesome. (For full disclosure, I get no kickback from these referral links.)

Lessons in instrument design from Star Trek

by S. Astrid Bin 

Editor’s Note: Longtime fans of this site may be familiar with its “tag line,” “Stop watching sci-fi. Start using it.” So I was thrilled when a friend told me they had seen Astrid present how she had made an instrument from a Star Trek episode real! Please welcome Astrid as she tells us about the journey and lessons learned from making something from a favorite sci-fi show real. —Christopher

I’ve been watching Star Trek for as long as I can remember. Though it’s always been in the air of culture, it wasn’t until March 2020—when we were all stuck at home with Netflix and nothing else to do—that I watched all of it from the beginning.

Discovering Trek Instruments

I’m a designer and music researcher, and I specialise in interfaces for music. When I started this Great Rewatch with my husband (who is an enormous Trek fan, so nothing pleased him more) I started noting every musical instrument I saw. What grabbed me was they were so different from the instruments I write about, design, make, and look at, because none of these instruments, you know, actually worked. They were pure speculation, free even of the conventions of the last couple of decades since computers became small and powerful enough that digital musical instruments started to become a common thing on Kickstarter. I got excited every time I saw a new one.

What struck me the most about these instruments is that how they worked didn’t ever seem to enter into the mind of the person who dreamed them up. This sure is a departure for me, as I’ve spent more than ten years designing instruments and worrying about the subtleties of sensors, signal processing, power requirements, material response, fabrication techniques, sound design, and countless other factors that come into play when you make novel digital musical instruments. The instruments in Star Trek struck me as anarchic, because it was clear the designers didn’t consider at all how they would work, or, if they did, they just weren’t concerned. Some examples: Tiny instruments make enormous sounds. Instruments are “telepathic”. Things resonate by defying the laws of physics. Some basic sound design is tossed in at the end, and bam, job done.

Some previous instrument design projects. From left: Moai (electronic percussion), Keppi (electronic percussion), Gliss (synth module interaction, as part of the Bela.io team)

I couldn’t get over how different this was to the design process I was used to. Of course, this is because the people designing these instruments weren’t making “musical instruments” the way we know them, as functional cultural objects that produce sound of some kind. Rather, Trek instruments are storytelling devices, alluring objects that have a narrative and character function, and the sound they make and how they might work is completely secondary. These instruments have a number of storytelling purposes, but most of all they serve to show that alien civilisations are as complex, creative and culturally sophisticated as humans’.

This was striking, because I was used to the opposite: so often the technical aspects of an instrument—and there are many, from synthesis to sensors—somehow become the most significant determining factor in an instrument’s final form.

The Aldean Instrument

There was one instrument that especially intrigued me, the “unnamed Aldean instrument” from Season 1, Episode 16 of Star Trek: The Next Generation, “When the Bough Breaks”. This instrument is a light-up disc that is played by laying hands on it, through which it translates your thoughts to sound. In this episode the children of the Enterprise are kidnapped by a race of people who can’t reproduce (spoiler alert: it was an environmental toxin, they’re fine now) and the children are distributed among various families. One girl is sent to a family of very kind musicians, and the grandfather teaches her to play this instrument. When she puts her hands on it, lays her fingers over the edge, and is very calm, it plays some twinkly noise; but then she gets anxious when she remembers she’s been kidnapped, and it makes a burst of horrible noise.

[If you have a subscription to Paramount, you can see the episode here. —Ed.]

This instrument was fascinating for a lot of reasons. It looked so cool with the light-up sides and round shape, and it was only on screen for about four tantalising seconds. Unlike other instruments that were a bit ridiculous, I kept thinking about this one because it was uniquely beautiful, and it seemed like a lot of thought went into it.

I researched the designers of Trek instruments, and this instrument was the only one with a design credit: Andrew Probert. Andrew is a prolific production designer who has worked mainly in science fiction and has been active for decades, designing everything from the bridge of the Enterprise to the DeLorean in Back to the Future. He’s still working, his work is fantastic, and he has a website, so I emailed him and asked what he could tell me about the design process.

He got back to me straight away and said he couldn’t remember anything about it, but he dug out his production sketch for me:

Courtesy of Andrew Probert, https://probert.artstation.com/

The sketch was so gloriously beautiful that I couldn’t resist building it. I had so many questions that you can’t answer, except through bringing it into reality: How would I make it work like it did in the show? How would I make it come alive slowly, and require calmness? How was I going to make that shape? Wait, this thing is supposed to translate moods, what does that even mean? How was I going to achieve the function and presence that this instrument had in the show, and what would I learn?

Building the Aldean Instrument

Translating moods

When I discussed this project with people, the question I got asked most often was “So how are you going to make it read someone’s mind?”

While the instrument doesn’t read minds, the idea of translating moods gave me pause and eventually led me to think of affective computing, an area of computing that was originated by a woman named—brace yourself—Rosalind Picard. Picard says that affective computing refers to computing that relates to, arises from, or deliberately impacts emotions.

Affective computing considers two variable and intersecting factors: Arousal (on a scale of “inactive” to “active”), and valence (on a scale from “unpleasant” to “pleasant”). A lot of research has been done on how various emotions fall into this two-dimensional space, and how emotional states can be inferred by sensing these two factors.

Image by Patricia Bota, 2019
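As a toy illustration of that two-dimensional space (mine, not from Picard’s work or the image above), a single valence/arousal reading can be classified by quadrant. The labels and the zero thresholds here are illustrative assumptions:

```python
# Toy sketch of the valence/arousal (circumplex) space.
# Quadrant labels and thresholds are illustrative assumptions,
# not taken from Picard's affective-computing work.

def quadrant(valence: float, arousal: float) -> str:
    """valence and arousal each range from -1.0 (unpleasant /
    inactive) to +1.0 (pleasant / active)."""
    if arousal >= 0:
        return "excited" if valence >= 0 else "distressed"
    return "calm" if valence >= 0 else "depressed"

# The little girl in the episode moves from pleasant-and-inactive...
print(quadrant(0.8, -0.5))   # calm
# ...to unpleasant-and-active when she remembers she's been kidnapped.
print(quadrant(-0.7, 0.9))   # distressed
```

The point is just that two continuous axes, coarsely thresholded, already distinguish the emotional states the scene needs.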

I realised that, to make this instrument work the way it did in the show, the valence/arousal state it needed to sense could be much simpler. In the show, the little girl is calm (and the instrument plays some sparkly sound), and then she’s not (and the instrument emits a burst of noise). If the instrument just sensed arousal through how much it was being moved and valence through how hard it was being gripped, that would create an interaction space that still has a lot of possibility.

Playing the instrument requires calmness, and I could sense how much the player was moving around with an accelerometer, by calculating quantity of motion. If the instrument was moved suddenly or violently, it could make a burst of noise. For valence—pleasantness to unpleasantness—I could sense how hard the person was gripping the instrument using a Trill Bar sensor. The Trill Bar can sense up to five individual touches, as well as the size of those touches (in other words, how hard those fingers are pressing).

Both the touch sensing and the accelerometer data would be processed by a Bela Mini, a tiny but powerful computer that could process the sensor data, as well as provide the audio playback.
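The Bela is typically programmed in C++ (or Pure Data), but the gist of the quantity-of-motion gating can be sketched in a few lines of Python. This is an illustrative reconstruction rather than the actual build code; the window shape and `CALM_THRESHOLD` value are assumed tuning parameters:

```python
# Illustrative sketch of quantity-of-motion gating, not the actual
# Bela C++ code. The threshold is an assumed tuning value.

def quantity_of_motion(samples):
    """Average frame-to-frame change across a window of 3-axis
    accelerometer readings: a simple activity measure."""
    total = 0.0
    for prev, curr in zip(samples, samples[1:]):
        total += sum(abs(c - p) for p, c in zip(prev, curr))
    return total / max(len(samples) - 1, 1)

CALM_THRESHOLD = 0.05  # assumed; would be tuned by hand on the device

def instrument_state(samples):
    """Calm enough to play, or a burst of noise?"""
    if quantity_of_motion(samples) < CALM_THRESHOLD:
        return "play"
    return "noise"

still = [(0.0, 0.0, 1.0)] * 10          # resting on gravity, no movement
print(instrument_state(still))           # play

shaken = [(0.0, 0.0, 1.0), (1.0, 0.0, 1.0)] * 5  # violent movement
print(instrument_state(shaken))          # noise
```

Note that “calm” here doesn’t mean perfectly still, just below a threshold of chaos, which matches how the instrument behaves in the episode.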

Making the body

I got to work first with the body of the instrument. I often prototype 3D shapes using layers of paper that are laser cut and sandwiched together, as it allows for a gradual, hands-on process that allows adjustments throughout. After a few days with a laser cutter and some cut and paste circuitry, I had something that lit up that I could attach the sensing system to.

Putting it together

I attached the Bela Mini to the underside of the instrument body, and embedded the Trill Bar sensor on the underside of the hand grip, so I could sense when someone’s hand was on the instrument. 

As I set out to recreate how the instrument looked and sounded in the show, I wanted to make a faithful reproduction of the sound design, despite the sound design being pretty basic.

The sound is a four-part major chord harmony. I recreated the sound in Ableton Live, with each part of the harmony as a separate sample. I also made a burst of noise. 

When the instrument is being held gently and there are no sudden movements, it can play; this doesn’t mean stillness, just a lack of chaos. As the player places their fingers over the instrument’s edge, each of their four fingers is sensed and triggers one part of the harmony. The harder that finger presses, the louder that voice is.
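That finger-to-voice mapping is simple in code. Here is a hedged sketch (illustrative Python rather than the real Bela code; the normalization constant is an assumption):

```python
# Illustrative sketch of mapping finger touches to harmony voices,
# not the actual Bela/Trill code. max_size is an assumed calibration.

def voice_gains(touch_sizes, max_size=1.0):
    """touch_sizes: per-finger touch sizes reported by a capacitive
    sensor (the Trill Bar reports a size for each touch, i.e. how
    hard that finger presses). Returns a gain in 0..1 for each of
    the four harmony voices; missing fingers are silent."""
    gains = []
    for voice in range(4):
        size = touch_sizes[voice] if voice < len(touch_sizes) else 0.0
        # Normalize pressure to a 0..1 gain, clamped at both ends.
        gains.append(min(max(size / max_size, 0.0), 1.0))
    return gains

print(voice_gains([0.5, 1.0]))  # [0.5, 1.0, 0.0, 0.0]
```

Each gain would then scale one of the four chord samples during playback, so chord voicing emerges directly from how the hand rests on the edge.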

There’s a demo video of me playing it, above.

Reflections on the process

This process was just as interesting as I suspected, for a number of reasons.

Firstly, de-emphasising technology in the process of making a technological object presented a fresh way of thinking. Instead of worrying about what I could add, whether the interaction was enough, or what other sensors I had access to (and thereby making the design a product of those technical decisions), I was able instead to be led by the material and object factors in this design process. This is an inverse of what usually happens, and I certainly am going to consciously invert this process more often from now on.

Secondly, thinking about what this instrument needed to do, say, and mean, and extracting the technological factors from there, made the technical aspects much simpler. I found myself working artistic muscles that aren’t always active in designing technology, because there’s often some kind of pressure, real or imagined, to make the technical aspects more complex. In this situation, the most important thing was supporting what this was in the show: an object that told a story. When I thought along those lines, the two axes of sensing were an obvious and refreshingly simple direction to take.

Third, one of the difficult things about designing instruments is that, thanks to tiny and powerful computers, they can sound like anything you can imagine. There are no size limitations for sound, no physical bodies to resonate, no material factors that affect the acoustic physics that create a noise. This freedom is often overwhelming, and it’s hard to make sound design choices that make sense. However, because I was working backwards from how this instrument was presented in the plot of the episode, I had something to attach these decisions to. I recreated the show’s simplistic sound design, but I’ve since designed sound worlds for it that support the calm, gentle, but very much alive nature the Aldean instrument would have when I imagine it played in its normal context.

Not only physically recreating the shape of an instrument from Star Trek, but making it function as an instrument, showed me that bringing imaginary things into reality offers the creator a fresh perspective, whether designing fantastical or earthly interfaces.

3D interfaces: Observations and Reflections

So what is a 3D interface?

These examples, although fictional, demonstrate that “3D” can be used in different ways.

In Jurassic Park and Hackers, 3D graphics are used to create a richer display with more information density, though it is not photorealistic. The Jurassic Park file browser is primarily a symbolic 2D representation of the file system hierarchy, projected onto a perspective ground plane to make more elements visible at once. The third dimension is used to indicate the number of sub-elements or their size. In Hackers, the City of Text towers most likely represent the actual contents of each physical disk drive in the corresponding real-world location, and the pulses and colors indicate levels of activity or threat.

The Corridor in Disclosure, and its VirtuGood 6500 close copy in Community, instead create a more photorealistic virtual world. The file system becomes a building or landscape, and the users are embodied within the virtual world as avatars. Like the pre-computer memory palace, this should take advantage of the human ability to remember and navigate our way around. But The Corridor blows it by putting all the files within one room, and representing them as sheets of paper within identical filing cabinets. Walking through the 3D architecture becomes a pretty but time-wasting diversion.

I’m personally disappointed not to find any true computer memory palaces, whether fictional or real. As mentioned in the introduction, an essential characteristic of the memory palace is that each item be stored in a unique location, visually distinct from any other. None of the 3D file systems I’ve been able to find do this, instead using generic icons throughout. Computers are actually quite good at creating almost infinite variations in appearance, e.g. fractals in 2D and various CGI landscapes and underwater environments in 3D. A computer memory palace would at least be more interesting to look at.
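To make that last point concrete, here is a small sketch (my own illustration, not any real file browser) of how a computer memory palace could give every file a stable, visually distinct identity simply by hashing its path into display parameters:

```python
# Sketch of the "computer memory palace" idea: derive stable,
# visually distinct parameters for each file from a hash of its
# path, so every item always looks and sits the same way, and no
# two items look alike. Purely illustrative; the parameter ranges
# are arbitrary choices.
import hashlib

def visual_identity(path: str) -> dict:
    """Map a file path to stable pseudo-random visual parameters:
    a hue (0-359), a shape index (0-7), and a 3D grid position."""
    digest = hashlib.sha256(path.encode("utf-8")).digest()
    hue = int.from_bytes(digest[0:2], "big") % 360
    shape = digest[2] % 8
    position = tuple(b % 16 for b in digest[3:6])
    return {"hue": hue, "shape": shape, "position": position}

# The same file always renders identically across sessions...
print(visual_identity("/home/user/report.txt"))
# ...while a different file gets a different look and location.
print(visual_identity("/home/user/other.txt"))
```

No rendering engine required to see the principle: distinctness and stability fall out of the hash, which is exactly the property the loci method relies on.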

Where are they today?

Since the 1990s the 3D file browser has seemingly faded away, both in reality and in film/TV. Let’s (briefly) think about why.

The SGI 3D file browser shown in Jurassic Park was not the only one to be released as a real piece of software. Although personal computers could easily run such a 3D file browser by the year 2000, and mobile phones a few years later, the systems we actually use have remained two-dimensional. The only widespread use of 3D spatial organisation that I’m aware of is the Apple Time Machine backup software, which uses distance from the viewer to represent increasing age. It’s a linear sequence of 2D desktops rather than allowing true three-dimensional movement in any direction. Even native 3D systems like the Oculus Quest present the user with a 2D GUI wrapped around them in a cylinder.

We don’t have our files arranged into 3D buildings or worlds, but there have been other developments since the first 2D file browsers. Keyword search is now built into most GUI desktops. Photo collections can be viewed by timeline, or by geographical location; and music collections arranged by genre, artist, or album. So one likely reason why we don’t have real world 3D file browsers is that in themselves they don’t provide enough of an advantage over the existing 2D GUIs to make changing worthwhile.

User interfaces in film and TV are not constrained by reality or practicality, so their absence must be due to other reasons. Sometimes real-world interface trends affect what we see on the screen, for instance the replacement of command-line interfaces by graphical ones, but for file browsing we’re still using the 2D GUI browsers from the 1990s. And it’s not because of technical difficulty or expense, because we’ve seen that 1990s feature-film 3D effects can now be created on the budget of a sitcom episode.

An example is the 2008 film Iron Man, already mentioned for using a 3D trashcan within Tony Stark’s CAD software system. Later in the film, Pepper needs to copy some files from the corporate PC of evil executive Obadiah Stane. As in the earlier films covered in this review, Stark Industries is portrayed as an advanced technology company, so this PC also has a custom GUI created for the film. Here, though, there is only a very slight use of 3D to arrange flat file icons in order; otherwise it closely resembles existing 2D desktops. The filmmakers could have inserted a 3D file browser, perhaps with volumetric projection to match Tony’s 3D CAD system, but chose not to.

Pepper selects a folder in the text list at left and it is also highlighted in the graphical list of overlaid translucent icons at right. Iron Man (2008)

Copying computer files (or, more dramatically, “the data”) still happens in science fiction and near-future film settings, but it has also become more common in everyday life with the spread of personal computers and now smartphones worldwide. In my opinion, this is the most likely reason why we don’t see 3D and VR file browsers any more: we the audience know how to copy files and search for them, and won’t be impressed by attempts to make it “high tech” with fanciful user interfaces. File systems and browsers have become, well, boring. So we can look back on these cinematic dalliances with 3D file management fondly, but recognize them as something we tried for a while, learned from, and eventually put down.