Fritzes 2026 bonus award: Best Assistant(s)

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition. Best Assistant is a special award that I’m giving for the first time.

Ok but why now? Well, in March of this year I published a new non-fiction book about the design of technology that assists people doing things (as opposed to doing stuff for them). It’s called Designing Assistant Technology: AI That Makes People Smarter. In the book I lay out a framework for categorizing assistant interactions, and describe the risks and mitigations of having an assistant in the mix. I daresay it’s not only valuable for design, but for scriptwriters and futurists as well. If that intrigues you, look for a discount code near the end of this article.

Anyway, it gave me the idea to select the movie with the best examples of Assistants.

The 2026 Award for Best Assistants: M3gan 2.0

I know, I’m as surprised as you are.

The first movie, while smarter than I expected, seemed to be a horror flick that was using AI as set dressing. It did get a shout-out in the Fritzes 2024 for best HUD, but as I recall, its unbounded atomic optimization was just another way to frame it as a ruthless, efficient killer. But this second one seems to take the theme more seriously, and the scriptwriters did their homework.

A colorful diagram featuring a red loop labeled with the words 'think', 'reflect', 'do', 'see', 'perceive', 'plan', and 'know', alongside a blue mountain icon, representing a cyclical process of action and reflection.

In Part II of the book, I build on the see-think-do loop (that is core to interaction design) to identify the Five Universal Assists. These are the universal, exhaustive set of categories by which technology can assist users: Perceive, Know, Plan, Perform, and Reflect. And to my surprise, when you look closely, there are examples of all five of the universal assists in M3gan 2.0, more than in any other film in 2025.

Note: M3gan jumps bodies many times over the course of the movie, so you’ll see her described many times with the same name, but with vastly different appearances in the screenshots.

Perceive

In this assist, the tech helps users perceive signal amidst noise.

Early in the film, Cady discovers that the source code of Better Bionix is being hacked. When everyone comes over to see what’s on her screen, Tess says, “Oh, Jesus. She’s right. There’s stray commands all over the source code.” The screen we see doesn’t ask them (or us) to try to detect which of the dozens of lines on screen are suspect. Those lines are colored red to contrast greatly with the screen-green, and in case you’re colorblind, they’re indented as well.
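
That redundant encoding is the Perceive assist in miniature: the system does the detecting, then renders its findings over more than one visual channel so no single channel is a point of failure. A minimal sketch of the idea in Python, with invented patterns standing in for whatever intrusion heuristics Better Bionix would actually run:

import re

# Hypothetical heuristics; real intrusion detection would be far richer.
SUSPECT_PATTERNS = [r"\beval\(", r"\bexec\(", r"base64\s+-d"]

RED, GREEN, RESET = "\033[31m", "\033[32m", "\033[0m"

def render_source(source: str) -> str:
    """Mark suspect lines in red AND indent them, so the cue survives colorblindness."""
    rendered = []
    for line in source.splitlines():
        if any(re.search(p, line) for p in SUSPECT_PATTERNS):
            rendered.append(f"{RED}    {line}{RESET}")  # redundant cues: color + indent
        else:
            rendered.append(f"{GREEN}{line}{RESET}")
    return "\n".join(rendered)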

A close-up view of a person's head and shoulders in front of a computer monitor displaying code and technical diagrams, suggesting a programming or technical task.

You might think that M3gan’s alerting Gemma to the FBI home invasion is an example of perceive, but Gemma was sleeping when the alert came. In that context, M3gan is acting more as an agent. (More on that below.)

In act 2, Gemma asks M3gan to increase the audio of two conversants at a noisy party, and that might as well be a canonical example. (And the first time she does it, M3gan substitutes audio in a very snarky way, reminding the audience that in a super-AI-mediated world, you cannot implicitly trust the media it controls. It’s a nod to over-reliance, another theme from Part III of the book.)

A group of professionals engaged in a serious discussion, with visual graphics overlay indicating audio enhancement and data analysis.

Know

In this assist, the tech helps users understand the meaning of what they’ve perceived, either in shallow ways such as names and categories, or very deeply.

HUDs have this built into the trope, and there are plenty of HUDs throughout.

But also, when beginning their joint hunt for AMELIA, M3gan explains that every battery Altwave (the villain corporation in the film) makes has a remote-controllable kill switch, giving meaning to what Gemma sees in the file.

When infiltrating Altwave, M3gan(toy) explains why AMELIA is there as well: she seeks to control Altwave’s cloud servers, which serve half of North America. That control would enable AMELIA to disable the economy, threatening “societal collapse in 10 to 12 working days”.

A high-tech computer screen displaying a map of the United States with data points connected by lines, overlaid with programming code.

Plan

In this assist, it helps users plan their course of action, tactically or strategically.

When M3gan comes out of hiding and presents a deal to Gemma, she explains that she’s run a thousand simulations, and if they don’t team up, more people die than if they do. M3gan asks, “Who is the real killer in that situation?” Not having much of a choice, Gemma agrees.

A woman with long hair and a bow tie stands in front of a textured brick wall, featuring a ghostly or ethereal effect.

A key part of the planning assist is helping users know what the best course of action is.

Perform

In this assist, the tech helps users perform some task.

One of the first scenes in the film has Tess and Cole demonstrating an exosuit. In their pitch they explain to the potential investor that its purpose is to help laborers avoid fatigue while performing physical tasks. To demonstrate, Cole lifts huge concrete blocks without showing any signs of exertion.

A few beats later, a slimy Elon Musk stand-in demonstrates how his neural chip helps him stand, though he is ordinarily bound to his wheelchair.

In the climax, M3gan stows away on a neural chip forcibly implanted in Gemma. When Gemma dons an exosuit, the AI helps her defeat many goons in hand-to-hand combat. It’s arguably acting as an agent here, since Gemma isn’t trying to build those skills. (Similarly, when Gemma gets knocked unconscious, M3gan controls the exosuit to animate her body anyway, something we also see in Section 31, but more on this example in a later post.)

Reflect

In this assist—the most abstract of them—the technology helps users reflect on things to turn experience into knowledge, or to question their goals and future tactics.

There’s a lot less of this here, just like there is in the real world. But we see some of it. When Cady asks M3gan(half-formed) how she can feel anything, M3gan replies, “Can you explain why you feel things?” It’s rhetorical in context, but exactly the sort of thing that a reflection assistant might ask.

A close-up of a vintage robotic figure with expressive features and tangled wires, set against a dark, atmospheric background.

When Gemma is spiraling about her parenting in the basement, M3gan(souped up) takes a moment to share counterexamples. “I saw you wake up every day at 4:00 A.M., staring at the ceiling contemplating what the future holds for her…I watched you make homemade lunches with fresh-baked sourdough…I watched you help her with her homework, even though it always ended in a fight…Gemma, it’s not a failure to feel guilt or that you’re not enough. It’s part of the job.” It’s not the best fit for the definition of this assist I give in the book, but it’s the closest thing in the movie and the closest thing in my survey of the year’s films.

A humanoid robot with long hair and large, expressive eyes sitting next to a woman in a dark environment. The robot's outfit has a shiny, futuristic design, and a computer screen with data is visible in the background.

Also agents

There are also many examples where M3gan(AI) acts as an agent on people’s behalf, but that was the subject of my last book, so I’ll skip getting into those examples. But as you watch the movie, keep an eye out for additional shout-outs to the paperclip thought experiment (a metaphor for the threat of instrumental convergence), allusions to the Xerox WorkCentre scanner bug, and of course super AI as an existential threat. The whole plot can be seen as an example of Bostrom’s a priori argument that multiple super AIs are the most stable scenario. All this is why I say the writers seem to have done their homework.

I’m a lot less fond of how the guy who wants to regulate/eliminate AI is painted as the bad guy, but having positioned M3gan as sentient and the antihero of the film, I’m not sure what else they could do. Still, I wish it didn’t valorize AI as equivalent to humans despite all of that. We have enough LeMoinian panic about large language models as it is.


Anyway, congratulations to M3gan 2.0 for showing so many examples of assistants throughout. If you’re interested in getting the book, you can get 20% off if you purchase from Rosenfeldmedia.com and use the code “scifi26” during checkout. Use this power only for good.

And let me know in comments if you think of other examples of assistants across the year.

IMDB: https://www.imdb.com/title/tt26342662/

Next up: A Big Screen Label Roundup (currently scheduled for 8 May 2026)

Sci-fi Spacesuits: Interface Locations

A major concern of the design of spacesuits is basic usability and ergonomics. Given the heavy material needed in the suit for protection and the fact that the user is wearing a helmet, where does a designer put an interface so that it is usable?

Chest panels

Chest panels are those that require the wearer only to look down to manipulate them. These are in easy range of motion for the wearer’s hands. The main problem with this location is that there is a hard trade-off between visibility and bulkiness.

Arm panels

Arm panels are those that are—brace yourself—mounted to the forearm. This placement is within easy reach, but it does mean that the arm on which the panel sits cannot be otherwise engaged, and it seems like it would be prone to accidental activation. Keeping components small and thin enough to be unobtrusive is a greater technological challenge here than with a chest panel. It also poses some interface challenges in squeezing information and controls into a very small, horizontal format. The survey shows only three arm panels.

The first is the numerical panel seen in 2001: A Space Odyssey (thanks for the catch, Josh!). It provides discrete and easy input, but no feedback. There are inter-button ridges to kind of prevent accidental activation, but they’re quite subtle and I’m not sure how effective they’d be.

2001: A Space Odyssey (1968)

The second is an oversimplified control panel seen in Star Trek: First Contact, where the output is simply the unlabeled lights underneath the buttons indicating system status.

The third is the mission computers seen on the forearms of the astronauts in Mission to Mars. These full-color, nonrectangular displays feature rich, graphic mission information in real time, with textual information on the left and graphic information on the right. Input happens via hard buttons located around the periphery.

Side note: One nifty analog interface is the forearm mirror. This isn’t an invention of sci-fi; it actually appears on real-world EVA suits. It costs a lot of propellant or energy to turn a body around in space, but spacewalkers occasionally need to see what’s behind them and the interface on the chest. So spacesuits have mirrors on the forearm to enable a quick view with just arm movement. This was showcased twice in the movie Mission to Mars.

HUDs

The easiest place to see something is directly in front of your eyes, i.e. in a heads-up display, or HUD. HUDs are seen frequently in sci-fi, and increasingly in sci-fi spacesuits as well. One example is in Sunshine. This HUD provides a real-time view of each individual to whom the wearer is talking while out on an EVA, and a real-time visualization of dangerous solar winds.

These particular spacesuits are optimized for protection very close to the sun, and the visor is limited to a transparent band set near eye level. These spacewalkers couldn’t look down to see any interfaces on the suit itself, so the HUD makes a great deal of sense here.

Star Trek: Discovery’s pilot episode included a sequence that found Michael Burnham flying 2000 meters away from the U.S.S. Shenzhou to investigate a mysterious MacGuffin. The HUD helped her with wayfinding, navigating, tracking time before lethal radiation exposure (a biological concern; see the prior post), and even scanning things in her surroundings, most notably a Klingon warrior who appears wearing unfamiliar armor. Reference information sits on the periphery of Michael’s vision, but the augmentations are mapped to her view. (Noting this raises the same issues of binocular parallax seen in the Iron HUD.)

Iron Man’s Mark L armor was able to fly in space, and the Iron HUD came right along with it. Though not designed/built for space, it’s a general AI HUD assisting its spacewalker, so worth including in the sample.

Avengers: Infinity War (2018)

Aside from HUDs, what we see in the survey is similar to what exists in real-world extravehicular mobility units (EMUs), i.e. chest panels and arm panels.

Inputs illustrate paradigms

Physical controls range from the provincial switches and dials on the cigarette-girl foldout control panels of Destination Moon, to the simple and restrained numerical button panel of 2001, to the strangely unlabeled buttons of Star Trek: First Contact’s arm panels (above), to the ham-handed touch screens of Mission to Mars.

Destination Moon (1950)
2001: A Space Odyssey (1968)

As the pictures above reveal, the input panels reflect the familiar technology of the time the movie or television show was created. The 1950s were still rooted in mechanistic paradigms; the late-1960s interfaces were electronic pushbutton; the 2000s had touch screens and miniaturized displays.

Real world interfaces

For comparison and reference, NASA’s EMU has a control panel on the front, called the Display and Control Module (DCM), where most of the controls for the EMU sit.

The image shows that inputs are very different than what we see as inputs in film and television. The controls are large for easy manipulation even with thick gloves, distinct in type and location for confident identification, analog to allow for a minimum of failure points and in-field debugging and maintenance, and well-protected from accidental actuation with guards and deep recesses. The digital display faces up for the convenience of the spacewalker. The interface text is printed backwards so it can be read with the wrist mirror.

The outputs are fairly minimal. They consist of the pressure suit gauge, audio warnings, and the 12-character alphanumeric LCD panel at the top of the DCM. No HUD.

The gauge is mechanical and standard for its type. The audio warnings are a simple warbling tone when something’s awry. The LCD panel provides information about 16 different values that the spacewalker might need, including estimated time of oxygen remaining, actual volume of oxygen remaining, pressure (redundant to the gauge), battery voltage or amperage, and water temperature. To cycle up and down the list, she presses the Mode Selector Switch forward and backward. She can adjust the contrast using the Display Intensity Control potentiometer on the front of the DCM.
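
As an interaction loop, the LCD plus the Mode Selector Switch is about as simple as interfaces get. Here is a sketch of that logic in Python; the value names and numbers are illustrative placeholders, not NASA’s actual list or formats:

VALUES = [
    ("O2 TIME", "27 MIN"),   # estimated time of oxygen remaining
    ("O2 VOL", "1.2 L"),     # actual volume of oxygen remaining
    ("PRESS", "4.3 PSI"),    # redundant to the mechanical gauge
    ("BATT", "16.8 V"),
    ("H2O TEMP", "52 F"),    # ...up to 16 entries in the real unit
]

class DCMDisplay:
    def __init__(self):
        self.index = 0

    def mode_select(self, direction: int) -> None:
        """+1 = switch pressed forward, -1 = pressed backward."""
        self.index = (self.index + direction) % len(VALUES)

    def lcd_line(self) -> str:
        name, value = VALUES[self.index]
        return f"{name} {value}"[:12].ljust(12)  # the 12-character alphanumeric LCD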

A NASA image tweeted in 2019.

The DCMs referenced in the post are from older NASA documents. In more recent images on NASA’s social media, it looks like there have been significant redesigns to the DCM, but so far I haven’t seen details about the new suit’s controls. (Or about how that tiny thing can house all the displays and controls it needs to.)

Agent Ross’ remote piloting

Remote operation appears twice during Black Panther. This post describes the second, in which CIA Agent Ross remote-pilots the Talon in order to chase down cargo airships carrying Killmonger’s war supplies. The prior post describes the first, in which Shuri remotely drives an automobile.

In this sequence, Shuri equips Ross with kimoyo beads and a bone-conducting communication chip, and tells him that he must shoot the cargo ships down before they cross beyond the Wakandan border. As soon as she tosses a remote-control kimoyo bead onto the Talon, Griot announces to Ross in the lab, “Remote piloting system activated,” and creates a piloting seat out of vibranium dust for him. Savvy watchers may wonder at this, since Okoye pilots the thing by meditation and Ross would have no meditation-pilot training, but Shuri explains to him, “I made it American style for you. Get in!” He does, grabs the sparkly black controls, and gets to business.

The most remarkable thing to me about the interface is how seamlessly the Talon can be piloted by vastly different controls. Meditation brain control? Can do. Joystick-and-throttle? Just as can do.

Now, generally, I have a beef with the notion of hyperindividualized UI tailoring—it prevents vital communication across a community of practice (read more about my critique of this goal here)—but in this case, there is zero time for Ross to learn a new interface. So sure, give him a control system with which he feels comfortable to handle this emergency. It makes him feel more at ease.

The mutable nature of the controls tells us that there is a robust interface layer that is interpreting whatever inputs the pilot supplies and applying them to the actuators in the Talon. More on this below. Spoiler: it’s Griot.
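
If I had to guess at the shape of that layer, it is an intent abstraction: every control scheme, meditation or stick-and-throttle, reduces to the same small set of flight intents, which Griot then maps onto the Talon’s actuators. A hypothetical sketch; every name in it is mine, not Marvel’s:

from dataclasses import dataclass

@dataclass
class FlightIntent:
    pitch: float     # -1..1
    roll: float      # -1..1
    throttle: float  # 0..1
    fire: bool

def from_joystick(x: float, y: float, throttle: float, trigger: bool) -> FlightIntent:
    """The 'American style' stick-and-throttle mapping."""
    return FlightIntent(pitch=y, roll=x, throttle=throttle, fire=trigger)

def from_meditation(signal: tuple) -> FlightIntent:
    """Whatever the meditation interface decodes, reduced to the same intent."""
    pitch, roll, throttle, fire = signal
    return FlightIntent(pitch, roll, throttle, fire > 0.5)

def actuate(ship, intent: FlightIntent) -> None:
    ship.set_attitude(intent.pitch, intent.roll)  # hypothetical actuator calls
    ship.set_throttle(intent.throttle)
    if intent.fire:
        ship.fire_weapons()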

Too sparse HUD

The HUD presents a simple circle-in-a-triangle reticle that lights up red when a target is in sights. Otherwise it’s notably empty of augmentation. There’s no tunnel in the sky display to describe the ideal path, or proximity warnings about skyscrapers, or airspeed indicator, or altimeter, or…anything. This seems a glaring omission since we can be certain other “American-style” airships have such things. More on why this might be below, but spoiler: It’s Griot.

What do these controls do, exactly?

I take no joy in gotchas. That said…

  • When Ross launches the Talon, he does so by pulling the right joystick backward.
  • When he shoots down the first cargo ship over Birnin Zana, he pushes the same joystick forward as he pulls the trigger, firing energy weapons.

Why would the same control do both? It’s hard to believe it’s modal. Extradiegetically, this is probably an artifact of actor Martin Freeman just doing what feels dramatic, but for a real-world equivalent I would advise against giving physical controls wholly different modes on the same grip, lest we risk confusing pilots on mission-critical tasks. But spoiler…oh, you know where this is going.

It’s Griot

Diegetically, Shuri is flat-out wrong that Ross is an experienced pilot. But she also knew that it didn’t matter, because her lab has him covered anyway. Griot is an AI with a brain interface, and can read Ross’ intentions, handling all the difficult execution itself.

This would also explain the lack of better HUD augmentation. That absence seems especially egregious considering that the first cargo ship was flying over a crowded city at the time it was being targeted. If Ross had fired in the wrong place, the cargo ship might have crashed into a building, or down to the bustling city street, killing people. But instead, Griot quietly, precisely targets the ship for him, to ensure that it would crash safely in nearby water.

This would also explain how wildly different interfaces can control the Talon with similar efficacy.

A stained-glass image of William of Ockham. A modern blackletter caption reads, “It was always Griot.”

So, Occam’s apology says: yep, it’s Griot.

An AI-wizard did it?

In the post about Shuri’s remote driving, I suggested that Griot was also helping her execute driving behind the scenes. This hearkens back to both the Iron HUD and Doctor Strange’s Cloak of Levitation. It could be that the MCU isn’t really worrying about the details of its enabling technologies, or that this is a brilliant model for our future relationship with technology. Let us feel like heroes, and let the AI manage all the details. I worry that I’m building myself into a wizard-did-it pattern, inserting AI for wizard. Maybe that’s worth another post all its own.

But there is one other thing about Ross’ interface worth noting.

The sonic overload

When the last of the cargo ships is nearly at the border, Ross reports to Shuri that he can’t chase it, because Killmonger-loyal dragon flyers have “got me trapped with some kind of cables.” She instructs him to, “Make an X with your arms!” He does. A wing-like display appears around him, confirming its readiness.

Then she shouts, “Now break it!” He does, and the Talon goes boom, shaking off the enemy ships and allowing Ross to continue his pursuit.

First, what a great gesture for this function. Ordinarily, Wakandans are the ones piloting the Talon, and each of them would be deeply familiar with this gesture, and even prone to think of it when executing a Hail Mary move like this.

Second, when an outsider needed to perform the action, why didn’t she just tell Griot to do it? If there’s an interpretation layer in the system, why not speak directly to that controller? It might be so the human knows how to do it themselves next time, but this is the last cargo ship he’s been tasked with chasing, and there’s little chance of his officially joining the Wakandan air force. The emergency will be over after this instance. Maybe Wakandans have a principle that they should first engage the humans before bringing in the machines, but that’s heavy conjecture.

Third, I have a beef about gestures—there are often zero affordances to tell users what gestures they can do, and what effects those gestures will have. If Shuri were not there to answer Ross’ urgent question, would the mission have just…failed? Seems like a bad design.

How else could he have known he could do this? If Griot is on board, Griot could have mentioned it. But avoiding wizard-did-it solutions, some sort of context-aware display could detect that the ship is tethered to something, and display the gesture on the HUD for him. This violates the principle of letting the humans be the heroes, but would be a critical inclusion in any similar real-world system.
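
That fallback is easy to imagine as a rule: when the ship detects a state whose remedy is gesture-only, and the pilot has no training record for that gesture, put the gesture on the HUD. A sketch, with the states and hints invented for illustration:

# Invented mapping of detected ship states to gesture-only remedies.
GESTURE_REMEDIES = {
    "tethered": ("make an X with your arms, then break it", "sonic overload"),
}

def hud_hint(ship_state: str, pilot_knows: set) -> str | None:
    """Surface a gesture hint only when the pilot hasn't demonstrated it before."""
    if ship_state in GESTURE_REMEDIES and ship_state not in pilot_knows:
        gesture, effect = GESTURE_REMEDIES[ship_state]
        return f"SUGGESTED: {gesture} -> {effect.upper()}"
    return None

print(hud_hint("tethered", pilot_knows=set()))  # an untrained Ross gets the hint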

Any time we have “intuitive” controls that don’t map 1:1 to the thing being controlled, we face similar problems. (We’ve seen the same problems in Sleep Dealer and Lost in Space (1998). Maybe that’s worth its own write-up.) Some controls won’t map to anything. More problematic, there will be functions that don’t have controls. Designers can’t rely on having a human cavalry like Shuri there to save the day, and should take steps to find ways that the system can inform users of how to activate those functions.

Fit to purpose?

I’ve had to presume a lot about this interface. But if those things are correct, then, sure, this mostly makes it possible for Ross, a novice to piloting, to contribute something to the team mission, while upholding the directive that AI Cannot Be Heroes.

If Griot is not secretly driving, and that directive isn’t really a thing, then the HUD needs more work, I can’t diegetically explain the controls, and they need to develop just-in-time suggestions to patch the gap of the mismatched interface.


Black Georgia Matters

Each post in the Black Panther review is followed by actions that you can take to support black lives. As this critical special election is still coming up, this is a repeat of the last one, modified to reflect passed deadlines.

The state flag of Georgia, whose motto clearly violates the doctrine of separation of church and state.
Always on my mind, or at least until July 06.

Despite outrageous, anti-democratic voter suppression by the GOP, for the first time in 28 years, Georgia went blue for the presidential election, verified with two hand recounts. Credit to Stacey Abrams and her team’s years of effort to get out the Georgian—and particularly the powerful black Georgian—vote.

But the story doesn’t end there. Though the Biden/Harris ticket won the election, if the Senate stays majority red, Moscow Mitch McConnell will continue the infuriating obstructionism with which he held back Obama’s efforts in office for eight years. The Republicans will, as they have done before, ensure that nothing gets done.

To start to undo the damage the fascist and racist Trump administration has done, and maybe make some actual progress in the US, we need the Senate majority blue. Georgia is providing that opportunity. Neither of the wretched Republican incumbents got 50% of the vote, resulting in a special runoff election January 5, 2021. If these two seats go to the Democratic challengers, Warnock and Ossoff, it will flip the Senate blue, and the nation can begin to seriously right the sinking ship that is America.

Photograph: Erik S Lesser/EPA

What can you do?

If you live in Georgia, vote blue, of course. You can check your registration status online. You can also help others vote. Important dates to remember, according to the Georgia website:

  • 14 DEC Early voting begins
  • 05 JAN 2021 Final day of voting

Residents can also volunteer to become a canvasser for either of the campaigns, though it’s a tough thing to ask in the middle of the raging pandemic.

The rest of us (yes, even non-American readers) can contribute either to the campaigns directly using the links above, or to Stacey Abrams’ Fair Fight campaign. From the campaign’s web site:

We promote fair elections in Georgia and around the country, encourage voter participation in elections, and educate voters about elections and their voting rights. Fair Fight brings awareness to the public on election reform, advocates for election reform at all levels, and engages in other voter education programs and communications.

We will continue moving the country into the anti-racist future regardless of the runoff, but we can make much, much more progress if we win this election. Please join the efforts as best you can even as you take care of yourself and your loved ones over the holidays. So very much depends on it.

Black Reparations Matter

This is timely, so I’m adding it on as well rather than waiting for the next post: A bill is in the House to set up a commission to examine the institution of slavery and its impact, and to make recommendations for reparations to Congress. If you are an American citizen, please consider sending a message to your congresspeople asking them to support the bill.

Image, uncredited, from the ACLU site. Please contact me if you know the artist.

On this ACLU site you will find a form and suggested wording to help you along.

Luke’s predictive HUD

When Luke is driving Kee and Theo to a boat on the coast, the car’s heads-up display shows him the car’s speed with a translucent red number and speed gauge. There are also two broken, blurry gauges showing unknown information.

Suddenly the road becomes blocked by a flaming car rolled onto the road by a then-unknown gang. In response, an IMPACT warning triangle zooms in several times to warn the driver of the danger, accompanied by a persistent dinging sound.


It commands attention effectively

Props to this attention-commanding signal. Neuroscience tells us that symmetrical expansion like this triggers something called a startle response.  (I first learned this in the awesome and highly recommended book Mind Hacks.) Any time we see symmetrical expansion in our field of vision, within milliseconds our sympathetic nervous system takes over, fixes our attention to that spot, and prompts us to avoid the thing that our brains believe is coming right at us. It all happens way before conscious processing, and that’s a good thing. It’s evolutionarily designed to keep us safe from falling rocks, flying fists, and pouncing tigers, and scenarios like that don’t have time for the relatively slow conscious processes.

Well visualized

The startle response varies in strength depending on several things.

  • The anxiety of the person (an anxious person will react to a slighter signal)
  • The driver’s habituation to the signal
  • The strength of the signal, in this case…
    • Contrast of the shape against its background
    • The speed of the expansion
  • The presence of a prepulse stimulus

We want the signal to be strong enough to grab the attention of a possibly-distracted driver, but not so strong that it causes them to overreact and risk losing control of the car. While anything this critical to safety needs to be thoroughly tested, the size of the IMPACT triangle seems to sit in the golden mean between these two.
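
A toy model makes the tuning problem concrete. Treat each factor above as normalized between 0 and 1; the weights are pure invention, and a real system would derive them from that thorough testing:

def startle_response(anxiety, habituation, contrast, expansion_speed, prepulse):
    """Toy estimate (0-1) of how strongly the driver will startle."""
    stimulus = 0.5 * contrast + 0.5 * expansion_speed
    response = stimulus * (1.0 + 0.5 * anxiety)  # anxious drivers react to slighter signals
    response *= 1.0 - 0.6 * habituation          # habituation dulls the response
    response *= 1.0 - 0.3 * prepulse             # a prepulse stimulus damps startle (prepulse inhibition)
    return max(0.0, min(1.0, response))

# The design goal: choose contrast and expansion speed so the response lands in
# a band that grabs attention without provoking overreaction, say 0.5 to 0.8.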

And while the effect is strongest in the lab with a dark shape expanding over a light background, I suspect given habituation to the moving background of the roadscape and a comparatively static HUD, the sympathetic nervous system would have no problem processing this light-on-dark shape.

Well placed

We only see it in action once, so we don’t know if the placement is dynamic. But it appears to be positioned on the HUD such that it draws Luke’s attention directly to the point in his field of vision where the flaming car is. (It looks offset to us because the camera is positioned in the middle of the back seat rather than the driver’s seat.) This dynamic positioning is great, since it saves the driver critical bits of time. If the signal were fixed, then the driver would have his attention pulled between the IMPACT triangle and the actual thing. Much better to have the display say, “LOOK HERE!”
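
Dynamic placement like this is, at bottom, a projection problem: take the hazard’s position from the car’s sensors and project it into the driver’s view. A minimal pinhole-style sketch, assuming the system knows or can estimate the driver’s eye position:

def hud_position(hazard, eye, focal=1.0):
    """Project a hazard (x right, y up, z forward; meters, car frame)
    into normalized HUD coordinates relative to the driver's eye."""
    dx, dy, dz = (hazard[i] - eye[i] for i in range(3))
    if dz <= 0:
        return None  # behind the driver; fall back to a fixed alert location
    return (focal * dx / dz, focal * dy / dz)

# A flaming car 30 m ahead and 2 m left of the driver's eye:
print(hud_position((-2.0, 0.0, 30.0), (0.0, 0.0, 0.0)))  # ≈ (-0.067, 0.0)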

Readers of the book will recall this nuance from the lesson from Chapter 8, Augment the Periphery of Vision: “Objects should be placed at the edge of the user’s view when they are not needed, and adjacent to the locus of attention when they are.”

Improvements

There are a few improvements that could be made.

  • It could synchronize the audio to the visual. The dinging is dissociated from the motion of the triangle, and even sounds a bit like a seat belt warning rather than something trying to warn you of a possible, life-threatening collision. Having the sound and visual in sync would strengthen the signal. It could even increase volume with the probability and severity of impact.
  • It could increase the strength of the audio signal by suppressing competing audio, by pausing any audio entertainment and even canceling ambient sounds.
  • It could predict farther into the future. The triangle only appears once the flaming car actually stops in the road a few meters ahead. But there is clearly a burning car rolling down to the road for seconds before that. We see it. The passengers see it. Better sensors and prediction models would have drawn Luke’s attention to the problem earlier and helped him react sooner. (See the sketch after this list.)
  • It could also know when the driver is actually focused on the problem and then fade the signal to the periphery so that it does not cover up any vital visual information. It can then fade completely when the risk has passed.
  • An even smarter system might be able to adjust the strength of the signal based on real-time variables, like the anxiety of the driver, his or her current level of distraction, ambient noise and light, and of course the degree of risk (a tumbleweed vs. a small child on the road).
  • It could of course go full agentive and apply the brakes or swerve if the driver fails to take appropriate action in time.
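
The prediction point is mostly arithmetic once the sensors report range and closing speed. A minimal time-to-collision sketch, with an invented warning threshold:

def time_to_collision(range_m: float, closing_speed_mps: float) -> float:
    """Seconds until impact if nothing changes; infinity if not closing."""
    if closing_speed_mps <= 0:
        return float("inf")
    return range_m / closing_speed_mps

# A burning car 60 m out while closing at 20 m/s leaves 3 seconds: time for an
# earlier, gentler warning than waiting until it stops a few meters ahead.
WARNING_HORIZON_S = 5.0  # illustrative threshold
if time_to_collision(60.0, 20.0) < WARNING_HORIZON_S:
    print("IMPACT")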

Despite these improvements, I believe Luke’s HUD to be a well-designed feature that gets underplayed in the drama and disorientation of the scene.


The Iron Man HUD is an impossible thing

In the prior post we looked at the HUD display from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out to be not what it seems.

The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.


When, in the first film, Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display,” we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we were inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean is that even in the diegesis, the Marvel Cinematic Universe, this is not something that could be seen. Let’s move through the reasons why.

Not a mini-TARDIS

First, it looks like we’re in some TARDIS-like space where the helmet extends so far we can fit in it, or a camera can, about a meter from his face. But of course the helmet isn’t huge on the inside. Tony hasn’t broken those laws of physics. The helmet is helmet-sized on the inside.

Not a volumetric projection


Then there’s the issue of the huge display. It looks like a volumetric projection, like what R2-D2 can project, but that can’t be true either. The projection would extend way beyond the boundaries of the helmet-sized helmet, which, as you can see below, is a non-starter. So it’s not a volumetric projection.

So, retinal projection

Then what is the display technology? Given the size constraints, retinal projection makes the most sense, but if we could make the helmet go invisible, it would look like Tony was having diffuse LASIK, or maybe playing The Game from Star Trek: The Next Generation.

Let’s face it, this is not the worst thing you’ve caught me doing.

Representation of the projections?

So, OK, fine. Maybe what we see is what’s being projected: the separate stereoscopic images cast onto individual retinas. Nope. Then we would see two similar, slightly offset images, like in older anaglyph stereoscopy, but more confusing, because there wouldn’t be a color difference, just double vision.

Let’s pray that poor Tony doesn’t have to wear anaglyph glasses in there.
(Props to Deviantartist homerjk85 for the awesome conversion.)

Nope.

So what we are left with is that we are not seeing anything in the real world of the diegesis. This 2nd-person view is strictly a narrative conceit: a projection of what Tony’s brain puts together from the split views of the stereographic projection into a cohesive whole, i.e. retinally-projected augmentation of his eyesight. It’s a testament to the talent of the filmmakers that this HUD, as narratively constructed as it is, just works. We think it’s something real. We instantly get it. But…

The damned multilayering


But even that notion—that this HUD is what Tony experiences, perceptually—is troubled by the multilayering in the HUD. Information in the HUD is typically displayed across multiple layers. See the three squares in the left side of this screen shot for an example. So many problems with this.

If this is meant to be what he perceives, then we immediately have trouble with parallax. Parallax is the way that objects shift against background objects when seen from two different viewpoints, like, say, Tony’s two eyes. If Tony perceives these layers through both eyes, i.e. stereoscopically, as an actual set of three layers floating in front of his face, then those graphics shift around depending on which eye JARVIS is optimizing for. One eye might see it beautifully, but then the other eye is wholly confounded. In the worst possible situation, neither eye is really satisfied. See the Wikipedia article on parallax for a meta-example.

If, on the other hand, it’s just one eye that’s seeing these layers, then the layering is utterly pointless, because a single eye has no depth perception, and these would just appear as a single layer. It would have no benefit for Tony and only be there for our gee-whizification.
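
To put rough numbers on the problem: under the small-angle approximation, the vergence angle to a layer at depth d is about b/d, where b is the interpupillary distance. A worked example in LaTeX, assuming a typical b of 65 mm and layers floating at 0.5 m and 0.6 m (both depths are my assumptions; the film never specifies):

\theta_i \approx \frac{b}{d_i}, \qquad
\Delta\theta = b\left(\frac{1}{d_1} - \frac{1}{d_2}\right)
             = 0.065\left(\frac{1}{0.5} - \frac{1}{0.6}\right)
             \approx 0.022\ \text{rad} \approx 1.2^{\circ}

That is a large angular offset for graphics meant to read as one coherent stack, well beyond the narrow band of disparities the visual system fuses comfortably, which is exactly the confusion described above.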

Our choices are: Terrible or Pointless

So, it’s either a terrible, confusing display for Tony (which I can’t imagine, given how genius of a technologist he is meant to be), or this view is not even a representation of what Tony sees, but a strictly narrative construction. And we can’t say for sure which it is because this multilayering is never seen in the first-person views. In those screens it’s been reasonably cleaned up to be intelligible. Note the difference between the car views below in the first- and second-person shots.

Layers include end views and a side view.
Only the side view is shown; the end views are absent.

Then, the damned head movement

Note also that in the 2nd-person view, Tony is very expressive, moving his head around a lot in response to the HUD. But looking at him from the outside, Iron Man’s head doesn’t swivel around except to look at things in the real world. Is the interface requiring him to move his head, or is he just a drama queen? If it requires him, that’s terrible: it would move his head away from important things in the real world to focus on something in this virtual world. If he’s a drama queen, fine, nothing to do about that, and glad that JARVIS can accommodate. In any case, when we see him in the helmet outside the TARDIS-HUD, he is not swiveling his head apropos of nothing, which reinforces the notion that this is strictly a cinematic conceit. (Hat tip to Jonathan Korman for sharing this observation with me.)

So…

So ultimately what I’m saying here is that this is an impossible thing, and for being impossible, we should not just freak out about how cool it is and declare it the necessary and good future. It has major problems, even as gorgeous and exciting as it is. Hey, no surprise, nobody has forgotten that it’s a movie, but recognize that what you thought was just maybe exaggerated was in fact a bald-faced impossibility.

Next up in the Iron HUD series: Iron Man forces us to get clear about some terms.

The Bubbleship Cockpit


Jack’s main vehicle on the post-war Earth is the Bubbleship craft. It is a two-seat combination of helicopter and light jet. The center joystick handles most flight controls, while a left-hand throttle takes the place of a helicopter’s collective. A series of switches above Jack’s seat provide basic power and start-up commands to the Bubbleship’s systems.


Jack first provides voice authentication to the Bubbleship (the same code used to confirm his identity to the Drones), then he moves to activate the switches above his head.


The switches are large, all move the same direction for startup, and are labeled. Two of the controls are color-coded red, and Jack switches the red control last. We never see the round knobs in use; they could be circuit breakers for the major systems. All are positioned nicely to prevent accidental use. Overall, it is set up almost exactly like a modern-day helicopter, with two distinct additions: a cockpit-wide HUD, and swivel controls. While not technical, the cockpit also has a little Elvis bobblehead—whose name is Bob—that keeps Jack company.


The main HUD provides standard information that Jack needs to pilot, even in zero visibility. It displays thrust output of his engines, an artificial horizon, altitude, and other indicators (shown in the above image and labeled in the image below).


The HUD is displayed on the front glass, and is tied to the Bubbleship’s main power. When the power goes out, the HUD goes out. It is not wired to a separate backup circuit. Fortunately, Jack has a physical gimbal that remains operational even when the power is out.


Another major addition is the swivel seating. Using a dedicated control on his joystick, Jack can move his seat around to get a better view as he is flying, without redirecting the Bubbleship in that direction.


It is not clear based on the evidence shown whether Jack has preset seat positions (one click per certain degree of rotation), or whether he is able to hold down the control and rotate the seat based on the click duration. Jack is very familiar with this interface and piloting scheme. Even in an emergency situation when the Bubbleship’s power goes out and he loses control, Jack does not panic, and goes through his emergency checklist. We see later that the Bubbleship does have an eject system (a large red handle in the top of the command pod) that detaches the entire passenger compartment and deploys a parachute. Jack decides that he does not need this rescue system and can pilot his way to safety.

Click-to-swivel

We see that Jack is often the only person in the Bubbleship, and that he often uses the seat swivel to get a better view of what he needs to survey. Piloting is a high-concentration activity, with a large amount of muscle memory training. A pilot can be expected to know how his (or her) craft will react to specific inputs at specific times. Moving the seat around gives Jack a better view of his surroundings, but could interrupt the muscle memory he uses for his daily piloting and emergency maneuvering. Given the muscle memory requirement, Jack is probably able to control the swivel based on a number of clicks, not a duration. Specific swivel points have several advantages (see the sketch after this list):

  • The pilot can memorize control relationships for each swivel spot
  • Jack can click the position he wants, then forget about that control while he continues piloting
  • Less cognitive load to learn and operate
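
Here is what click-to-swivel might look like as logic, with the detent angles invented for illustration:

# One click per 45-degree detent, so each position has a memorizable
# control relationship ("two clicks right = facing starboard," etc.).
DETENTS_DEG = [0, 45, 90, 135, 180, -135, -90, -45]

class SwivelSeat:
    def __init__(self):
        self.position = 0  # index into DETENTS_DEG; 0 = facing forward

    def click(self, clicks: int) -> int:
        """Positive = clockwise clicks, negative = counterclockwise.
        Returns the resulting seat angle in degrees from forward."""
        self.position = (self.position + clicks) % len(DETENTS_DEG)
        return DETENTS_DEG[self.position]

seat = SwivelSeat()
print(seat.click(2))   # 90: facing right
print(seat.click(-2))  # 0: set it, forget it, keep flying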

Automatic Swivel

Click-to-swivel has advantages, but it is not the most advantageous control scheme for the level of technology shown. We know that Jack has destinations in mind when he is traveling, or Vika has given him a waypoint. We also know that the Drones have a low level intelligence capable of free flight and complicated maneuvers. Jack could easily activate an autopilot mode (straight and level, emergency maneuvers, return to base, go to the secret cabin), then he could ‘free swivel’. This free swivel movement would be based on his eyes and head movement, with the seat merely following where Jack wants to look. Otherwise, the Bubbleship could follow his head movements for regular flight inputs, augmented by the control stick inputs.


The Bubbleship would need some intelligence then to know the difference between when Jack actually wants to go somewhere, and when he is merely looking at his dashboard. Artificial intelligence is a given here. A good method would be focus tracking. When the Bubbleship tracks Jack’s focal point, it would know whether he’s looking at a spot on the horizon, or whether he’s looking at a point inside the cockpit. It would also be an effective way to focus the Bubbleship’s weapons pod for convergence—the guns would always meet at the point Jack was focused on, instead of firing wildly based on his joystick inputs.
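
Focus tracking reduces to a depth test plus a policy. A sketch, assuming the Bubbleship can estimate Jack’s binocular focal depth; the cockpit radius is my invention:

COCKPIT_RADIUS_M = 1.5  # assumed envelope of the dashboard and controls

def classify_gaze(focal_depth_m: float) -> str:
    """Inside the cockpit envelope = a dashboard glance, not a flight input."""
    return "cockpit" if focal_depth_m < COCKPIT_RADIUS_M else "world"

def on_gaze(focal_depth_m: float, bearing):
    if classify_gaze(focal_depth_m) == "world":
        # Converge the weapons pod on the focal point, not raw joystick aim.
        return ("converge_weapons", bearing)
    return ("ignore", None)  # looking at instruments: take no flight action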

Highly Refined

Modern day flight controls are highly refined tools with a well practiced group of users and a solid history of training programs. It makes sense to pull from this history when designing for a new flight machine, especially when its controls map so well to modern day equipment. The largest improvements can come from automation, especially when there is a solidly tested machine intelligence able to augment pilot intentions (see: the Drone, to be published later). By taking away the monotonous tasks from the pilot, and allowing them to focus on the difficult decisions, a machine can make the pilot’s life easier and safer.

Prometheus’ Flight instrument panels

There are a great many interfaces seen on the bridge of the Prometheus, and like most flight instrument panels in sci-fi, they are largely about storytelling and less about use.


The captain of the Prometheus is also a pilot, and has a captain’s chair with a heads-up display. This HUD has real-time wireframe displays of the spaceship in plan view, presumably for glanceable damage feedback.


He also can stand afore at a waist-high panel that overlooks the ship’s view ports. This panel has a main screen in the center, grouped arrays of backlit keys to either side, a few blinking components, and an array of red and blue lit buttons above. We only see Captain Janek touch this panel once, and do not see the effects.


Navigator Chance’s instrument panel below consists of four 4:3 displays with inscrutable moving graphs and charts, one very wide display showing a topographic scan of terrain, one dim panel, two backlit reticles, and a handful of lit switches and buttons. Yellow lines surround most dials and group clusters of controls. When Chance “switches to manual,” he flips the lit switches from right to left (nicely accomplishable with a single wave of the hand) and the switches’ lights come on to confirm the change of state. This state would also be visible from a distance, useful for all crew within line of sight. Presumably, though, this is a dangerous state for the ship to be in, so some greater emphasis might be warranted: either a blinking warning, audio feedback, or possibly both.


Captain Janek has a joystick control for manual landing control. It has a line of light at the top rear-facing part, but its purpose is not apparent. The degree of differentiation in the controls is great, and they seem to be clustered well.


A few contextless flight screens are shown. One for the scientist known only as Ford features 3D charts, views of spinning spaceships, and other inscrutable graphs, all of which are moving.


A contextless view shows the points of metal detected overlaid on a live view from the ship.


There is a weather screen as well that shows air density. Nearby there’s a push control, which Chance presses and keeps held down when he says, “Boss, we’ve got an incoming storm front. Silica and lots of static. This is not good.” Though we never see the control up close, it’s curious how such a thing could work. Would it be an entire-ship intercom, or did Chance somehow specify Janek as a recipient with a single button?


Later we see Chance press a single button that illuminates red, after which the screens nearby change to read “COLLISION IMMINENT,” and an all-ship prerecorded announcement begins to repeat its evacuation countdown.


This single button is perhaps the most egregious of the flight controls. As Janek says to Shaw late in the film, “This is not a warship.” If that’s the case, why would Chance have a single control that automatically knows to turn all screens red with the Big Label and provide a countdown? And why should the crew ever have to turn this switch on? Isn’t a collision one of the most serious things that could happen to the ship? Shouldn’t it be hard to, you know, turn off?