Glossary: Facing, Off-facing, Lengthwise, and Edgewise

As part of the ongoing review of the Iron Man HUD, I noticed a small feature in the Iron Man 3 2nd-person UI that—in order to critique—I have to discuss some new concepts and introduce some new terms. The feature itself is genuinely small and almost not worth posting about, but the terms are interesting, so bear with me.

Most of the time that JARVIS animates the HUD, the UI elements sit on an invisible sphere that surrounds Tony’s head. (And in the case of stacked elements, on concentric invisible spheres.) The video window of Pepper in the following screenshot illustrates this pretty clearly. It is a rectangular video feed, but it appears slightly bowed to us, sitting on this sphere near the periphery of this 2nd-person view.

IronMan3_HUD68
…And Pepper Potts is up next with her op-ed about the Civil Mommy Wars. Stay tuned.

Having elements slide around on the surface of this perceptual sphere is usable for Tony, since it means the elements are always facing him and thereby optimally viewable. “PEPPER POTTS,” for example, is as readable as if it were printed on a book held perpendicular to his line of sight. (This notion is a bit confounded by the problems of parallax I wrote about in an earlier post, but since that seems unresolvable until Wim Wouters implements this exact HUD on Oculus Rift, let’s bypass it to focus on the new thing.)

So if it’s visually optimal to have 2D UI elements plastered to the surface of this perceptual sphere, how do we describe that suboptimal state where these same elements are not perpendicular to the line of sight, but angled away? I’m partly asking for a friend named Tony Stark because that’s some of what we see in Iron Man 3, both in 1st- and 2nd-person views. These examples aren’t egregious.

IronMan3_HUD44
The Iron Patriot debut album cover graphic is only slightly angled and so easy to read. Similarly, the altimeter thingy on the left is still wholly readable.
IronMan3_HUD64
The weird L-protractor in the corner might have some 3D use we’re just not seeing at this particular moment.

As I mentioned in the opening paragraph, these things aren’t terrible in and of themselves, but as a UI pattern this could get bad as people misunderstand and overuse it, so we need a way to talk about it. To be precise, we need a way to talk about the degree of tilt away from a plane perpendicular to the line of sight. Except “degree of tilt away from a plane perpendicular to the line of sight” is waaay too long.

To find this term, I did some asking around on social media. At first, lots of folks jumped to anatomical terms of location like sagittal or caudal, but should you be similarly tempted, note that these terms are fixed per the body. A UI element that is coronal in front of the face, and perfectly readable there, is utterly unreadable near the ear. A facing element would be readable in both places, and a whatever-the-antonym-is element similarly unreadable as it slid from the nose around the side. 

BodyPlanes

Eventually I got some nice adjectives that describe the particular tilt away from the line of sight. I was most happy with industrial designer Abhinav Dapke’s suggestion of “lengthwise” for a pitch away from line-of-sight, since it’s a word we already have and it’s very descriptive. It also implies another existing word for yawed-against line-of-sight, and that’s “edgewise.” (Roll along line-of-sight can be handled simply as rotation, for you completionists.)

But for the single variable that we can discuss as an antonym to facing, my crowdsourcing turned up nothing, and so I’m going to coin the ungainly adjectives off-facing and off-faced. Each is short, decryptable, not currently defined as something else, and obviously connected to its source concept, so the coinage works for many reasons.

off-facing.png

 

With these we can now speak of those elements that are off-faced in Iron Man and similar bubble HUDs, and do an Invasion of the Body Snatchers-esque pointing and screeching when it’s too extreme.
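For the mathematically inclined, off-facedness can be quantified as the angle between a UI element’s surface normal and the viewer’s line of sight: 0° is perfectly facing, and anything approaching 90° is fully lengthwise or edgewise. The sketch below is my own illustration of that definition, not anything from the films; all names are hypothetical.

```python
import math

def off_facing_angle(normal, line_of_sight):
    """Angle in degrees between a UI element's surface normal and the
    viewer's line of sight. 0 = perfectly facing the viewer; 90 = the
    element is seen edge-on (fully lengthwise or edgewise)."""
    dot = sum(n * v for n, v in zip(normal, line_of_sight))
    norm_n = math.sqrt(sum(n * n for n in normal))
    norm_v = math.sqrt(sum(v * v for v in line_of_sight))
    cos_theta = dot / (norm_n * norm_v)
    # A facing element's normal points back toward the viewer, so flip
    # the sign, and clamp for floating-point safety.
    cos_theta = max(-1.0, min(1.0, -cos_theta))
    return math.degrees(math.acos(cos_theta))

# An element plastered on the perceptual sphere faces the viewer directly:
print(off_facing_angle(normal=(0, 0, -1), line_of_sight=(0, 0, 1)))  # 0.0
# Pepper's slightly bowed window near the periphery would score a little
# above 0; an element turned fully lengthwise or edgewise scores 90.
```

One nice property of collapsing pitch and yaw into this single angle is that it matches how readability actually degrades: a label tilted 60° lengthwise is about as compromised as one yawed 60° edgewise.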

Note that this only applies to 2D UI elements that are meant to be read. The overwhelming majority of things we see in the physical world are not oriented to our line of sight and that poses little problem. Even in the Iron Man HUD we see plenty of objects that are off-faced but rightly so, since as augmentations they bear orientation to the world, not the viewer.

IronMan3_HUD63

One of the main reasons I went to such trouble to come up with these terms is that I think the Iron Man HUD is one of the most forward-provoking sci-fi interfaces in the survey. It ought to be the Minority Report Precrime Scrubber of its day. I suspect it will become more and more influential, and so these new terms are likely to become more useful and necessary as sci-fi keeps on keepin’ on.

Next up in the Iron HUD series: We discuss how JARVIS is straight-up lying to Tony Stark.

Iron Man HUD: 1st person view

In the prior post we catalogued the functions in the Iron HUD. Today we examine the 1st-person display.

When we first see the HUD, Tony is donning the Iron Man mask. Tony asks JARVIS, “You there?” To which JARVIS replies, “At your service, sir.” Tony tells him to “Engage the heads-up display,” and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony’s eye. Most are small dashboard-like gauges that remain small and in Tony’s peripheral vision while the information is not needed, and become larger and more central when it is. These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony’s face overlaid against a dark background with floating graphics.

This post is about that first-person view. Specifically it’s about the visual design and the four awarenesses it displays.

Avengers-missile-fetching04

In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:

  1. Sensor display
  2. Location awareness
  3. Context awareness
  4. Goal awareness

The Iron Man HUD illustrates all four, and together they make a useful framework for describing and critiquing the 1st-person view.

Sensor display

When looking through the HUD “ourselves,” we can see that the HUD provides some airplane-like heads-up instruments: across the top is a horizontal compass with a thin white line for a needle. Below and to its left is a speed indicator, presented as a Mach number. On the left side of the screen is a two-part altimeter with overlays indicating public, commercial, military, and aerospace layers of the atmosphere, with a small blue tick mark indicating Tony’s current altitude.

There are just-in-time status indicators, like that cyan text box on the right with its randomized rule line. The content within is all “N -8 W -97 RNG EL,” so it’s hard to tell what it means, but Tony’s a maker working with a prototype. It’s no surprise he takes some shortcuts in the interface since it’s not a commercial device. But we should note that it would reduce his cognitive load not to have to remember what those cryptic letters mean.

IronMan1_HUD08
You can just see the tops of these gauges at the bottom of this screen.

The exact sensor shown depends on the context and goal at hand.

Periphery and attention

A quick sidenote about peripheral vision and the detail of these gauges. Looking at them, it’s notable that they are small and quite detailed. That makes sense when he’s looking right at them, but when he’s not, the big, swirling graphics he’s got vying for his attention in the main display mean those little gauges have that much more to compete with. And when it comes to your peripheral vision, localized detail and motion are not enough, owing to the limits of our foveal extent. (Props to @pixelio for the heads-up on this one.)

You see, your brain tricks you into thinking that you can see really well across your entire field of vision. In fact, you can only see really well across a few degrees of that perceptual sphere, corresponding to the tiny area at the back of your eye called the fovea, where all the really good photoreceptors concentrate. As your eyes dart around the scene before you, your brain puts all the snippets of detailed information together so it feels like a cohesive, well-detailed whole, but it’s ultimately just a hack. Take a look at this demonstration of the effect.

Screen Shot 2015-07-20 at 23.49.56
This only works if you view it live.

So, having those teeny little gauges dancing around as a signal of troubles ahead won’t really get Tony’s attention. He could develop habits of glancing at these things, but that’s a weak strategy, since this data is so mission-critical. If he misses it and forgets to check the gauges, he’s Iron Toast. Fortunately, JARVIS is once again our deus ex machina (in so many senses) because he is able to track where Tony is looking, and if he’s not looking at the wiggling gauge, JARVIS can choose to escalate the signal: hide the air traffic data temporarily and show the problem on the main screen. Here, as in other mission-critical systems, attention management is crisis management. Now, for those of us working with pre-JARVIS tech, it’s rare today for a system to be able to

  • Track perceptual details of its users
  • Monitor a model of the user’s attention
  • Make the right call amongst competing priorities to escalate the right one

But if you could, it would be the smart and humane way to handle it.
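To make that escalation idea concrete, here’s a minimal sketch of the logic described above. Everything in it is my own assumption—the film never shows how JARVIS arbitrates alerts—and the gaze tracking and attention model are stubbed down to two inputs: what the user is looking at, and for how long.

```python
def escalate_alert(alerts, gaze_target, focus_duration_s):
    """Pick which alert, if any, to promote to the main display.

    alerts: list of dicts with 'id', 'priority' (higher = more urgent),
            and 'acknowledged' (has the user looked at its gauge yet?).
    gaze_target: id of the gauge the user is currently looking at.
    focus_duration_s: how long the gaze has rested there.
    """
    ACK_DWELL_S = 0.5  # assumed dwell time that counts as "seen"

    # A long-enough glance at a gauge acknowledges its alert.
    for alert in alerts:
        if alert["id"] == gaze_target and focus_duration_s >= ACK_DWELL_S:
            alert["acknowledged"] = True

    # Among unacknowledged alerts, escalate the highest-priority one.
    pending = [a for a in alerts if not a["acknowledged"]]
    if not pending:
        return None  # nothing needs the main screen
    return max(pending, key=lambda a: a["priority"])["id"]

alerts = [
    {"id": "altitude", "priority": 9, "acknowledged": False},
    {"id": "power", "priority": 5, "acknowledged": False},
]
# Tony just stared at the altitude gauge, so the power alert escalates next:
print(escalate_alert(alerts, gaze_target="altitude", focus_duration_s=1.2))
```

The design choice worth noting is that escalation is driven by what the user has *not* seen, rather than by raw alarm priority alone—which is exactly why a model of attention has to sit between the sensors and the display.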

Location Awareness

As Tony prepares for his first flight, JARVIS gives him a bit of x-ray vision, displaying a wireframe view of the Santa Monica coastline with live air traffic control icons of aircraft in the vicinity. The overhead map, of course, updates in real time.

IronMan1_HUD17
If my Google Earth sleuthing is right, his view means he lives in the Malibu RV Park and this view is due East.

Context Awareness

Very quickly after we meet the HUD, it shows its object recognition capabilities. As Tony sweeps his glance across his garage, complex reticles jump to each car. Split seconds afterward, the car’s outline is overlaid and some adjunct information about it is presented.

IronMan1_HUD10

This holds true as he’s in flight as well. When Tony passes by the Santa Monica pier, not only is the Pacific Wheel identified (as the Santa Monica Ferris wheel), but the interface shows him a Wikipedia-esque article about the thing as well.

IronMan1_HUD19

IronMan1_HUD21

While JARVIS might be tapping into location databases for both the car and the Ferris wheel recognition, it’s more than that. In one scene we see him getting information on the Iron Patriot as it rockets away, and its location wouldn’t be in any real-time record for him to access.

Optical zoom

Too much detail

While this level of object detail is deeply impressive, it’s about as useful as reading Wikipedia pages hard-printed to transparencies while driving. The text is too small, too multilayered, and just pointless considering that JARVIS can tell him whatever he needs to know without even asking. Maybe he could indulge in pop-up pamphlets if he was on a long-haul flight from, say, Europe back home to the Malibu RV Park (see above), but wouldn’t Tony rather watch a movie while on Autopilot instead?

Goal awareness

Of course JARVIS is aware of Tony’s goals, and provides graphics customized to the task, whether that task is navigating flight through complex obstacle courses…

3D wayfinding

…taking down a bad guy with the next hit…

Suggested target points

…saving innocent bystanders who are freefalling from a plane…

Biometric analysis, target acquisition

…or instantly analyzing problems in an observed (and complicated) piece of machinery…

3D schematics of observed machinery with damage highlights

…JARVIS is there with the graphics to help illustrate, if not solve, the problem at hand. Most impressive, perhaps, is JARVIS’ ability to juggle all of these graphics and modes seamlessly, presenting just the right thing at the right time, in real time. Tony never asks for a particular display; it just happens. If you needed no other proof of his strong artificial intelligence, this would be it.

Next up in the Iron HUD series: Compare and contrast the 2nd-person view.

Iron Man HUD: Just the functions

In the last post we went over the Iron HUD components. There is a great deal to say about the interactions and interface, but let’s just take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can very easily move between exosuits.

Gauges

Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.

IronMan1_HUD07

For the most part they persist as tiny icons and are thereby hard to read, but when the suit reboots during a high-altitude freefall, we get to see giant versions of them, and can read what they are:

IronMan1_HUD13
Tony can, at a glance or request, summon more detail for any of the gauges.
IronMan1_HUD12
Even different visualizations of similar information.

Object Recognition

In the 1st-person view we see that the HUD has a separate map in the lower-left, as well as object recognition/awareness.

IronMan1_HUD10
IronMan1_HUD11
In the 2nd-person view, we see even more layers of information about the identified objects, floating closer to Tony’s point of view.

Situational

Most of the HUD functions we see, though, are situational, brought up for Tony’s attention when JARVIS believes they are needed, or when Tony requests them. Following are screenshots that illustrate a moment when the situational function appeared. 

Iron Man

Iron Man 2

Iron Man 3

The Avengers

Some of these illustrate why I argue that JARVIS is the superhero, and Tony just the onboard manager, but rather than reverse engineering any particular function, for this post it is enough to document them and note that only the optical zoom seems to be an interactive function. This raises questions of how Tony initiates the mode and how he escapes it, but since we don’t see the mechanisms of control, it’s entirely arguable that JARVIS is just being his usual helpful self again.

Next up in the Iron HUD series: Let’s dive deeper into the first-person view.