Dr. Strange’s augmented reality surgical assistant

We’re actually done with all of the artifacts from Doctor Strange. But there’s one last kind-of interface that’s worth talking about, and that’s when Strange assists with surgery on his own body.

After being shot with a soul-arrow by the zealot, Strange is in bad shape. He needs medical attention. He recovers his sling ring and creates a portal to the emergency room where he once worked. Stumbling with the pain, he manages to find Dr. Palmer and tell her he has a cardiac tamponade. They head to the operating theater and get Strange on the table.


When Strange passes out, his “spirit” is ejected from his body as an astral projection. Once he realizes what’s happened, he gathers his wits and turns to observe the procedure.


When Dr. Palmer approaches his body with a pericardiocentesis needle, Strange manifests so she can sense him and recommends that she aim “just a little higher.” At first she is understandably scared, but once he explains what’s happening, she gets back to business, and he acts as a virtual coach.


Jefferson Projection


When Imperial troopers intrude to search the house, one of the bullying officers takes interest in a device sitting on the dining table. It’s the size of a sewing machine, with a long handle and a set of thumb toggles along the top, like the buttons on an old cassette tape recorder.

Saun convinces the officer to sit down, stretches the thin script with a bunch of pointless fiddling of a volume slider and pantomimed delays, and at last fumbles the front of the device open. Hinged at the bottom like a drawbridge, it exposes a small black velvet display space. Understandably exasperated, the officer stands up to shout, “Will you get on with it?” Saun presses a button on the opened panel, and the searing chord of an electric guitar can be heard at once.


Inside the drawbridge-space a spot of pink light begins to glow, mesmerizing the officer, who moments ago was bent on brute intimidation but now spends the next five minutes and 23 seconds grinning dopily at the volumetric performance by Jefferson Starship.

During the performance, six lights blink in a pattern in the upper right-hand corner of the display. When the song finishes, the device goes silent. No other interactions are seen with it.


Many questions. Why is there a whole set of buttons to open the thing? Is this the only thing it can play? If not, how do you select another performance? Is it those unused buttons on the top? Why are the buttons unlabeled? Is Jefferson Starship immortal? How is it that they have barely aged in the long, long time since this was recorded? Or was this volumetric recording somehow sent back in time? Where is the button that Saun pressed to start the playback? If there was no button, and it was the entire front panel, why doesn’t it turn on and off while the officer taps (see above)? What do the little lights do other than distract? Why is the glow pink rather than Star-Wars-standard blue? Since volumetric projections are most often free-floating, why does this appear in a lunchbox? Since ubiquitous display screens already exist, why would anyone haul this thing around? How does this officer keep his job?

Perhaps it’s best that these questions remain unanswered. For if anything were substantially different, we would risk losing this image, of the silhouette of the lead singer and his microphone. Humanity would be the poorer for it.


The holocircus

To distract Lumpy while she tends to dinner, Malla sits him down at a holotable to watch a circus program. She leans down to one of the four control panels inset around the table’s edge, presses a few buttons, and the program begins.


In the program, small volumetric projections of human (not Wookiee) performers appear on the surface of the table and begin a dance and acrobatic performance to a soundtrack that is, frankly, ear-curdling.


Grabby hologram

After Pepper tosses off the sexy bon mot “Work hard!” and leaves Tony to his Avengers initiative homework, Tony stands before the wall-high translucent displays projected around his room.

Amongst the videos, diagrams, metadata, and charts of the Tesseract panel, one item catches his attention. It’s the 3D depiction of the object, the tesseract itself, one of the Infinity Stones from the MCU. It is a cube rendered in a white wireframe, glowing cyan amidst the flat objects otherwise filling the display. It has an intense, cold-blue glow at its center.  Small facing circles surround the eight corners, from which thin cyan rule lines extend a couple of decimeters and connect to small, facing, inscrutable floating-point numbers and glyphs.


Wanting to look closer at it, he reaches up and places fingers along the edge as if it were a material object, and swipes it away from the display. It rests in his hand as if it was a real thing. He studies it for a minute and flicks his thumb forward to quickly switch the orientation 90° around the Y axis.

Then he has an Important Thought and the camera cuts to Agent Coulson and Steve Rogers flying to the helicarrier.

So regular readers of this blog (or you know, fans of blockbuster sci-fi movies in general) may have a Spidey-sense that this feels somehow familiar as an interface. Where else do we see a character grabbing an object from a volumetric projection to study it? That’s right, that seminal insult-to-scientists-and-audiences alike, Prometheus. When David encounters the Alien Astrometrics VP, he grabs the wee earth from that display to nuzzle it for a little bit. Follow the link if you want that full backstory. Or you can just look and imagine it, because the interaction is largely the same: See display, grab glowing component of the VP and manipulate it.

Two anecdotes are not yet a pattern, but I’m glad to see this particular interaction again. I’m going to call it grabby holograms (capitulating a bit on adherence to the more academic term volumetric projection). We grow up having bodies and moving about in a 3D world, so the desire to grab and turn objects to understand them is quite natural. It does require that we stop thinking of displays as untouchable, uninterruptible movies and more like toy boxes, and it seems like more and more writers are catching on to this idea.

More graphics or more information?

Additionally, the fact that the tesseract is the one 3D object in its display is a nice affordance that it can be grabbed. I’m not sure whether he can pull the frame containing the JOINT DARK ENERGY MISSION video to study it on the couch, but I’m fairly certain I knew that the tesseract was grabbable before Tony reached out.

On the other hand, I do wonder what Tony could have learned by looking at the VP cube so intently. There’s no information there. It’s just a pattern on the sides. The glow doesn’t change. The little glyph sticks attached to the edges are fuigets. He might be remembering something he once saw or read, but he didn’t need to flick it like he did for any new information. Maybe he has flicked a VP tesseract in the past?

Augmented “reality”

Rather, I would have liked to have seen those glyph sticks display some useful information, perhaps acting as leaders that connected the VP to related data in the main display. One corner’s line could lead to the Zero Point Extraction chart. Another to the lovely orange waveform display. This way Tony could hold the cube and glance at its related information. These are all augmented reality additions.
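To make that idea concrete, here’s a minimal sketch of how such leader assignments might work. The panel names come from the scene, but the round-robin mapping and every function name here are my own invention, not anything shown in the film:

```python
from itertools import product

# Panels named in the scene that the glyph sticks could lead to.
PANELS = [
    "Zero Point Extraction chart",
    "orange waveform display",
    "JOINT DARK ENERGY MISSION video",
]

def corner_leaders(panels):
    """Assign a related panel to each cube corner, round-robin.

    Corners are the (x, y, z) vertices of a unit cube, so there are
    always eight leader lines, one per glyph stick.
    """
    corners = list(product((0, 1), repeat=3))  # 8 corners
    return {corner: panels[i % len(panels)] for i, corner in enumerate(corners)}
```

With three panels and eight corners, every panel gets at least two leader lines, so Tony could hold the cube and glance at any of its related information.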

Augmented VP

Or, even better, he could do some things that are possible with VPs that aren’t possible with AR. He should be able to scale it to be quite large or small. Create arbitrary sections, or plan views. Maybe fan out depictions of all objects in the SHIELD database that are similarly glowy, stone-like, or that remind him of infinity. Maybe…there’s…a…connection…there! Or better yet, have a copy of JARVIS study the data to find correlations and likely connections to consider. We’ve seen these genuine VP interactions plenty of places (including Tony’s own workshop), so they’re part of the diegesis.

In any case, this simple setup works nicely, in which interaction with a cool medium helps underscore the gravity of the situation, the height of the stakes. Note to selves: The imperturbable Tony Stark is perturbed. Shit is going to get real.


Stark Tower monitoring

Since Tony disconnected the power transmission lines, Pepper has been monitoring Stark Tower in its new, off-the-power-grid state. To do this she studies a volumetric dashboard display that floats above glowing shelves on a desktop.


Volumetric elements

The display features some volumetric elements, all rendered as wireframes in the familiar Pepper’s Ghost (I know, I know) visual style: translucent, edge-lit planes. A large component to her right shows Stark Tower, with red lines highlighting the power traveling from the large arc reactor in the basement through the core of the building.

The center of the screen has a similarly-rendered close up of the arc reactor. A cutaway shows a pulsing ring of red-tinged energy flowing through its main torus.

This component makes a good deal of sense, showing her the physical thing she’s meant to be monitoring, not in a photographic way but in a way that helps her quickly locate any problems in space. The torus cutaway is a little strange, since if she’s meant to be monitoring it, she should monitor the whole thing, not just a quarter of it that has been cut away.

Flat elements

The remaining elements in the display appear on a flat plane.

Iron Man HUD: 2nd-person view

In the prior post we looked at the HUD display from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out to be not what it seems.

The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.

When in the first film Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”…we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we are inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean by that is that even in the diegesis, the Marvel Cinematic World, this is not something that could be seen. Let’s move through the reasons why.

The bug VP


In biology class, the (unnamed) professor points her walking stick (she’s blind) at a volumetric projector. The tip flashes for a second, and a volumetric display comes to life. It illustrates for the class what one of the bugs looks like. The projection device is a cylinder with a large lens atop a rolling base. A large black plug connects it to the wall.

The display of the arachnid appears floating in midair, a highly saturated screen-green wireframe that spins. It has very slight projection rays at the cylinder and a "waver" of a scan line that slowly rises up the display. When it initially illuminates, the channels are offset and only unify after a second.



The top and bottom of the projection are ringed with tick lines, and several tick lines run vertically along the height of the bug for scale. A large, lavender label at the bottom identifies this as an ARACHNID WARRIOR CLASS. There is another lavender key too small for us to read. The arachnid in the display is still, though the display slowly rotates around its y-axis, clockwise from above. The instructor uses this as a backdrop for discussing arachnid evolution and "virtues."

After the display continues for 14 seconds, it shuts down automatically.



It’s nice that it can be activated with her walking stick, an item we can presume isn’t common, since she’s the only apparently blind character in the movie. It’s essentially gestural, though what a blind user needs with a flash for feedback is questionable. Maybe that signal is somehow for the students? What happens for sighted teachers? Do they need a walking stick? Or would a hand do? What’s the point of the flash then?

That it ends automatically seems pointlessly limited. Why wouldn’t it continue to spin until it’s dismissed? Maybe the way she activated it indicated it should only play for a short while, but it didn’t seem like that precise a gesture.

Of course it’s only one example of interaction, but there are so many other questions to answer. Are there different models that can be displayed? How would she select a different one? How would she zoom in and out? Can it display animations? How would she control playback? There are quite a lot of unaddressed details for an imaginative designer to ponder.


The display itself is more questionable.

Scale is tough to tell on it. How big is that thing? Students would have seen video of it for years, so maybe it’s not such an issue. But a human for scale in the display would have been more immediately recognizable. Or better yet, no scale: Show the thing at 1:1 in the space so its scale is immediately apparent to all the students. And more appropriately, terrifying.

And why the green wireframe? The bugs don’t look like that. If it was showing some important detail, like carapace density, maybe, but this looks pretty even. How about some realistic color instead? Do they think it would scare kids? (More than the “gee-whiz!” girl already is?)

And lastly there’s the title. Yes, having it rotate accommodates viewers in 360 degrees, but it only reads right for half the time. Copy it, flip it 180° on the y-axis, and stack it, and you’ve got the most important textual information readable at almost any time from the display.
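The arithmetic behind that suggestion is easy to sketch. Assuming a label is legible whenever it faces within 90° of a viewer (an assumption of mine, not anything measurable from the film), a single rotating label reads correctly only about half the time, while a mirrored copy covers the other half:

```python
def label_readable(label_angle_deg: float, viewer_angle_deg: float = 0.0) -> bool:
    """A label is legible (not mirrored or edge-on) when it faces
    within 90 degrees of the viewer."""
    delta = (label_angle_deg - viewer_angle_deg) % 360
    return delta <= 90 or delta >= 270

def readable_fraction(mirrored_copy: bool) -> float:
    """Fraction of a full rotation during which at least one copy of the
    label reads correctly, with or without a 180-degree mirrored copy."""
    readable = 0
    for angle in range(360):
        if label_readable(angle) or (mirrored_copy and label_readable(angle + 180)):
            readable += 1
    return readable / 360
```

With the mirrored copy, `readable_fraction` comes out to 1.0: the title is legible from any seat in the classroom at any moment of the rotation.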

Better of course is more personal interaction, individual displays or augmented reality where a student can turn it to examine the arachnid themselves, control the zoom, or follow up on more information. (Want to know more?) But the school budget in the world of Starship Troopers was undoubtedly stripped to increase the military budget (what a crappy world that would be, amirite?), and this single mass display might be more cost effective.

Pilot seat


The reawakened alien places his hand in the green display and holds it there for a few seconds. This summons a massive pilot seat. If the small green sphere is meant to be a map to the large cyan astrometric sphere, the mapping is questionable. Better perhaps would be to touch where the seat would appear and lift upwards through the sphere.

He climbs into the seat and presses some of the “egg buttons” arrayed on the armrests and on an oval panel above his head. The buttons illuminate in response, blinking individually from within. The blink pattern for each is regular, so it’s difficult to understand what information this visual noise conveys. A few more egg presses re-illuminate the cyan astrometric display.


A few more presses on the overhead panel revs up the spaceship’s engines and seals him in an organic spacesuit. The overhead panel slowly advances towards his face. The purpose for this seems inexplicable. If it was meant to hold the alien in place, why would it do so with controls? Even if they’re just navigation controls that no longer matter since he is on autopilot, he wouldn’t be able to take back sudden navigation control in a crisis. If the armrest panels also let him navigate, why are the controls split between the two parts?



On automatic at this point, the VP traces a thin green arc from the chair to the VP earth and adds highlight graphics around it. Then the ceiling opens and the spaceship lifts up into the air.

Alien Astrometrics


When David is exploring the ancient alien navigation interfaces, he surveys a panel, and presses three buttons whose bulbous tops have the appearance of soft-boiled eggs. As he presses them in order, electronic clucks echo in the cavern. After a beat, one of the eggs flickers, and glows from an internal light. He presses this one, and a seat glides out for a user to sit in. He does so, and a glowing pollen volumetric projection of several aliens appears. The one before David takes a seat in the chair, which repositions itself in the semicircular indentation of the large circular table.


The material selection of the egg buttons could not be a better example of affordance. The part that’s meant to be touched looks soft and pliable, smooth and cool to the touch. The part that’s not meant to be touched looks rough, like immovable stone. At a glance, it’s clear what is interactive and what isn’t. Among the egg buttons there are some variations in orientation, size, and even surface texture. It is the bumpy-surfaced one that draws David’s attention to touch first that ultimately activates the seat.

The VP alien picks up and blows a few notes on a simple flute, which brings that seat’s interface fully to life. The eggs glow green and emit green glowing plasma arcs between certain of them. David is able to place his hand in the path of one of the arcs and change its shape as the plasma steers around him, but it does not appear to affect the display. The arcs themselves appear to be a status display, but not a control.

After the alien manipulates these controls for a bit, a massive, cyan volumetric projection appears and fills the chamber. It depicts a fluid node network mapped to the outside of a sphere. Other node network clouds appear floating everywhere in the room along with objects that look like old Bohr models of atoms, but with galaxies at their center. Within the sphere three-dimensional astronomical charts appear. Additionally huge rings appear and surround the main sphere, rotating slowly. After a few inputs from the VP alien at the interface, the whole display reconfigures, putting one of the small orbiting Bohr models at the center, illuminating emerald green lines that point to it and a faint sphere of emerald green lines that surround it. The total effect of this display is beautiful and spectacular, even for David, who is an unfeeling replicant cyborg.


At the center of the display, David observes that the green-highlighted sphere is the planet Earth. He reaches out towards it, and it falls to his hand. When it is within reach, he plucks it from its orbit, at which point the green highlights disappear with an electronic glitch sound. He marvels at it for a bit, turning it in his hands, looking at Africa. Then after he opens his hands, the VP Earth gently returns to its rightful position in the display, where it is once again highlighted with emerald, volumetric graphics.


Finally, in a blinding flash, the display suddenly quits, leaving David back in the darkness of the abandoned room, with the exception of the small Earth display, which is floating over a small pyramid-shaped protrusion before flickering away.

After the Earth fades, David notices the stasis chambers around the outside of the room. He realizes that what he has just seen (and interacted with) is a memory from one of the aliens still present.



Hilarious and insightful Youtube poster CinemaSins asks in the video “Everything Wrong with Prometheus in 4 minutes or Less,” “How the f*ck is he holding the memory of a hologram?” Fair question, but not unanswerable. The critique only stands if you presume that the display must be passive and must play uninterrupted like a television show or movie. But it certainly doesn’t have to be that way.

Imagine if this is less like a YouTube video, and more like a playback through a game engine like a holodeck StarCraft. Of course it’s entirely possible to pause the action in the middle of playback and investigate parts of the display, before pressing play again and letting it resume its course. But that playback is a live system. It would be possible to run it afresh from the paused point with changed parameters as well. This sort of interrupt-and-play model would be a fantastic learning tool for sensemaking of 4D information. Want to pause playback of the signing of the Magna Carta and pick up the document to read it? That’s a “learning moment” and one that a system should take advantage of. I’d be surprised if—once such a display were possible—it wouldn’t be the norm.


The only thing I see that’s missing in the scene is a clear signal about the different state of the playback:

  1. As it happened
  2. Paused for investigation
  3. Playing with new parameters (if it was actually available)

David moves from 1 to 2, but the only change of state is the appearance and disappearance of the green highlight VP graphics around the Earth. This is a signal that could easily be missed, and wasn’t present at the start of the display. Better would be some global change, like a global shift in color to indicate the different state. A separate signal might compare As it Happened with the results of Playing with new parameters, but that’s a speculative requirement of a speculative technology. Best to put it down for now and return to what this interface is: One of the most rich, lovely, and promising examples of sensemaking interactions seen on screen. (See what I did there?)
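As a sketch of that missing signal, here is a toy state machine for the three playback states, with a hypothetical global tint per state. The state names, tints, and allowed transitions are my own assumptions, not anything established in the film:

```python
from enum import Enum, auto

class PlaybackState(Enum):
    AS_HAPPENED = auto()      # the memory replays untouched
    PAUSED = auto()           # a viewer is inspecting part of the display
    NEW_PARAMETERS = auto()   # speculative: re-running playback with changes

# A global tint per state, so the mode change can't be missed.
STATE_TINT = {
    PlaybackState.AS_HAPPENED: "cyan",
    PlaybackState.PAUSED: "amber",
    PlaybackState.NEW_PARAMETERS: "green",
}

# You can only enter NEW_PARAMETERS from a pause, matching the
# interrupt-and-play model described above.
TRANSITIONS = {
    PlaybackState.AS_HAPPENED: {PlaybackState.PAUSED},
    PlaybackState.PAUSED: {PlaybackState.AS_HAPPENED, PlaybackState.NEW_PARAMETERS},
    PlaybackState.NEW_PARAMETERS: {PlaybackState.PAUSED},
}

def transition(current: PlaybackState, target: PlaybackState) -> str:
    """Return the new global tint, or raise if the move isn't allowed."""
    if target not in TRANSITIONS[current]:
        raise ValueError(f"can't go from {current.name} to {target.name}")
    return STATE_TINT[target]
```

In this sketch, David plucking the Earth would be a transition from AS_HAPPENED to PAUSED, and the whole room would shift tint, a signal far harder to miss than one disappearing highlight.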

For more about how VP might be more than a passive playback, see the lesson in Chapter 4 of Make It So, page 84, VP Systems Should Interpret, Not Just Report.

Mission Briefing

Once the Prometheus crew has been fully revived from their hypersleep, they gather in a large gymnasium to learn the details of their mission from a prerecorded volumetric projection. To initiate the display, David taps the surface of a small tablet-sized handheld device six times, and looks up. A prerecorded VP of Peter Weyland appears and introduces the scientists Shaw and Holloway.

This display does not appear to be interactive. Weyland does mention and gesture toward Shaw and Holloway in the audience, but they could have easily been in assigned seats.

Cue Rubik’s Space Cube

After his introduction, Holloway places an object on the floor that looks like a silver Rubik’s Cube with a depressed black button in the center-top square.


He presses a middle-edge button on the top, and the cube glows and sings a note. Then a glowing-yellow “person” icon appears at the place he touched, confirming his identity and that the device is ready to go.

He then presses an adjacent corner button. Another glowing-yellow icon appears underneath his thumb, this one a triangle-within-a-triangle, and a small projection grows from the side. Finally, by pressing the black button, all of the squares on top open by hinged lids, and the portable projection begins. A row of 7 (or 8?) “blue-box” style volumetric projections appear, showing their 3D contents with continuous, slight rotations.

Gestural control of the display

After describing the contents of each of the boxes, he taps the air toward either end of the row (a sparkle sound confirms the gesture) and brings his middle fingers together in a prayer position. In response, the boxes slide to the center as a stack.

He then twists his hands in opposite directions, keeping the fingerpads of his middle fingers in contact. As he does this, the stack merges.


Then a forefinger tap summons an overlay that highlights a star pattern on the first plate. A middle finger swipe to the left moves the plate and its overlay off to the left. The next plate automatically highlights its star pattern, and he swipes it away. Next, with no apparent interaction, the plate dissolves in a top-down disintegration-wind effect, leaving only the VP spheres that illustrate the star pattern. These grow larger.

Holloway taps the topmost of these spheres, and the VP zooms through interstellar space to reveal an indistinct celestial sphere. He then taps the air again (nothing in particular is beneath his finger) and the display zooms to a star. Another tap zooms to a VP of LV-223.



After a beat of about 9 seconds, the presentation ends, and the VP of LV-223 collapses back into its floor cube.

Evaluating the gestures

In Chapter 5 of Make It So we list the seven pidgin gestures that Hollywood has evolved. The gestures seen in the Mission Briefing confirm two of these: Push to Move and Point to Select, but otherwise they seem idiosyncratic, not matching other gestures seen in the survey.

That said, the gestures seem sensible. On tapping the “bookends” of the blue boxes, Holloway’s finger pads come to represent the extents of the selection, so bringing them together is a reasonable gesture to indicate stacking. The twist gesture seems to lock the boxes in place, to break the connection between them and his fingertips. This twist gesture turns his hand like a key in a lock, and so has a physical analogue.

It’s confusing that a tap would perform four different actions (highlight star patterns in the blue boxes, zoom to the celestial sphere, zoom to star, zoom to LV-223) but there is no indication that this is a platform for manipulating VPs as much as it is a presentation software. With this in mind he could arbitrarily assign any gesture to simply “advance the slide.”
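A sketch of that interpretation: in presentation mode, many distinct gestures could all be bound to a single “advance” action, so the tap’s four apparent meanings collapse into one. The gesture names and the dispatcher here are hypothetical, chosen only to illustrate the idea:

```python
# Gestures that presentation mode collapses into a single "advance" action.
ADVANCE_GESTURES = {"tap", "swipe_left", "air_tap"}

def handle_gesture(gesture: str, slide: int, slide_count: int) -> int:
    """Advance the deck on any advance-mapped gesture; ignore everything else.

    Each "slide" can do whatever it likes on entry (highlight a star
    pattern, zoom to a star, zoom to LV-223), which is why one gesture
    appears to mean four different things.
    """
    if gesture in ADVANCE_GESTURES and slide + 1 < slide_count:
        return slide + 1
    return slide
```

The design choice this illustrates: the semantics live in the slides, not the gestures, which is exactly how presentation software on our own contemporary hardware treats the spacebar.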