Jasper’s Music Player


After Jasper tells a white lie to Theo, Miriam, and Kee to get them to escape the advancing gang of Fishes, he returns indoors. To set a mood, he picks up a remote control and presses a button on it while pointing it at a display.


He watches a small transparent square that rests atop some equipment in a nook. (It’s that decimeter-square, purplish thing on the left of the image, just under the lampshade.) The display initially shows an album queue, with thumbnails of the album covers and two bright words, unreadably small. In response to his button press, the thumbnail for Franco Battiato’s album Fleurs slides from the right to the left. A full song list for the album appears beneath the thumbnail. Then track two, the cover of “Ruby Tuesday,” begins to play. A small thumbnail appears to the right of the album cover, featuring some white text on a dark background and a cycling, animated border. Jasper puts the remote control down, picks up the Quietus box, and walks over to Janice. *sniff*

This small bit of speculative consumer electronics gets around 17 seconds of screen time, but we see enough to consider the design.

Scenery display

Jennifer is amazed to find a window-sized video display in the future McFly house. When Lorraine arrives at the home, she picks up a remote to change the display. We don’t see it up close, but it looks like she presses a single button to cycle the scene from a sculpted garden through a beach sunset, a cityscape, and a windswept mountaintop. It’s a simple interface, though perhaps more work than necessary.

We don’t know how many scenes are available, but having to click one button to cycle through all of them could get very frustrating if there are more than, say, three. Adding a selection ring around the button would let the display go from the current scene to a menu from which the next scene could be picked directly.
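A minimal sketch of the two interaction models, in Python. The scene names come from the scenes described above; the classes and their behavior are my own illustration, not anything shown in the film.

```python
# Two ways to pick a scene on a one-remote scenery display.
# Scene list is taken from the scenes described in the post.

SCENES = ["sculpted garden", "beach sunset", "cityscape", "windswept mountaintop"]

class CycleButton:
    """The film's model: one button, each press advances to the next scene."""
    def __init__(self, scenes):
        self.scenes = scenes
        self.index = 0  # start on the first scene

    def press(self):
        # Wrap around at the end of the list.
        self.index = (self.index + 1) % len(self.scenes)
        return self.scenes[self.index]

class SelectionRing:
    """The proposed model: a ring around the button jumps straight to any scene."""
    def __init__(self, scenes):
        self.scenes = scenes

    def select(self, index):
        return self.scenes[index % len(self.scenes)]
```

The difference is worst-case effort: with the cycle button, reaching the scene just *before* the current one takes a full lap of presses, while the ring makes every scene one action away.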


The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube that is described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.

The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.


Fury walks past the dais they erected just because.

The housing & dais

The harness consists of a large circular housing that holds the cube and exposes one face of it towards a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais there at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person as it seems they did, then the whole destroying swaths of New York City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design. Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and holding it at a particular orientation, which plays into the other component of the harness—the monitor.


Precrime forearm-comm


Though most everyone in the audience left Minority Report with the precrime scrubber interface burned into their minds (see Chapter 5 of the book for more on that interface), the film was loaded with lots of other interfaces to consider, not the least of which were the wearable devices.

Precrime forearm devices

These devices are worn when Anderton is in his field uniform while on duty, and are built into the material across the left forearm. On the anterior side just at the wrist is a microphone for communications with dispatch and other officers. By simply raising that side of his forearm near his mouth, Anderton opens the channel for communication. (See the image above.)


There is also a basic circular display in the middle of the posterior left forearm that shows a countdown for the current mission: the time remaining before the predicted crime should take place. The text is large white characters against a dark background. Although the translucency makes the display compete with the noisy background of the watch (what is that in there, a Joule heating coil?), the jump-cut transitions of the seconds ticking by command the user’s visual attention.

On the anterior forearm there are two visual output devices: a rectangular perpetrator-information (and general-purpose?) display and an amber-colored circular one we never see up close. In the beginning of the film Anderton has a man pinned to the ground and scans his eyes with a handheld Eyedentiscan device. Through retinal biometrics, the pre-offender’s identity is confirmed and sent to the rectangular display, where Anderton can confirm that the man is a citizen named Howard Marks.

Wearable analysis

Checking these devices against the criteria established in the combadge writeup, they fare well. This is partially because they build on a century of product evolution for the wristwatch.

They are sartorial, bearing displays that lie flat against the skin, connected to soft parts that hold them in place.

They are social, being in a location other people are used to seeing similar technology.

They are easy to access and use for being along the forearm. Placing different kinds of information at different spots of the body means the officer can count on body memory to access particular data, e.g., perp info is anterior middle forearm. That saves him the cognitive load of managing modes on the device.

The display size for this rectangle is smallish considering the amount of data being displayed, but being on the forearm means that Anderton can adjust its apparent size by bringing it closer or farther from his face. (Though we see no evidence of this in the film, it would be cool if the amount of information changed based on distance-to-the-observer’s face. Writing that distanceFromFace() algorithm might be tricky though.)
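As a thought experiment, here is one way the distance-to-detail idea could work, sketched in Python. Everything here is invented for illustration: the film shows no such behavior, and the thresholds, field names, and the assumption of a proximity sensor standing in for the tricky `distanceFromFace()` estimate are all mine.

```python
# Hypothetical sketch: vary how much the forearm display shows based on
# how far it sits from the wearer's face. Thresholds and field names
# are invented; nothing like this appears in the film.

def fields_to_show(distance_from_face_m: float) -> list:
    """Pick which fields of a perp record to render at a given distance.

    Nearer to the face, the same physical display subtends a larger
    visual angle, so more detail stays legible.
    """
    if distance_from_face_m < 0.25:
        return ["name", "status", "priors"]  # full record held up close
    if distance_from_face_m < 0.5:
        return ["name", "status"]            # mid-range summary
    return ["name"]                          # glanceable at arm's length
```

The hard part the post alludes to is the input, not this lookup: estimating face-to-display distance would need a camera or proximity sensor on the sleeve, which is exactly why writing `distanceFromFace()` might be tricky.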

There might be some question about accidental activation, since Anderton could be shooting the breeze with his buddies while scratching his nose and mistakenly send a dirty joke to a dispatcher, but this seems like an unlikely and uncommon enough occurrence to simply not worry about it.

Using voice as the input is cinemagenic, but especially in his line of work a subvocalization input would keep him more quiet—and therefore safer—in the field. Still, voice inputs are fast and intuitive, making for fairly apposite I/O. Ideally he might have some haptic augmentation of the countdown, and audio augmentation of the info so Anderton wouldn’t have to pull his arm and attention away from the perpetrator, but as long as the information is glanceable and Anderton is merely confirming known data (rather than interpreting new information), recognition is a fast enough cognitive process that this isn’t too much of a problem.

All in all, not bad for a “throwaway” wearable technology.



At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color-coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.

Using the small screens and unlabeled arrays of red and yellow lit buttons situated on an angled panel in front of them, the seated Sandmen can send a call out to catch runners, listen to any spoken communications, and respond with text and images.


*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.

But OK, if it’s really that limited of an Übercomputer and can only focus on whatever is occupying it at the moment, at least make the controls usable by people. Let’s do the hard work of reducing the total number of controls, so they can be clustered all within easy reach rather than spread out so you have to move around just to operate them all. Or use your feet or whatever. Differentiate the controls so they are easy to tell apart by sight and touch rather than this undifferentiated mess. Let’s take out a paint pen and actually label the buttons. Do…do something.


This display could use some rethinking as well. It’s nice that it’s overhead, so that dispatch can be thinking about field strategy rather than ground tactics. But if that’s the case, it could use some design help and some strategic information. How about downplaying the saturation on the things that don’t matter that much, like walls and plants? Then the Sandmen can focus more on the interplay of the runner and his assailants. Next you could augment the display with information about the runner, and perhaps a best-guess prediction of where they’re likely to run, maybe the health of individuals, or the amount of ammunition they have.
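The saturation idea is simple enough to sketch. Below is a minimal Python illustration: each map element gets a tactical-relevance weight, and its display color keeps full saturation only if it matters. The element classes and weights are invented for this example; the film’s dispatch screen does nothing of the sort.

```python
# Sketch of "downplay what doesn't matter": scale each map element's
# color saturation by its tactical relevance. Weights are invented.
import colorsys

RELEVANCE = {
    "runner": 1.0,        # the point of the whole display
    "sandman": 1.0,
    "weapons_fire": 1.0,
    "wall": 0.2,          # context only
    "plant": 0.1,
}

def display_color(element: str, rgb: tuple) -> tuple:
    """Return rgb (0..1 floats) with saturation scaled by relevance."""
    h, l, s = colorsys.rgb_to_hls(*rgb)
    s *= RELEVANCE.get(element, 0.5)  # unknown elements get middling weight
    return colorsys.hls_to_rgb(h, l, s)
```

A runner drawn in pure red stays pure red, while a plant drawn in the same red washes out toward gray, so the eye lands on the actors instead of the architecture.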

Which makes me realize that more than anything, this screen could use the hand of a real-time strategy game user interface designer, because that’s what they’re doing. The Sandmen are playing a deadly, deadly video game right here in this room, and they’re using a crappy interface to try and win it.

Mission Briefing

Once the Prometheus crew has been fully revived from their hypersleep, they gather in a large gymnasium to learn the details of their mission from a prerecorded volumetric projection. To initiate the display, David taps the surface of a small tablet-sized handheld device six times, and looks up. A prerecorded VP of Peter Weyland appears and introduces the scientists Shaw and Holloway.

This display does not appear to be interactive. Weyland does mention and gesture toward Shaw and Holloway in the audience, but they could have easily been in assigned seats.

Cue Rubik’s Space Cube

After his introduction, Holloway places an object on the floor that looks like a silver Rubik’s Cube with a depressed black button in the center-top square.


He presses a middle-edge button on the top, and the cube glows and sings a note. Then a glowing yellow “person” icon appears at the place he touched, confirming his identity and that it’s ready to go.

He then presses an adjacent corner button. Another glowing-yellow icon appears underneath his thumb, this one a triangle-within-a-triangle, and a small projection grows from the side. Finally, by pressing the black button, all of the squares on top open by hinged lids, and the portable projection begins. A row of 7 (or 8?) “blue-box” style volumetric projections appear, showing their 3D contents with continuous, slight rotations.

Gestural control of the display

After describing the contents of each of the boxes, he taps the air toward either end of the row (a sparkle sound confirms the gesture) and brings his middle fingers together into a prayer position. In response, the boxes slide to the center as a stack.

He then twists his hands in opposite directions, keeping the fingerpads of his middle fingers in contact. As he does this, the stack merges.


Then a forefinger tap summons an overlay that highlights a star pattern on the first plate. A middle finger swipe to the left moves the plate and its overlay off to the left. The next plate automatically highlights its star pattern, and he swipes it away. Next, with no apparent interaction, the plate dissolves in a top-down disintegration-wind effect, leaving only the VP spheres that illustrate the star pattern. These grow larger.

Holloway taps the topmost of these spheres, and the VP zooms through interstellar space to reveal an indistinct celestial sphere. He then taps the air again (nothing in particular is beneath his finger) and the display zooms to a star. Another tap zooms to a VP of LV-223.



After a beat of about 9 seconds, the presentation ends, and the VP of LV-223 collapses back into its floor cube.

Evaluating the gestures

In Chapter 5 of Make It So we list the seven pidgin gestures that Hollywood has evolved. The gestures seen in the Mission Briefing confirm two of these: Push to Move and Point to Select, but otherwise they seem idiosyncratic, not matching other gestures seen in the survey.

That said, the gestures seem sensible. On tapping the “bookends” of the blue boxes, Holloway’s finger pads come to represent the extents of the selection, so bringing them together is a reasonable gesture to indicate stacking. The twist gesture seems to lock the boxes in place, to break the connection between them and his fingertips. This twist gesture turns his hand like a key in a lock, so has a physical analogue.

It’s confusing that a tap would perform four different actions (highlight star patterns in the blue boxes, zoom to the celestial sphere, zoom to the star, zoom to LV-223), but there is no indication that this is a platform for manipulating VPs so much as presentation software. With this in mind, any gesture could be arbitrarily assigned to simply “advance the slide.”
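That reading can be sketched as a tiny state machine in Python: the briefing is a fixed sequence of steps, and a recognized tap just moves the cursor forward regardless of what is under the finger. The step names paraphrase the moments described above; the class itself is my own illustration.

```python
# Sketch of the "any tap just advances the slide" reading of the briefing.
# Step names paraphrase the presentation beats described in the post.

STEPS = [
    "highlight star pattern, plate 1",
    "highlight star pattern, plate 2",
    "zoom to celestial sphere",
    "zoom to star",
    "zoom to LV-223",
]

class Briefing:
    def __init__(self, steps):
        self.steps = steps
        self.cursor = -1  # nothing shown yet

    def tap(self) -> str:
        """Advance one step; hold on the final step once reached."""
        self.cursor = min(self.cursor + 1, len(self.steps) - 1)
        return self.steps[self.cursor]
```

Under this model the same gesture legitimately “means” four different things, because the meaning lives in the presentation’s state, not in the gesture.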