Routing Board

When the two AIs, Colossus and Guardian, are disconnected from each other, they ignore the spirit of the human intervention and try to reconnect on their own. We see the humans monitoring Colossus’ progress on a big board in the U.S. situation room. It shows a translucent projection map of the globe with white dots representing data centers and red icons representing missiles. Beneath it, glowing arced lines illustrate the connection routes Colossus is currently testing. When Colossus finds that a segment is ineffective, that line goes dark, and another segment extending from the same node illuminates.

For a smaller file size, the animated gif has been stilled between state changes, but the timing is as close as possible to what is seen in the film.

Forbin explains to the President, “It’s trying to find an alternate route.”

A first in sci-fi: Routing display 🏆

First, props to Colossus: The Forbin Project for being the first show in the survey to display something like a routing board, that is, a network of nodes through which connections are visible, variable, and important to stakeholders.

Paul Baran and Donald Davies had published their notion of a network that could dynamically route information around partial destruction of the network, in real time, in the early 1960s, and this packet switching had been established as part of the ARPANET by the late 1960s, so Colossus was visualizing cutting-edge tech of its time.

This may even be the first depiction of a routing display in all of screen sci-fi or even cinema, though I don’t have a historical perspective on other genres, like the spy genre, which is another place you might expect to see something like this. As always, if you know of an earlier one, let me know so I can keep this record up to date and honest.

A nice bit: curvy lines

Should the lines be straight or curvy? From Colossus’ point of view, the network is a simple graph; straight lines between its nodes would suffice. But from the humans’ point of view, the literal shape of the transmission lines is important, in case they need to scramble teams to a location to manually cut the lines. Presuming these arcs mean that (and aren’t just the way neon in a prop happens to bend), the arcs are the right display. So this is good.

But, it breaks some world logic

The board presents some challenges with the logic of what’s happening in the story. If Colossus exists as a node in a network, and its managers want to cut it off from communication along that network, where is the most efficient place to “cut” communications? It is not at many points along the network. It is at the source.

Imagine painting one knot in a fishing net red and another one green. If you were trying to ensure that none of the strings that touch the red knot could trace a line to the green one, do you trim a bunch of strings in the middle, or do you cut the few that connect directly to the knot? Presuming that it’s as easy to cut any one segment as any other, the fewer the cuts, the better. In this case, fewer also means more secure.
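If you want to convince yourself, here is a minimal sketch of the argument in Python, using a made-up toy network rather than the film’s actual topology: cutting a couple of mid-network segments can leave a route intact, while cutting the few segments that touch the source severs it for good.

```python
# A minimal sketch of the "cut at the source" argument, using a toy
# network (not the film's actual topology). Reachability is checked
# with a plain breadth-first search.
from collections import deque

def reachable(adjacency, start, goal):
    """Return True if goal can be reached from start."""
    seen, frontier = {start}, deque([start])
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for neighbor in adjacency.get(node, ()):
            if neighbor not in seen:
                seen.add(neighbor)
                frontier.append(neighbor)
    return False

def cut(adjacency, edges_to_cut):
    """Return a copy of the graph with the given undirected edges removed."""
    trimmed = {node: set(neighbors) for node, neighbors in adjacency.items()}
    for a, b in edges_to_cut:
        trimmed[a].discard(b)
        trimmed[b].discard(a)
    return trimmed

# Toy network: COLOSSUS and GUARDIAN at either end of a small mesh.
net = {
    "COLOSSUS": {"A", "B"},
    "A": {"COLOSSUS", "C", "D"},
    "B": {"COLOSSUS", "D"},
    "C": {"A", "GUARDIAN"},
    "D": {"A", "B", "GUARDIAN"},
    "GUARDIAN": {"C", "D"},
}

# Cutting two mid-network segments still leaves a route...
print(reachable(cut(net, [("A", "C"), ("A", "D")]), "COLOSSUS", "GUARDIAN"))  # True (via B-D)
# ...but cutting the two segments at the source severs it completely.
print(reachable(cut(net, [("COLOSSUS", "A"), ("COLOSSUS", "B")]), "COLOSSUS", "GUARDIAN"))  # False
```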

The network in Colossus looks to be about 40 nodes, so it’s less complicated than the fishing net. Still, it raises the question: what did the computer scientists in Colossus actually do to sever communications? Three lines disappear after they cut communications, but even with those lines disabled, the rest of the network still exists. The display just makes no sense.

Before, happy / After, I will cut a Prez

Per the logic above, they would cut it off at its source. But the board shows it reaching out across the globe. You might think maybe they just cut Guardian off, leaving Colossus to flail around the network, but that’s not explicitly said in the communications between the Americans and the Russians, and the U.S. President is genuinely concerned about the AIs at this point, not trying to pull one over on the “pinkos.” So there’s not a satisfying answer.

It’s true that at this point in the story, the humans are still letting Colossus do its primary job, so it may be looking at every alternate communication network to which it has access: telephony, radio, television, and telegraph. It would be ringing every “phone” it thought Guardian might pick up, and leaving messages behind for possible asynchronous communications. I wish a script doctor had added in a line or three to clarify this.

FORBIN
We’ve cut off its direct lines to Guardian. Now it’s trying to find an indirect line. We’re confident there isn’t one, but the trouble will come when Colossus realizes it, too.

Too slow

Another thing that seems troubling is the slow speed of the shifting route. The segments stay illuminated for nearly a full second at a time. Even with 1960s copper undersea cables and electromechanical switches, electrical signals should not take that long. Telephone networks had largely moved from manual switchboards to automatic switching decades earlier, so it’s not as if Colossus is waiting on a human operator.

You’re too slow!
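To put rough numbers on “should not take that long,” here is a back-of-envelope sketch. It assumes a half-circumference hop and a typical velocity factor for copper coax, and it ignores per-node switching delays, so take it as an order-of-magnitude check only.

```python
# Back-of-envelope check on "should not take that long": even a
# half-circumference hop over 1960s copper at roughly two-thirds the
# speed of light arrives in a fraction of a second. (Rough figures,
# ignoring switching delays at each node.)
EARTH_HALF_CIRCUMFERENCE_KM = 20_000          # roughly antipode-to-antipode
SIGNAL_SPEED_KM_PER_S = 0.66 * 300_000        # typical velocity factor for coax

one_way_seconds = EARTH_HALF_CIRCUMFERENCE_KM / SIGNAL_SPEED_KM_PER_S
print(f"{one_way_seconds:.2f} s one way")     # about 0.1 s, not ~1 s per segment
```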

Even if it was just scribbling its phone number on each network node and the words “CALL ME” in computerese, it should go much faster than this. Cinematically, you can’t go too fast or the sense of anticipation and wonder is lost, but it would be better to have it zooming through a much more complicated network to buy time. It should feel just a little too fast to focus on—frenetic, even.

This screen gets 15 seconds of screen time, and at film’s 24 frames per second, showing one new node per frame is only 360 states to account for, a paltry sum compared to the number of possible paths it could test between two points across a 38-node graph.

Plus the speed would help underscore the frightening intelligence and capabilities of the thing. And yes, I understand that this would be a lot easier to pull off today with digital tools than it was with this analog prop.

Realistic-looking search strategies

Again, I know this was a neon, analog prop, but let’s just note that it’s not testing the network in anything that looks like a computery way. It even retraces some routes. A brute-force algorithm would just test every possibility sequentially, and in larger networks there are pathfinding algorithms optimized in different ways to find routes faster, but none of them look like this. They look more like what you see in the video below. (Hat tip to YouTuber gray utopia.)

This would need a lot of art direction and the aforementioned speed, but it would be more believable than what we see.
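For the curious, here is a minimal sketch of what a systematic search looks like in code: a breadth-first search over a made-up network that expands a frontier outward and never revisits a node, which is exactly what the prop’s meandering, retraced arcs are not doing.

```python
# A minimal sketch of a systematic search: breadth-first search expands a
# frontier outward from the source and never revisits a node, the opposite
# of the prop's meandering, retraced arcs. The network here is made up
# purely for illustration.
from collections import deque

def bfs_route(adjacency, start, goal):
    """Return the first route found from start to goal, or None."""
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        node = path[-1]
        if node == goal:
            return path
        for neighbor in adjacency[node]:
            if neighbor not in visited:        # never retrace
                visited.add(neighbor)
                frontier.append(path + [neighbor])
    return None

net = {
    "COLOSSUS": ["Denver", "Omaha"],
    "Denver": ["COLOSSUS", "Chicago"],
    "Omaha": ["COLOSSUS", "Chicago", "DC"],
    "Chicago": ["Denver", "Omaha", "DC"],
    "DC": ["Omaha", "Chicago", "GUARDIAN"],
    "GUARDIAN": ["DC"],
}
print(" -> ".join(bfs_route(net, "COLOSSUS", "GUARDIAN")))
# COLOSSUS -> Omaha -> DC -> GUARDIAN
```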

What’s the right projection?

Is this the right projection to use? Of course the most accurate representation of the earth is a globe, but it has many challenges in presenting a phenomenon that could happen anywhere in the world. Not the least of these is that it occludes about half of itself, a problem that is not well-solved by making it transparent. So, a projection it must be. There are many, many ways to transform a spherical surface into a 2D image, so the question becomes which projection and why.

The map uses what looks like a hand-drawn version of the Peirce quincuncial projection. (But n.b. none of the projection types I compared against it matched exactly, which is why I say it was hand-drawn.) Also, those longitude and latitude lines don’t make any sense; though again, it’s a prop. I like that it’s a non-standard projection, because screw Mercator, but still: why Peirce? Why at this angle?

Also, why place time zone clocks across the top as if they corresponded to the map in some meaningful way? Move those clocks.

I have no idea why the Peirce map would be the right choice here, when its principal virtue is that it can be tessellated. That’s kind of interesting if you’re scrolling and can’t dynamically re-project the coastlines. But I am pretty sure the Colossus map does not scroll. And if the map is meant to act as a quick visual reference, making it dynamic means time is wasted when users look to the map and have to orient themselves.

If this map were only for tracking issues relating to Colossus, it should be an azimuthal map, but not one centered over the north pole. The center should be the Colossus complex in Colorado. That might be right for a monitoring map in the Colossus Programming Office. This map is centered over the north pole, which certainly highlights the fact that the core concern of this system is the Cold War tension between Moscow and D.C. But when you consider that, it points out another failing.

Later in the film the map tracks missiles (not with projected paths, sadly, but with Mattel Classic Football style yellow rectangles). But missiles could conceivably come from places not on this map. What is this office to do with a ballistic-missile submarine off of the Baja peninsula, for example? Just wait until it makes its way on screen? That’s a failure. Which takes us to the crop.

Crop

The map isn’t just about missiles. Colossus can look anywhere on the planet to test network connections. (Even, nowadays, near-earth orbit and outer space.) Unless the entire network was contained within the area described on the map, it’s excluding potentially vital information. If Colossus routed itself through Mexico, South Africa, and Uzbekistan before finally reconnecting to Guardian, users would be flat out of luck using that map to determine the leak route. And I’m pretty sure all three had functioning telephone networks in the 1960s.

This needs a complete picture

Since the missiles and networks with which Colossus is concerned are potentially global, this should be a global map. Here I will offer my usual fanboy shout-outs to the Dymaxion and the Pacific-focused Waterman projections for showing connectedness and physical flow, but there would be no shame in showing the complete Peirce quincuncial. Just show the whole thing.

Maybe fill in some of the Pacific “wasted space” with a globe depiction turned to points of interest, or some other fuigetry. Which gives us a new comp, something like this.

I created this proof of concept manually. With more time, I would comp it up in Processing or Python and it would be even more convincing. (And might have reached London.)
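For what it’s worth, the start of such a comp is only a few lines of Python. The sketch below (with placeholder coordinates and node names, nothing from the film) projects lat/long positions through the azimuthal-equidistant math of the Colorado-centered variant suggested earlier.

```python
# A sketch of how such a comp could start in Python: project lat/lon
# node positions with an azimuthal equidistant projection centered on
# the (approximate) Colossus complex in Colorado. Coordinates and node
# names are placeholders, not anything from the film.
from math import radians, sin, cos, acos, sqrt

CENTER_LAT, CENTER_LON = 39.0, -105.5   # rough Colorado Rockies

def azimuthal_equidistant(lat, lon, lat0=CENTER_LAT, lon0=CENTER_LON):
    """Map (lat, lon) in degrees to planar (x, y), in units of Earth radii."""
    phi, lam = radians(lat), radians(lon)
    phi0, lam0 = radians(lat0), radians(lon0)
    cos_c = sin(phi0) * sin(phi) + cos(phi0) * cos(phi) * cos(lam - lam0)
    c = acos(max(-1.0, min(1.0, cos_c)))          # angular distance to center
    k = 1.0 if c == 0 else c / sin(c)             # distance-preserving scale
    x = k * cos(phi) * sin(lam - lam0)
    y = k * (cos(phi0) * sin(phi) - sin(phi0) * cos(phi) * cos(lam - lam0))
    return x, y

nodes = {"D.C.": (38.9, -77.0), "Moscow": (55.8, 37.6), "Guardian (placeholder)": (56.0, 60.6)}
for name, (lat, lon) in nodes.items():
    x, y = azimuthal_equidistant(lat, lon)
    print(f"{name:>24}: ({x:+.3f}, {y:+.3f}), {sqrt(x*x + y*y) * 6371:.0f} km from center")
```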

All told, this display was probably eye-opening for its original audience. Golly jeepers! This thing can draw upon resources around the globe! It has intent, and a method! And they must have cool technological maps in D.C.! But from our modern-day vantage point, it leaves a lot to be desired. If they ever remake the film, this would be a juicy thing to fully redesign.


The Dark Dimension mode (5 of 5)

We see a completely new mode for the Eye in the Dark Dimension. With a flourish of his right hand over his left forearm, a band of green lines begins orbiting his forearm just below his wrist. (Another orbits just below his elbow, just off-camera in the animated gif.) The band signals that Strange has set this point in time as a “save point,” like in a video game. From that point forward, when he dies, time resets and he is returned here, alive and well, though he and anyone else in the loop are aware that it happened.

Dark-Dimension-savepoint.gif

In the scene he’s confronting a hostile god-like creature on its own mystical turf, so he dies a lot.

DoctorStrange-disintegrate.png

An interesting moment happens when Strange is hopping from the blue-ringed planetoid to the one close to the giant Dormammu face. He glances down at his wrist, making sure that his savepoint was set. It’s a nice tell, letting us know that Strange is nervous about facing the giant, Galactus-sized primordial evil that is Dormammu. This nervousness ties right into the analysis of this display. If we changed the design, we could put him more at ease when using this life-critical interface.

DoctorStrange-thisthingon.png

Initiating gesture

The initiating gesture doesn’t read as “set a savepoint.” That isn’t a problem in this scene, but if the gesture carried some semantic meaning, it would be easier for Strange to recall, perform correctly, and feel assured that he’d done the right thing. Maybe if his wrist twist transitioned from splayed fingers to pointing with his index finger at his wrist…ok, that’s a little too on the nose, so maybe…toward the ground, it would help symbolize the here & now that is the savepoint.

I have questions about the extents of the time loop effect. Is it the whole Dark Dimension? Is it also Earth? Is it the Universe? Is it just a sphere, like the other modes of the Eye? How does he set these? There’s not enough information in the movie to backworld this, but unless the answer is “it affects everything,” there seem to be some variables missing from the initiating gesture.

Savepoint-active signal

But where the initiating gesture doesn’t appear to be a problem in the scene, the wrist-glance indicates that the display is. Note that, other than being on the left forearm instead of the right, the bands look identical to the ones in the Tibet and Hong Kong modes. (Compare the Tibet screenshot below.) If Strange is relying on the display to ensure that his savepoint was set, having it look identical is not as helpful as a unique visual would be. “Wait,” he might think, “am I in the right mode, here?”

Eye-of-Agamoto10.png

In a redesign, I would select an animated display that was not a loop, but an indication that time was passing. It can’t be as literal as a clock of course. But something that used animation to suggest time was progressing linearly from a point. Maybe something like the binary clock from Mission to Mars (see below), rendered in the graphic language of the Eye. Maybe make it base-3 to seem not so technological.

binary_clock_10fps.gif
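To be concrete about the base-3 idea, here is a rough sketch, in Python, of the logic that could drive such a display. Everything about the rendering is assumed, and the digit width is arbitrary.

```python
# A rough sketch of the non-looping "time is passing" signal: elapsed
# time since the savepoint rendered as base-3 digits, each digit driving
# one glyph of the band. Purely illustrative; the glyph rendering itself
# would live in the Eye's visual language.
def base3_digits(value, width=6):
    """Return `width` base-3 digits of `value`, most significant first."""
    digits = []
    for _ in range(width):
        digits.append(value % 3)
        value //= 3
    return digits[::-1]

# Each second since the savepoint produces a new, never-repeating pattern
# (until the width overflows), so a glance confirms the mode is live.
for elapsed_seconds in range(5):
    print(elapsed_seconds, base3_digits(elapsed_seconds))
```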

A display that is still on invocation and becomes animated upon initialization would mean that all he has to do is glance to confirm the unique display is in motion. “Yes, it’s working. I’m in the Groundhog Day mode, and the savepoint is set.”

Tibet mode: Display for interestingness (2 of 5)

Without a display, the Eye asks Strange to do all the work of exploring the range of values available through it to discover what is of interest. (I am constantly surprised at how many interfaces in the real world repeat this mistake.) We can help by doing a bit of “pre-processing” of the information, providing Strange a key to what he will find, and where, along with ways to return to exactly where interesting things happen.

watch.png
The watch from the film, for reasons that will shortly become clear.

To do this, we’ll add a ring outside the saucer that will stay put while the saucer rotates and contain this display. Since we need to call this ring something, and we’re in the domain of time, let’s crib some vocabulary from clocks. The fixed ring of a clock that contains the numbers and minute graduations is called a chapter ring. So we’ll use that for our ring, too.

chapter-rings.png

What chapter ring content would most help Strange?

Good: A time-focused chapter ring

Both the controlled-extents and the auto-extents shown in the prior post presume a smooth display of time. But the tome and the speculative meteorite simply don’t change much over the course of their existence. I mean, of course they do, with the book being pulled on and off shelves and pages flipped, and the meteorite arcing around the sun in the cold vacuum of space for countless millennia, but the Eye only displays the material changes to an object, not its position. So as far as the Eye is concerned, the meteoroid formed, stayed the same for most of its existence, then had a burst of activity as it hit Earth’s atmosphere and slammed into the planet.

A continuous display of the book shows little of interest for most of its existence, with a few key moments of change interspersed. To illustrate this, let’s make up some change events for the tome.

Eye-of-Agamotto-event-view.png

Now let’s place those along an imaginary timeline. Given the Doctor Strange storyline, Page Torn would more likely be right next to Now, but making this change helps us explore a common boredom problem (see below). OK. Placing those events along a timeline…

Eye-of-Agamotto-time-view.png

And then, wrapping that timeline around the saucer. Much more art direction would have to happen to make this look thematically like the rest of the MCU magic geometries, but following is a conceptual diagram of how it might look.

Eye-of-Agamoto-dial.png
With time flowing smoothly, though at different speeds for the past and the future.

On the outside of the saucer is the chapter ring with the salient moments of change called out with icons (and labels). At a glance Strange would know where the fruitful moments of change occur. He can see he only has to turn his hand about 5° to the left to get to the spot where the page was ripped out.

Already easier on him, right? Some things to note.

  1. The chapter ring must stay fixed while the saucer rotates, or it can’t work as a reference. Imagine how useless a clock would be if its chapter ring spun in concert with any of its hands. The center can still move with his palm as the saucer does.
  2. The graduations to the left and right of “now” are of a different density, helping Strange understand that past and future are mapped differently to accommodate the limits of his wrist and the differing time frames described. (A sketch of this asymmetric mapping follows the list.)
  3. When several events occur close together in time, they could be stacked.
  4. Having the graduations evenly spaced across the range helps answer roughly when each change happened relative to the whole.
  5. The tome in front of him should automatically flip to spreads where scrubbed changes occur, so Strange doesn’t have to hunt for them. Without this feature, if Strange were trying to figure out what changed, he would have to flip through the whole book with each degree of twist to see if anything unknown had changed.
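To make the asymmetric past/future mapping in point 2 concrete, here is a minimal sketch in Python. The spans and arc sizes are placeholder assumptions, not anything established in the film; the point is just that “now” sits at the top and each side gets its own compression.

```python
# A minimal sketch of the time-focused chapter ring's asymmetric mapping,
# with "now" at the top (0 degrees), the past sweeping counterclockwise and
# the future clockwise, each compressed to fit its own wrist-comfortable arc.
# The spans and arc sizes are placeholder assumptions, not from the film.
PAST_SPAN_YEARS = 800.0        # how far back the tome's history goes (assumed)
FUTURE_SPAN_YEARS = 10.0       # how far forward the Eye will speculate (assumed)
PAST_ARC_DEGREES = 160.0       # arc allotted to the past, left of "now"
FUTURE_ARC_DEGREES = 60.0      # arc allotted to the future, right of "now"

def ring_angle(years_from_now: float) -> float:
    """Map a moment (negative = past, positive = future) to degrees from 'now'."""
    if years_from_now < 0:
        fraction = min(-years_from_now / PAST_SPAN_YEARS, 1.0)
        return -fraction * PAST_ARC_DEGREES        # counterclockwise into the past
    fraction = min(years_from_now / FUTURE_SPAN_YEARS, 1.0)
    return fraction * FUTURE_ARC_DEGREES           # clockwise into the future

events = {"book made": -800, "page torn": -25, "now": 0, "a year out": 1}
for label, years in events.items():
    print(f"{label:>10}: {ring_angle(years):+7.1f} degrees")
```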

Better: A changes-focused chapter ring

If, as in this scene, the primary task of using the Eye is to look for changes, a smooth display of time on the chapter ring is less useful than a smooth display of change. (Strange doesn’t really care when the pages were torn. He just wants to see the state of the tome before that moment.) Distribute the changes evenly around the chapter ring, and you get something like the following.

Eye-of-Agamoto-event.png

This display optimizes for easy access to the major states of the book. The now point is problematic, since the even distribution puts it at the three o’clock position rather than at noon, but what we buy in exchange is that the exact same precision is required to access any of the changes and compare them. There’s no extra precision needed to scrub between the “book made” and “first stuff added” moments. The act of comparison is made simpler. Additionally, the logarithmic time graduations help him scrub detail near known changes and quickly bypass the great stretches of time when nothing happens. By orienting the display around the changes, the interesting bits are easier to explore, and the boring bits are easier to bypass.

In my comp, more white area equals more time. Unfortunately, this visual design kind of draws attention to the empty stretches of time rather than the moments of change, so it would need more attention; see the note above about needing more art direction.
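Here, too, a minimal sketch may help. The one below spaces made-up change events evenly around the ring and eases time between neighboring events (slow near each change, fast across the empty middle), which is one way to realize the “logarithmic graduations” idea.

```python
# A minimal sketch of the changes-focused mapping: change events are spaced
# evenly around the ring regardless of when they happened, and between two
# neighboring events the dial eases through time (slow near each change, fast
# across the empty middle). Event times are made-up placeholders.
events_years_ago = [800, 640, 320, 25, 0]   # oldest ... "now" (assumed values)

def angle_for_event(index: int, count: int, arc: float = 360.0) -> float:
    """Even spacing: every event gets the same share of the ring."""
    return index * arc / count

def time_between(t_older: float, t_newer: float, fraction: float) -> float:
    """Ease between two events: small turns near a change cover little time;
    the long empty stretch in the middle is crossed quickly."""
    eased = fraction * fraction * (3 - 2 * fraction)   # smoothstep easing
    return t_older - (t_older - t_newer) * eased

for i, years in enumerate(events_years_ago):
    print(f"event {i}: {angle_for_event(i, len(events_years_ago)):5.1f} deg, {years} years ago")

for f in (0.0, 0.1, 0.5, 0.9, 1.0):
    print(f"{f:.1f} of the segment -> {time_between(320, 25, f):6.1f} years ago")
```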

So…the smooth-time and distributed-events displays each have advantages over the other, but for the Tibet scene, in which he’s looking to restore the lost pages of the tome, the events-focused chapter ring gets Strange to the interesting parts more confidently.


Note that all the events Strange might be scrubbing through are in the past, but that’s not all the Eye can do in the Tibet mode. So next up, let’s talk a little about the future.

Jasper’s Music Player

ChildrenofMen-player03

After Jasper tells a white lie to Theo, Miriam, and Kee to get them to escape the advancing gang of Fishes, he returns indoors. To set a mood, he picks up a remote control and presses a button on it while pointing it at a display.

ChildrenofMen-player02

He watches a small transparent square that rests atop some things in a nook. (It’s that decimeter-square, purplish thing on the left of the image, just under the lampshade.) The display initially shows an album queue, with thumbnails of the album covers and two bright words, unreadably small. In response to his button press, the thumbnail for Franco Battiato’s album FLEURs slides from the right to the left. A full song list for the album appears beneath the thumbnail. Then track two, the cover of “Ruby Tuesday,” begins to play. A small thumbnail appears to the right of the album cover, featuring some white text on a dark background and a cycling, animated border. Jasper puts the remote control down, picks up the Quietus box, and walks over to Janice. *sniff*

This small bit of speculative consumer electronics gets around 17 seconds of screen time, but we see enough to consider the design. 

Persistent display

One very nice thing about it is that it is persistently visible. As Marshall McLuhan famously noted, we are simply not equipped with earlids. This means that when music is playing in a space, you can’t really just turn away from it to stop listening. You’ll still hear it. In UX parlance, sound is non-modal.

Yet with digital music players, the visual displays that tell you what’s being played, or the related interfaces that tell you what you can do with the music, are often hidden behind modes. Want to know what that song you can’t stop hearing is? Find your device, wake it up, enter a password, find the app, and even then you may have to root around in the software to find what you’re looking for.

But a persistent object means that non-modal sound is accompanied by (mostly) non-modal visuals. This little box is always somewhere, glowing, and telling you what’s playing, what just played, and what’s next.

Remote control

Finding the remote is a different problem, of course, and if your household is like my household, it is a thing which seems to want to be lost. To keep the non-modality of the sound matched by the controls, it would be better to have the device or the environment know when Jasper is looking at the display, and enable loose gestural or voice commands.

Imagine the scene if he grabs the Quietus box, looks up to the display, says, “Play…”, pauses while he considers his options, and then says “…‘Ruby Tuesday’…the Battiato one.” We would have known that his selection has deep personal meaning. If Cuarón wanted to convey that this moment had been planned for a while, Jasper could even have said, “Play her goodbye song.”

Visual layout

The visual design of the display is, like most of the technology in the film, meant to be a peripheral thing, accepting attention but not asking for it. In this sense it works. The text is so small the audience is not tempted to read it. The thumbnails are so small they would only refresh your memory if you already knew the music. But if this were a real product meant to live in the home, I would redesign the display to be usable at a 3–6 meter distance, which would require vastly reducing the number of elements, increasing their size, and perhaps overlaying text on image.
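As a sanity check on that redesign, a quick back-of-envelope: assuming text needs to subtend roughly 0.4° of visual angle to be comfortably readable (a common ergonomic rule of thumb, not a spec), the across-the-room distance demands characters roughly ten times taller than arm’s-length ones.

```python
# Back-of-envelope for the 3-6 meter "across the room" redesign: how tall
# must a character be to subtend a comfortably readable visual angle?
# The 0.4-degree figure is a rough ergonomic rule of thumb, not a spec.
from math import tan, radians

def min_char_height_mm(viewing_distance_m: float, visual_angle_deg: float = 0.4) -> float:
    """Character height needed to subtend the given visual angle."""
    return 2 * viewing_distance_m * 1000 * tan(radians(visual_angle_deg) / 2)

for d in (0.5, 3.0, 6.0):
    print(f"{d:>4} m: {min_char_height_mm(d):5.1f} mm tall characters")
# ~3.5 mm works at arm's length, but the couch needs ~21-42 mm: far fewer,
# far larger elements than the prop shows.
```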


Scenery display

BttF_096

Jennifer is amazed to find a window-sized video display in the future McFly house. When Lorraine arrives at the home, she picks up a remote to change the display. We don’t see it up close, but it looks like she presses a single button to cycle the scene from a sculpted garden to a beach sunset, a cityscape, and a windswept mountaintop. It’s a simple interface, though perhaps more work than necessary.

We don’t know how many scenes are available, but having to click one button to cycle through all of them could get very frustrating if there are more than, say, three. Adding a selection ring around the button would let the display go from the current scene to a menu from which the next one could be selected from among the options.

J.D.E.M. LEVEL 5

The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube that is described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.

The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.

Avengers-cubemonitoring-07

Fury walks past the dais they erected just because.

The housing & dais

The harness consists of a large circular housing that holds the cube and exposes one face of it towards a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais there at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person as it seems they did, then the whole destroying swaths of New York City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design. Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and holding it at a particular orientation, which plays into the other component of the harness—the monitor.

Avengers-cubemonitoring-03

Vika’s Desktop

Oblivion-Desktop-Overview

As Jack begins his preflight check in the Bubbleship, Vika touches the center of the glass surface to power up the desktop that keeps her in contact with Sally on the TET and allows her to assist and monitor Jack as he repairs the drones on the ground.

The interface components

Oblivion-Desktop-Overview-000


Precrime forearm-comm

MinRep-068

Though most everyone in the audience left Minority Report with the precrime scrubber interface burned into their minds (see Chapter 5 of the book for more on that interface), the film was loaded with lots of other interfaces to consider, not the least of which were the wearable devices.

Precrime forearm devices

These devices are worn when Anderton is in his field uniform while on duty, and are built into the material across the left forearm. On the anterior side just at the wrist is a microphone for communications with dispatch and other officers. By simply raising that side of his forearm near his mouth, Anderton opens the channel for communication. (See the image above.)

MinRep-080

There is also a basic circular display in the middle of the posterior left forearm that shows a countdown for the current mission: the time remaining before the predicted crime is to take place. The text is large white characters against a dark background. Although the translucency makes for some visual challenge against the noisy background of the watch (what is that in there, a Joule heating coil?), the jump-cut transitions of the seconds ticking by command the user’s visual attention.

On the anterior forearm there are two visual output devices: a rectangular one for perpetrator information (and general display?) and an amber-colored circular one we never see up close. In the beginning of the film Anderton has a man pinned to the ground and scans his eyes with a handheld Eyedentiscan device. Through retinal biometrics, the pre-offender’s identity is confirmed and sent to the rectangular display, where Anderton can confirm that the man is a citizen named Howard Marks.

Wearable analysis

Checking these devices against the criteria established in the combadge writeup, they fare well. This is partially because they build on a century of product evolution for the wristwatch.

They are sartorial, with displays that lie flat against the skin, connected to soft parts that hold them in place.

They are social, being in a location where other people are used to seeing similar technology.

They are easy to access and use, being along the forearm. Placing different kinds of information at different spots of the body means the officer can count on body memory to access particular data, e.g., perp info is on the middle anterior forearm. That saves him the cognitive load of managing modes on the device.

The display size for this rectangle is smallish considering the amount of data being displayed, but being on the forearm means that Anderton can adjust its apparent size by bringing it closer or farther from his face. (Though we see no evidence of this in the film, it would be cool if the amount of information changed based on distance-to-the-observer’s face. Writing that distanceFromFace() algorithm might be tricky though.)
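Riffing on that parenthetical, here is a rough sketch of how display density could key off a (hypothetical) distance reading; the tiers, thresholds, and record fields are all invented for illustration.

```python
# A sketch of distance-dependent display density, keyed off a hypothetical
# distance-from-face reading. The tiers, thresholds, and field names are
# all invented for illustration, not anything shown in the film.
from dataclasses import dataclass

@dataclass
class PerpRecord:
    name: str
    crime: str
    countdown_s: int
    priors: list[str]

def fields_for_distance(record: PerpRecord, distance_from_face_m: float) -> list[str]:
    """Show less, larger content the farther the forearm is from the eyes."""
    if distance_from_face_m > 0.6:                 # arm extended: glance only
        return [record.name, f"{record.countdown_s}s"]
    if distance_from_face_m > 0.3:                 # mid-range: add the charge
        return [record.name, record.crime, f"{record.countdown_s}s"]
    return [record.name, record.crime,             # held close: full record
            f"{record.countdown_s}s", ", ".join(record.priors) or "no priors"]

marks = PerpRecord("Howard Marks", "double homicide", 38, [])
print(fields_for_distance(marks, 0.7))
print(fields_for_distance(marks, 0.2))
```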

There might be some question about accidental activation, since Anderton could be shooting the breeze with his buddies while scratching his nose and mistakenly send a dirty joke to a dispatcher, but this seems like an unlikely and uncommon enough occurrence to simply not worry about it.

Using voice as the input is cinemagenic, but especially in his line of work a subvocalization input would keep him quieter—and therefore safer—in the field. Still, voice inputs are fast and intuitive, making for fairly apposite I/O. Ideally he might have some haptic augmentation of the countdown and audio augmentation of the info, so Anderton wouldn’t have to pull his arm and attention away from the perpetrator; but as long as the information is glanceable and Anderton is merely confirming data (rather than absorbing new information), recognition is a fast enough cognitive process that this isn’t too much of a problem.

All in all, not bad for a “throwaway” wearable technology.

Dispatch

LOGANS_RUN_map_520

At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color-coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.

Using the small screens and unlabeled arrays of red and yellow lit buttons on the angled panel in front of them, the seated Sandmen can send out a call to catch runners, listen to any spoken communications, and respond with text and images.

LogansRun094

*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.

Mission Briefing

Once the Prometheus crew has been fully revived from their hypersleep, they gather in a large gymnasium to learn the details of their mission from a prerecorded volumetric projection. To initiate the display, David taps the surface of a small tablet-sized handheld device six times, and looks up. A prerecorded VP of Peter Weyland appears and introduces the scientists Shaw and Holloway.

This display does not appear to be interactive. Weyland does mention and gesture toward Shaw and Holloway in the audience, but they could have easily been in assigned seats.

Cue Rubik’s Space Cube

After his introduction, Holloway places an object on the floor that looks like a silver Rubik’s Cube with a depressed black button in the center-top square.

Prometheus-055

He presses a middle-edge button on the top, and the cube glows and sings a note. Then a glowing-yellow “person” icon appears at the place he touched, confirming his identity and that it’s ready to go.

He then presses an adjacent corner button. Another glowing-yellow icon appears underneath his thumb, this one a triangle-within-a-triangle, and a small projection grows from the side. Finally, when he presses the black button, all of the squares on top open on hinged lids, and the portable projection begins. A row of 7 (or 8?) “blue-box” style volumetric projections appears, showing their 3D contents with continuous, slight rotations.

Gestural control of the display

After describing the contents of each of the boxes, he taps the air toward either end of the row (there is a sparkle-sound to confirm the gesture) and brings his middle fingers together, as if in prayer. In response, the boxes slide to the center as a stack.

He then twists his hands in opposite directions, keeping the fingerpads of his middle fingers in contact. As he does this, the stack merges.

Prometheus-070

Then a forefinger tap summons an overlay that highlights a star pattern on the first plate. A middle finger swipe to the left moves the plate and its overlay off to the left. The next plate automatically highlights its star pattern, and he swipes it away. Next, with no apparent interaction, the plate dissolves in a top-down disintegration-wind effect, leaving only the VP spheres that illustrate the star pattern. These grow larger.

Holloway taps the topmost of these spheres, and the VP zooms through interstellar space to reveal an indistinct celestial sphere. He then taps the air again (nothing in particular is beneath his finger) and the display zooms to a star. Another tap zooms to a VP of LV-223.

Prometheus_VP-0030

Prometheus_VP-0031

After a beat of about 9 seconds, the presentation ends, and the VP of LV-223 collapses back into its floor cube.

Evaluating the gestures

In Chapter 5 of Make It So we list the seven pidgin gestures that Hollywood has evolved. The gestures seen in the Mission Briefing confirm two of these: Push to Move and Point to Select, but otherwise they seem idiosyncratic, not matching other gestures seen in the survey.

That said, the gestures seem sensible. On tapping the “bookends” of the blue boxes, Holloway’s finger pads come to represent the extents of the selection, so bringing them together is a reasonable gesture to indicate stacking. The twist gesture seems to lock the boxes in place, breaking the connection between them and his fingertips. It turns his hand like a key in a lock, so it has a physical analogue.

It’s confusing that a tap would perform four different actions (highlight star patterns in the blue boxes, zoom to the celestial sphere, zoom to a star, zoom to LV-223), but there is no indication that this is a platform for manipulating VPs so much as it is presentation software. With this in mind, he could arbitrarily assign any gesture to simply “advance the slide.”