The Cookie: Matt’s controls

When using the Cookie to train the AI, Matt has a portable translucent touchscreen with which he controls some of virtual Greta’s environment. (Sharp-eyed viewers of the show will note this translucent panel is the same one he uses at home in his revolting virtual-wingman hobby, but the interface is completely different.)

Black_Mirror_Cookie_18.png

The left side of the screen shows a hamburger menu, the Set Time control, a head, some gears, a star, and a bulleted list. (They’re unlabeled.) The main part of the screen is a scrolling stack of controls including Simulated Body, Control System, and Time Adjustment. Each has a large icon, a header with “Full screen” to the right, a subheader, and a time indicator. This could be redesigned to be much more compact and context-rich for expert users like Matt. It’s seen for maybe half a second, though, and it’s not the new, interesting thing, so we’ll skip it.

The right side of the screen has a stack of Smartelligence logos which are alternately used for confirmation and to put the interface to sleep.

Mute

When virtual Greta first freaks out about her circumstance and begins to scream in existential terror, Matt reaches to the panel and mutes her. (To put a fine point on it: He’s a charming monster.) In this mode she cannot make a sound, but can hear him just fine. We do not see the interface he uses to enact this. He uses it to assert conversational control over her. Later he reaches out to the same interface to unmute her.

The control he touches is the one on his panel with a head and some gears reversed out of it. That icon doesn’t suggest muting at all. The unmute animation shows the icon flipping from right to left, which does provide a bit of feedback for Matt, but the control deserves a more fitting icon and a label.

Cookie_mute
Also it’s teeny tiny, but note that the animation starts before he touches it. Is it anticipatory?

It’s not clear, though, how he knows she is trying to speak while she is muted. Recall that she (and we) see her mouthing words silently, but from his perspective, she’s just an egg with a blue eye. The system would need some very obvious MUTE status display, one that increases in intensity when the AI is trying to communicate. Depending on how smart the monitoring feature was, it could even enable a high-intensity alert for her when she needs to communicate something vital. Cinegenically, this could have been a simple blinking of the blue camera light, though that is currently used to indicate the passage of time during the Time Adjustment (see below).
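
If I were sketching that monitoring logic, it might look something like this. The helper name, the speech-effort signal, and the thresholds are all my inventions, not anything shown in the episode:

```python
# Hypothetical sketch: choose a mute-indicator state for Matt's panel.
# The speech_effort signal and the thresholds are invented for illustration.

def mute_indicator_state(muted: bool, speech_effort: float) -> str:
    """speech_effort: 0.0 (silent) to 1.0 (screaming), from the Cookie's audio input."""
    if not muted:
        return "off"
    if speech_effort > 0.8:
        return "urgent"      # e.g., fast-blinking camera light plus on-screen alert
    if speech_effort > 0.2:
        return "attention"   # e.g., slow blink: she's trying to say something
    return "muted"           # steady indicator: mute is on, nothing to report

print(mute_indicator_state(True, 0.9))  # urgent
```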

Simulated Body

Matt can turn on a Simulated Body for her. This allows the AI to perceive herself as if she had her source’s body. In this mode she perceives herself as existing inside a room with large, wall-sized displays and a control console (more on this below), but one that is otherwise featureless white.

Black_Mirror_Cookie_White_Room.png

I presume the Simulated Body is a transitional model—part of a literal desktop metaphor—meant to make it easy for the AI (and the audience) to understand things. But it would introduce a slight lag as the AI imagines reaching and manipulating the console. Presuming she can build competence in directly controlling the technologies in the house, the interface should “scaffold” away and help her gain the more efficient skills of direct control, letting go of the outmoded notion of having a body. (This, it should be noted, would not be as cinegenic since the story would just feature the egg rather than the actor’s expressive face.)

Neuropsychology nerds may be interested to know that the mind’s camera does, in fact, have spatial lags. Several experiments have been run in which subjects were asked to imagine animals as seen from the side and were then timed on how long it took them to imagine zooming in to the eye. It usually takes longer for us to imagine the zoom to an elephant’s eye than to a mouse’s, because the “distance” is farther. Even though there’s no physicality to the mind’s camera to impose this limit, our brain is tied to its experience in the real world.

Black_Mirror_Cookie_Simulated_Body.png

The interface Matt uses to turn on her virtual reality is confusing. We hear seven beeps while the camera is on his face. He sees a 3D rendering of a woman’s body in profile and silhouette. He taps the front view and it fills with red. Then he taps the side view and it fills with red. Then he taps some Smartelligence logos on the side with a thumb and then *poof* she’s got a body. While I suspect this is a post-actor interface (i.e., Jon Hamm just tapped some things on an empty screen while on camera, and the designers later had to retrofit an interface to fit his gestures), this multi-button setup and three-tap initialization just makes no sense. It should be a simple toggle with access to optional controls like scaffolding settings (discussed above).

Time “Adjustment”

The main tool Matt has to force compliance is a time control. When Greta initially says she won’t comply (specifically and delightfully, she asserts, “I’m not some sort of push-button toaster monkey!”), he uses his interface to make it seem like three weeks pass for her inside her featureless white room. Then again for six months. The solitary confinement makes her crazy and eventually forces compliance.

Cookie_settime.gif

The interface to set the time is a two-layer virtual dial: two chapter rings with wide blue arcs for touch targets. The first time we see him use it, he spins the outer one about 360° (before the camera cuts away) to set the time for three weeks. While he does it, the inner ring spins around the same center but at a slower rate. I presume it represents months, though the spatial relationship doesn’t make sense. Then he presses the button in the center of the control and sees an animation of a sun and moon arcing over an illustrated house to indicate her passage of time. Aside: Hamm plays this beat marvelously by callously chomping on the toast she has just helped make.

Toast.gif

Improvements?

Ordinarily I wouldn’t speak to improvements on an interface that is used for torture, but since this could only affect a general AI that is as yet speculative, and it couldn’t be co-opted to torture real people since time travel doesn’t exist, I think this time it’s OK. Discussing it as a general time-setting control, I can see three immediate improvements.

1. Use fast forward models

It makes the most sense for her time sentence to end automatically, returning her to real-world speed. But each time we see the time controls used, the following interaction happens near the end of the time sentence:

  • Matt reaches up to the console
  • He taps the center button of the time dial
  • He taps the stylized house illustration. In response it gets a dark overlay with a circle inside it reading “SET TIME.” This is the same icon seen second from the top in the left panel.
  • He taps the center button of the time dial again. The dark overlay reads “Reset” with a new icon.
  • He taps the overlay.

Please tell me this is more post-actor interface design. Because that interaction is bonkers.

Cookie_stop.gif

If the stop function really needs a manual control, well, we have models for that which users and audiences readily understand. Have the whole thing work and look like a fast-forward control rather than this confusing mess. If he does need to end it early, as he does with the six-month sentence, let him just press a control labeled PLAY or REALTIME.

2. Add calendar controls

A dial makes sense when a user is setting minutes or hours, but a calendar-like display should be used for weeks or months. It would be immediately recognizable and usable by the user and understandable to the audience. If Hamm had touched the interface three times, I would design the first tap to set the start date, the second tap to set the end date, and the third to commit.
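
As a sketch, that three-tap flow is just a tiny state machine. The class, the dates, and the three-week example below are mine, not anything from the show:

```python
# Hypothetical three-tap date-range picker: tap 1 sets the start,
# tap 2 sets the end, tap 3 commits. Dates are plain datetime.date values.
from datetime import date

class SentencePicker:
    def __init__(self):
        self.start = self.end = None
        self.committed = False

    def tap(self, tapped_date: date):
        if self.start is None:
            self.start = tapped_date
        elif self.end is None:
            self.end = max(tapped_date, self.start)
        else:
            self.committed = True  # third tap: commit the range

picker = SentencePicker()
picker.tap(date(2014, 12, 25))   # first tap: start date
picker.tap(date(2015, 1, 15))    # second tap: end date, a three-week "sentence"
picker.tap(date(2015, 1, 15))    # third tap: commit
print((picker.end - picker.start).days, picker.committed)  # 21 True
```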

3. Add microinteraction feedback

Also note that as he spins the dials, he sees no feedback showing the current time setting. At 370° is it 21 or 28 days? The interface doesn’t tell him. If he’s really having to push the AI to its limits, the precision will be important. Better would be to show the time value he’s set so he could tweak it as needed, and then let that count down as time remaining while the animation progresses.
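
To make the ambiguity concrete: the number Matt gets depends entirely on how the dial maps degrees to days, which is exactly what the interface never shows him. A sketch, assuming an invented mapping of one week per 120° of rotation:

```python
# Invented mapping: 120 degrees of outer-ring rotation = 7 simulated days.
DEGREES_PER_WEEK = 120.0

def dial_to_days(rotation_degrees: float) -> float:
    return rotation_degrees / DEGREES_PER_WEEK * 7

# Without on-screen feedback, Matt can't tell these apart:
print(round(dial_to_days(360)))  # 21 days (three weeks)
print(round(dial_to_days(370)))  # 22 days -- not 28, but only the math knows
```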

Cookie_settime.gif

Effectiveness subtlety: Why not just make the solitary confinement pass instantly for Matt? Well, recall he is trying to ride a line of torture without having the AI wig out, so he should have some feedback as to the duration of what he’s putting her through. If it were always instant, he couldn’t tell the difference between three weeks and three millennia if he had accidentally entered the wrong value. But if real-world time is passing, and it’s taking longer than he thinks it should, he can intervene and stop the fast-forwarding.

That, or of course, show feedback while he’s dialing.

Near the end of the episode we learn that a police officer is whimsically torturing another Cookie: he sets the time ratio to “1000 years per minute” and then just lets it run while he leaves for Christmas break. The current time ratio is absent from Matt’s screen; it should be displayed, and a control provided.

Black_Mirror_Cookie_31.png
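
For a sense of scale, here’s the arithmetic that screen never shows. The ratio is as quoted in the episode; the length of the break is my guess:

```python
# "1000 years per minute": simulated time elapsed per unit of real time.
YEARS_PER_REAL_MINUTE = 1000

def simulated_years(real_minutes: float) -> float:
    return real_minutes * YEARS_PER_REAL_MINUTE

# Left running over, say, a two-week Christmas break (an assumption):
real_minutes = 14 * 24 * 60
print(f"{simulated_years(real_minutes):,.0f} simulated years")  # 20,160,000 simulated years
```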

Add psychological state feedback

There is one “improvement” that does not pertain to real world time controls, and that’s the invisible effect of what’s happening to the AI during the fast forward. In the episode Matt explains that, like any good torturer, “The trick of it is to break them without letting them snap completely,” but while time is passing he has no indicators as to the mental state of the sentience within. Has she gone mad? (Or “wigged out” as he says.) Does he need to ease off? Give her a break?

I would add trendline indicators or sparklines showing things like:

  • Stress
  • Agitation
  • Valence of speech

I would have these trendlines highlight when any of the variables are getting close to known psychological limits. Then as time passes, he can watch the trends to know if he’s pushing things too far and ease off.
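
A sketch of that highlight logic, with invented metric names and invented limits:

```python
# Hypothetical psychological-state monitor: flag any metric trending
# toward a known limit so Matt knows to ease off. Limits are made up.
LIMITS = {"stress": 0.9, "agitation": 0.85, "speech_valence": -0.7}

def flags(latest: dict, warn_fraction: float = 0.8) -> list:
    warnings = []
    for metric, limit in LIMITS.items():
        value = latest[metric]
        # For valence the limit is a floor (very negative); otherwise it's a ceiling.
        near = value <= limit * warn_fraction if limit < 0 else value >= limit * warn_fraction
        if near:
            warnings.append(metric)
    return warnings

print(flags({"stress": 0.75, "agitation": 0.3, "speech_valence": -0.65}))
# ['stress', 'speech_valence'] -- highlight those sparklines
```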

The Hong Kong Mode (4 of 5)

In the prior three posts, I’ve discussed the goods-and-bads of the Eye of Agamotto in the Tibet mode. (I thought I could squeeze the Hong Kong and the Dark Dimension modes into one post, but it turns out this one was just too long. Keep reading. You’ll see.) In this post we examine a mode that looks like the Tibet mode, but is actually quite different.

Hong Kong mode

Near the film’s climax, Strange uses the Eye to reverse Kaecilius’ destruction of the Hong Kong Sanctum Sanctorum (and much of the surrounding cityscape). In this scene, Kaecilius leaps at Strange, and Strange “freezes” Kaecilius in midair with the saucer. It’s done more quickly, but similarly to how he “freezes” the apple into a controlled-time mode in Tibet.

HongKong-freeze-12fps.gif

But then we see something different, and it complicates everything. As Strange twists the saucer counterclockwise, the cityscape around him—not just Kaecilius—begins to reverse slowly. (And unlike in Tibet, the saucer keeps spinning clockwise underneath his hand.) Then the rate of reversal accelerates, and even continues in its reversal after Strange drops his gesture and engages in a fight with Kaecilius, who somehow escapes the reversing time stream to join Strange and Mordo in the “observer” time stream.

So in this mode, the saucer is working much more like a shuttle wheel with no snap-back feature.

A shuttle wheel, as you’ll recall from the first post, doesn’t specify an absolute value along a range like a jog dial does. A shuttle wheel indicates a direction and rate of change. A little to the left is slow reverse. Far to the left is fast reverse. Nearly all of the shuttle wheels we use in the real world have snap-back features, because if you were just going to leave it reversing and pay attention to something else, you might as well use another control to get to the absolute beginning, like a jog dial. But since Strange is scrubbing an endless “video stream” (that is, time), and he can pull people and things out of the manipulated stream and into the observer stream and do stuff, not having a snap-back makes sense.
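
The distinction between the two controls is easy to show in code. A rough sketch, in my own framing rather than anything from the film:

```python
# Jog dial: position maps to an absolute point in the range.
def jog_position(angle_deg: float, range_start: float, range_end: float) -> float:
    return range_start + (angle_deg % 360) / 360 * (range_end - range_start)

# Shuttle wheel: deflection from center maps to direction and rate only.
def shuttle_rate(deflection_deg: float, max_deflection: float = 90, max_rate: float = 16.0) -> float:
    deflection_deg = max(-max_deflection, min(max_deflection, deflection_deg))
    return (deflection_deg / max_deflection) * max_rate  # negative = reverse

print(jog_position(90, 0.0, 60.0))       # 15.0: a quarter-turn lands a quarter of the way in
print(round(shuttle_rate(-30), 2))       # -5.33: slow-ish reverse
print(shuttle_rate(-90))                 # -16.0: fast reverse
# Without a snap-back, releasing the control leaves the last rate in effect.
```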

For the Tibet mode I argued for a chapter ring to provide some context and information about the range of values he’s scrubbing. For shuttling along the past in the Hong Kong mode, I don’t think a chapter ring or content overview makes sense, but it would help him to know the following:

  • The rate of change
  • The direction of change
  • The shifted datetime
  • The datetime difference from when he started

In the scene that information is kind of obvious from the environment, so I can see the argument for not having it. But if he was in some largely-unchanging environment, like a panic room or an underground cave or a Sanctum Sanctorum, knowing that information would save him from letting the shuttle go too far and finding himself in the Ordovician. A “home” button might also help to quickly recover from mistakes. Adding these signals would also help distinguish the two modes. They work differently, so they should look different. As it stands, they look identical.

DoctorStrange-Tibet-v-HongKong.png

He still (probably) needs future branches

Can Strange scrub the future this way? We don’t see it in the movie. But if so, we have many of the same questions as the Tibet mode future scrubber: Which timeline are we viewing & how probable is it? What other probabilities exist and how does he compare them? This argues for the addition of the future branches from that design.

Selecting the mode

So how does Strange specify the jog dial or shuttle wheel mode?

One cop-out answer is a mental command from Strange. It’s a cop-out because if the Eye responds to mental commands, this whole design exercise is moot, and we’re here to critique, practice, and learn. Not only that, but physical interfaces are more cinegenic, so better to make a concrete interaction for the film.

You might think we could modify the opening finger-tut (see the animated gif, below). But it turns out we need that for another reason: specifying the center and radius-of-effect.

DoctorStrange-tutting-comparison.gif

Center and radius-of-effect

In Tibet, the Eye appears to affect just an apple and a tome. But since we see it affecting a whole area in Hong Kong, let’s presume the Eye affects time in a sphere. For the apple and tome, it was affecting a small sphere that included the table, too; it’s just that the table didn’t change over the spans of time we see. So if it works in spheres, how are the center and radius of the sphere set?

Center

Let’s say the Eye does some simple gaze monitoring to find the salient object at his locus of attention. Then it can center the effect on that thing and automatically set the radius of effect to the thing’s size across the extents likely to be scrubbed. In Tibet, it’s easy. Apple? Check. Tome? Check. In Hong Kong, he’s focusing on the Sanctum, and the Eye’s image recognition is smart enough to understand the concept of “this building.”

Radius

But the Hong Kong radius stretches out beyond his line of sight, affecting something with a very vague visual and even conceptual definition, that is, “the wrecked neighborhood.” So auto-setting these variables wouldn’t work without reconceiving the Eye as a general artificial intelligence. That would have some massive repercussions throughout the diegesis, so let’s avoid that.

If it’s a manual control, how does he do it? Watch the animated gif above carefully and you’ll see he’s got two steps to the “turn Eye on” tut: opening the eye by making an eye shape, and, after the aperture opens, spreading his hands apart, kind of expanding the Eye. In Tibet that spreading motion is slow and close. In Hong Kong it’s faster and farther. That’s enough evidence to say that spread × speed determines the radius. We run into the same scales problem of apple-versus-neighborhood that we had in determining the time extents, but make the mapping logarithmic and add some visual feedback, and he should be able to pick arbitrary sizes with precision.
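
Here’s a sketch of that logarithmic mapping, with completely invented constants, just to show how spread × speed could span apple-to-neighborhood scales:

```python
import math

# Invented constants: gesture spread in meters, speed in meters per second.
MIN_RADIUS = 0.05     # roughly apple-sized sphere
MAX_RADIUS = 500.0    # roughly neighborhood-sized sphere

def effect_radius(spread_m: float, speed_mps: float) -> float:
    """Map spread * speed onto a logarithmic radius scale."""
    drive = max(spread_m * speed_mps, 1e-6)
    # Normalize drive (assumed to fall roughly between 0.01 and 10) to 0..1.
    t = min(max((math.log10(drive) + 2) / 3, 0.0), 1.0)
    return MIN_RADIUS * (MAX_RADIUS / MIN_RADIUS) ** t

print(round(effect_radius(0.1, 0.2), 2))   # slow, close spread -> small, apple-ish sphere
print(round(effect_radius(1.0, 3.0), 1))   # fast, wide spread -> block-sized sphere
```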

So…back to mode selection

So if we’re committing the “turn on” gesture to specifying the center-and-radius, the only other gesture left is the saucer creation. For a quick reminder, here’s how it works in Tibet.

Since the circle works pretty well for a jog dial, let’s leave this for Tibet mode. A contrasting but related gesture would be to have Strange hold his right hand flat, in a sagittal plane, with the palm facing to his left. (See an illustration, below.) Then he can tilt his hand inside the saucer to reverse or fast forward time, and withdraw his hand from the saucer graphic to leave time moving at the adjusted rate. Let the speed of the saucer indicate speed of change. To map to a clock, tilting to the left would reverse time, and tilting to the right would advance it.

How the datetime could be shown is an exercise for the reader.

The yank out

There’s one more function we see twice in the Hong Kong scene. Strange is able to pull Mordo and Wong from the reversing time stream by thrusting the saucer toward them. This is a goofy choice of a gesture that makes no semantic sense. It would make much more sense for Strange to keep his saucer hand extended, and use his left hand to pull them from the reversing stream.

DoctorStrange-yank-out.gif

Whew.

One of the nice things about this movie interface is that, while it doesn’t hold up under the close scrutiny of this blog, the Eye of Agamotto works while watching the film. The audience sees the apple demonstration and gets that gestures + glowing green circle = adjusting time. For that, it works.

That said, we can see improvements that would not affect the script, would not require much more of the actors, and would not add too much to post. It could be more consistent and believable.

But we’re not done yet. There’s one other function shown by the Eye of Agamotto when Strange takes it into the Dark Dimension, which is the final mode of the Eye, up next.

Tibet Mode Analysis: Representing the future (3 of 5)

A major problem with the use of the Eye is that it treats the past and the future similarly. But they’re not the same. The past is a long chain of arguably-knowable causes and effects. So, sure, we can imagine that as a movie to be scrubbed.

But the future? Not so much. Which brings us, briefly, to this dude.

pierre-simon-laplace.png

If we knew everything, Pierre-Simon Laplace argued in 1814, down to the state of every molecule, and we had a capable enough processor, we would be able to predict with perfect precision the events of the future. (You might think he’s talking about a computer or an AI, but in 1814 they used demons for their thought experiments.) In the two centuries since, there have been several major repudiations of Laplace’s demon. So let’s stick to the near term, where there’s not one known future waiting to happen, but a set of probabilities. That means we have to rethink what the Eye shows when it lets Strange scrub the future.

Note that in the film, the “future” of the apple shown to Strange was just a likelihood, not a fact. The Eye shows it being eaten. In the actual events of the film, after the apple is set aside:

  • Strange repairs the tome
  • Mordo and Wong interrupt Strange
  • They take him into the next room for some exposition
  • The Hong Kong sanctum portal swings open
  • Kaecilius murders a redshirt
  • Kaecilius explodes Strange into the New York sanctum

Then for the next 50 minutes, The Masters of Mysticism are scrambling to save the world. I doubt any of them have time to while away in a library, there to discover an abandoned apple with a bite taken out of it, and decide—staphylococcus aureus be damned—a snack’s a snack. No, it’s safe to say the apple does not get eaten.

post-eye-no-apple.png

So the Eye gets the apple wrong, but it showed Strange that future as if it were a certainty. That’s a problem. Sure, when asked about the future, it ought to show something, but better would be to…

  • Indicate somewhere that what is being displayed is one of a set of possibilities
  • Provide options to understand the probability distribution among the set
  • Explore the alternates
  • Be notified when new data shifts the probability distribution or inserts new important possibilities

So how to display probabilities? There are lots of ways, but I am most fond of probability tree diagrams. In nerd parlance, this is a directed graph where the nodes are states and the edges are labeled with probabilities. In regular language, they look like sideways two-dimensional trees. See an example below from mathisfun.com. These diagrams seem to me a quick way to understand branching possibilities. (I couldn’t find any studies giving me more to work on than “seem to me”.)

probability-tree-coin2.png
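
As a data structure, a probability tree is about as simple as it looks. A minimal sketch (the node type and the apple-scene numbers are mine):

```python
# Minimal probability tree: each node is a state; edges carry probabilities.
from dataclasses import dataclass, field

@dataclass
class Node:
    state: str
    branches: list = field(default_factory=list)  # list of (probability, Node)

    def path_probabilities(self, prefix=1.0, path=()):
        """Yield (probability, path-of-states) for every leaf."""
        path = path + (self.state,)
        if not self.branches:
            yield prefix, path
            return
        for p, child in self.branches:
            yield from child.path_probabilities(prefix * p, path)

# Toy example in the spirit of the apple scene (the numbers are invented):
now = Node("apple set aside", [
    (0.7, Node("eaten")),
    (0.3, Node("left to rot")),
])
for p, path in now.path_probabilities():
    print(f"{p:.0%}: {' -> '.join(path)}")
```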

In addition to being easy to understand, they afford visual manipulation. You can work branching lines around an existing design.

Now if we were actually working out a future-probabilities gestural scrubber attached to the Eye of Agamotto saucer, we’d have a whole host of things to get into next, like designing…

  1. A compact but informative display that signals the relative probabilities of each timeline
  2. The mechanism for opening that display so probabilities can be seen rather than read
  3. Labels so Strange wouldn’t have to hunt through all of them for the thing of interest (or some means of search)
  4. A selection process for picking the new timeline
  5. A comparison mode
  6. A means of collapsing the display to return to scrub mode
  7. A you-are-here signal in the display to indicate the current timeline

Which is a big set of design tasks for a hobbyist website. Fortunately for us, Strange only deals with a simple, probable (but wrong) scenario of the apple’s future as an illustration for the audience of what the Eye can do; and he only deals with the past of the tome. So while we could get into all of the above, it’s most expedient just to resolve the first one for the scene and tidy up the interface as it helps illustrate a well-thought-out and usable world.

Below I’ve drafted up an extension of my earlier conceptual diagram. I’ve added a tree to the future part of the chapter ring, using some dots to indicate the comparative likelihood of each branch. This could be made more compact, and might be good to put on a second z-axis layer to distinguish it from the saucer, but again: conceptual diagram.

Eye-of-Agamoto-tail.png

If this were implemented in the film, we would want to make sure that the probability tree begins to flicker right before Wong and Mordo shut him down, as a nod to the events happening off screen with Kaecilius that are changing those futures. This would give a clue that the Eye is smartly keeping track of real-world events and adjusting its predictions appropriately.

These changes would make the Eye more usable for Strange and smart as a model for us.

Eye-of-Agamoto-01_comp.png

Twist ending: This is a real problem we will have to solve

I skipped those design tasks for this comp, but we may not be able to avoid those problems forever. As it turns out, this is not (just) an idle, sci-fi problem. One of the promises of assistive AI is that it will be giving its humans advice, based on predictive algorithms, which will be a set of probabilistic scenarios. There may be an overwhelmingly likely next scenario, but there may also be several alternatives that users will need to explore and understand before deciding the best strategy. So, yeah, an exercise for the reader.

Wrapping up the Tibet Mode

So three posts is not the longest analysis I’ve done, but it was a lot. In recap: Gestural time scrubbing seems like a natural interaction mapped well to analog clocks. The Eye’s saucer display is cool, but insufficient. We can help Strange much more by adding an events-based chapter ring detailing the facts of the past and the probabilities of the future.

Alas. We’re not done yet. As you’ll recall from the intro post, there are two other modes: the Hong Kong and Dark Dimension modes. Let’s next talk about the Hong Kong mode, which is like the Tibet mode, but different.

Tibet mode: Display for interestingness (2 of 5)

Without a display, the Eye asks Strange to do all the work of exploring the range of values available through it to discover what is of interest. (I am constantly surprised at how many interfaces in the real world repeat this mistake.) We can help by doing a bit of “pre-processing” of the information and provide Strange a key to what he will find, and where, and ways to recover exactly where interesting things happen.

watch.png
The watch from the film, for reasons that will shortly become clear.

To do this, we’ll add a ring outside the saucer that stays fixed while the saucer rotates, and that contains this display. Since we need to call this ring something, and we’re in the domain of time, let’s crib some vocabulary from clocks. The fixed ring of a clock that contains the numbers and minute graduations is called a chapter ring. So we’ll use that for our ring, too.

chapter-rings.png

What chapter ring content would most help Strange?

Good: A time-focused chapter ring

Both the controlled-extents and the auto-extents shown in the prior post presume a smooth display of time. But the tome and the speculative meteorite simply don’t change much over the course of their existence. I mean, of course they do, with the book being pulled on and off shelves and pages flipped, and the meteorite arcing around the sun in the cold vacuum of space for countless millennia, but the Eye only displays the material changes to an object, not its position. So as far as the Eye is concerned, the meteoroid formed, stayed the same for most of its existence, then had a flurry of activity as it hit Earth’s atmosphere and slammed into the planet.

A continuous display of the book shows little of interest for most of its existence, with a few key moments of change interspersed. To illustrate this, let’s make up some change events for the tome.

Eye-of-Agamotto-event-view.png

Now let’s place those along an imaginary timeline. Given the Doctor Strange storyline, Page Torn would more likely be right next to Now, but making this change helps us explore a common boredom problem (see below). OK. Placing those events along a timeline…

Eye-of-Agamotto-time-view.png

And then, wrapping that timeline around the saucer. Much more art direction would have to happen to make this look thematically like the rest of the MCU magic geometries, but following is a conceptual diagram of how it might look.

Eye-of-Agamoto-dial.png
With time flowing smoothly, though at different speeds for the past and the future.

On the outside of the saucer is the chapter ring with the salient moments of change called out with icons (and labels). At a glance Strange would know where the fruitful moments of change occur. He can see he only has to turn his hand about 5° to the left to get to the spot where the page was ripped out.

Already easier on him, right? Some things to note.

  1. The chapter ring must stay fixed (not spin with the saucer) to work as a reference. Imagine how useless a clock would be if its chapter ring spun in concert with any of its hands. The center can still move with his palm as the saucer does.
  2. The graduations to the left and right of “now” are of a different density, helping Strange to understand that past and future are mapped differently to accommodate the limits of his wrist and the differing time frames described.
  3. When several events occur close together in time, they could be stacked.
  4. Having the graduations evenly spaced across the range helps answer roughly when each change happened relative to the whole.
  5. The tome in front of him should automatically flip to spreads where scrubbed changes occur, so Strange doesn’t have to hunt for them. Without this feature, if Strange was trying to figure out what changed, he would have to flip through the whole book with each degree of twist to see if anything unknown had changed.
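
To make the time-proportional layout concrete, here’s a sketch that places past change-events around the ring in proportion to when they happened. The event ages and the angular span are invented:

```python
# Sketch: place past change-events proportionally around the chapter ring.
# 0 degrees = "now" at the top; negative angles run counterclockwise (the past).
# Event names come from the made-up tome events; the ages are invented.
PAST_ARC_DEG = 170  # how much of the ring the past occupies (assumption)

events = {               # days before now
    "book made": 3650,
    "first stuff added": 3600,
    "page torn": 100,
}

oldest = max(events.values())
for name, age in events.items():
    angle = -PAST_ARC_DEG * (age / oldest)
    print(f"{name:>18}: {angle:6.1f} deg from now")
# "page torn" lands only a few degrees left of now, the ~5 degree twist described above.
```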

Better: A changes-focused chapter ring

If, as in this scene, the primary task of using the Eye is to look for changes, a smooth display of time on the chapter ring is less optimal than a smooth display of change. (Strange doesn’t really care when the pages were torn. He just wants to see the state of the tome before that moment.) Distribute the changes evenly around the chapter ring, and you get something like the following.

Eye-of-Agamoto-event.png

This display optimizes for easy access to the major states of the book. The now point is problematic since the even distribution puts it at the three o’clock position rather than at noon, but what we buy in exchange is that the exact same precision is required to access any of the changes and compare them. There’s no extra precision needed to scrub between the “book made” and “first stuff added” moments. The act of comparison is made simpler. Additionally, the logarithmic time graduations help him scrub detail near known changes and quickly bypass the great stretches of time when nothing happens. By orienting our display around the changes, the interesting bits are easier to explore, and the boring bits are easier to bypass.
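
And the contrasting changes-focused layout as a sketch: the same invented events get evenly spaced slots, regardless of how much time separates them:

```python
# Sketch: changes-focused layout. The same past events get evenly spaced
# angular slots; the time between them no longer affects position.
events = ["book made", "first stuff added", "page torn"]  # oldest to newest
PAST_ARC_DEG = 170  # same assumed span for the past

step = PAST_ARC_DEG / len(events)
for i, name in enumerate(reversed(events)):  # newest sits nearest to "now"
    angle = -step * (i + 1)
    print(f"{name:>18}: {angle:6.1f} deg from now")
# Every change is one equal twist from its neighbors, which is the point:
# comparing states needs no extra precision.
```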

In my comp, more white area equals more time. Unfortunately, this visual design kind of draws attention to the empty stretches of time rather than the moments of change, so it would need more attention; see the note above about needing a visual designer involved.

So…the smooth-time and distributed-events displays each have advantages over the other, but for the Tibet scene, in which he’s looking to restore the lost pages of the tome, the events-focused chapter ring gets Strange to the interesting parts more confidently.


Note that all the events Strange might be scrubbing through are in the past, but that’s not all the Eye can do in the Tibet mode. So next up, let’s talk a little about the future.

Jasper’s home alarm

When Theo, Kee, and Miriam flee the murderous Fishes, they take refuge in Jasper’s home for the night. They are awoken in the morning by Jasper’s sentry system.

ChildrenofMen_Jasper_alarm

A loud, cacophonous alarm blares, made up of what sounds like recorded dog barks, bells clanging, and someone banging a stick on a metal trash-can lid. Jasper explains to everyone in the house that “It’s the alarm! Someone’s breaking in!”

They gather around a computer screen with large speakers on either side. The screen shows four video feeds labeled ROAD A, FOREST A, FRONT DOOR, and ROAD B. Labels reading MOTION DETECTED <> blink at the bottom of the ROAD A and ROAD B feeds, where we can see members of the Fishes removing the brush that hides the driveway to Jasper’s house.

The date overlays the upper right hand corner of the screen, 06-DEC-2027, 08:10:58.

Across the bottom is a control panel of white numbers and icons on red backgrounds.

  • A radio-button control for the number of video feeds to be displayed. Though we are seeing the 4-up display, its icon does not appear different from the rest.
  • 16 enumerated icons, whose purpose is unclear.
  • Video control icons for reverse, stop, play, and fast forward.
  • Three buttons with gray backgrounds and icons.
  • A wide button blinking MASTER ALARM

The scene cuts to Jasper rushing to the car outside the home, where none of the cacophony can be heard.

As with his car dashboard, it makes sense that Jasper made this alarm himself. This might explain the clunky layout and somewhat inscrutable icons. (What do the numbers do? What about that flower on the gray background?)

The three jobs of an intruder alarm

Jasper’s alarm is OK. It certainly does the job of grabbing the household’s attention, which is the first job of an alarm, and does it without alerting the intruders, as we see in the shot outside the house.

It could do a bit better at the second job of an alarm, which is to inform the household of the nature of the problem. That they have to gather around the monitor takes precious time that could be used for making themselves safer. It could be improved by removing this requirement.

  • If Jasper had added more information to the audio alarm, even so basic as a prerecorded “Motion on the road! Motion on the road!” then they might not have needed to gather around the monitor at all.
  • If the relevant video feeds could be piped to wearable devices, phones, or their car, then they could fill in their understanding at the same time that they are taking steps to get the hell out of there.
  • Given the narrow AI we have in actual-world 2017 (much less speculative 2027), the system could process that video to include many more details in the broadcast message (a sketch of this follows the list): “Motion on the road! I see three cars and at least a dozen armed men!”
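
A sketch of how that broadcast message might be composed from a detector’s output. The detection labels and counts are invented; the point is only the composition step:

```python
# Hypothetical: compose a spoken alert from object-detection results
# on the ROAD A / ROAD B feeds. Detection counts are invented.
def compose_alert(feed: str, detections: dict) -> str:
    parts = [f"{count} {label}{'s' if count != 1 else ''}"
             for label, count in detections.items() if count]
    detail = " and ".join(parts) if parts else "movement"
    return f"Motion on {feed}! I see {detail}!"

print(compose_alert("the road", {"car": 3, "armed person": 12}))
# Motion on the road! I see 3 cars and 12 armed persons!
```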

There is arguably a third job of an advanced alarm, and this is to help the household understand the best course of action. This can be problematic when the confidence of the recommendation is low. But if the AI can confidently make a recommendation, it can use whatever actuators it has to help them along their way.

  • It could be informational, such as describing the best option. The audio alarm could encourage them to “Take the back road!” It could even alert the police (though in the world of Children of Men, Jasper would not trust them and they may be disinclined to care.)
  • The alarm could give some parameters and best-practice recommendations like, “You have 10 minutes to be in the car! Save only yourselves, carry nothing!”
  • It could keep updating the situation and the countdown so the household does not have to monitor it.
  • It could physically help as best it can, like remotely starting and positioning cars for them.

This can get conceptually tricky as the best course of action may be conditional, e.g. “If you can get to the car in 5 minutes, then escape is your best option, but if it takes longer or you have defenses, then securing the home and alerting the police is the better bet.” But that may be too much to process in the moment, and for a household that does not rehearse response scenarios, the simpler instruction may be safer.

Sleep Pod—Wake Up Countdown

On each of the sleep pods in which the Odyssey crew sleep, there is a display for monitoring the health of the sleeper. It includes some biometric charts, measurements, a body location indicator, and a countdown timer. This post focuses on that timer.

To show the remaining time until Julia wakes, the pod’s display pops up a countdown showing hours, minutes, and seconds. It shows the final seconds in red while also beeping for each second. The countdown appears over the monitoring interface.

image03

Julia’s timer reaches 0:00:01.
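
The display logic itself is simple. Here’s a sketch, with an assumed ten-second threshold for the red-and-beeping state:

```python
# Sketch: format remaining sleep time as H:MM:SS and flag the final-seconds
# state (the threshold is an assumption; the film only shows "the final seconds").
FINAL_SECONDS = 10

def countdown_view(remaining_s: int):
    h, rem = divmod(remaining_s, 3600)
    m, s = divmod(rem, 60)
    text = f"{h}:{m:02d}:{s:02d}"
    urgent = remaining_s <= FINAL_SECONDS   # render in red and beep each second
    return text, urgent

print(countdown_view(3671))  # ('1:01:11', False)
print(countdown_view(1))     # ('0:00:01', True) -- the moment we see on screen
```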

The thing with pop-ups

We all know how it goes with pop-ups—pop-ups are bad and you should feel bad for using them. Well, in this case it’s actually not that bad.

The viewer

Although the sleep pod display’s main function is to show biometric data of the sleeper, the system pops up a panel to show the remaining time until the sleeper wakes up. And while the display has some degree of redundancy in showing the data—e.g., heart rate in both graphics and numbers—the design of the countdown brings two downsides for the viewer.

  1. Position: it’s placed right in the middle of the screen.
  2. Size: it’s roughly a quarter of the whole display.

Between the two, it partially covers both the pulse graphics and the numbers, which can be vital (i.e., life-or-death) information of use to the viewer.

Mr. Yuk

Prometheus-079

On the side of the valley in which the first complex is found, there is a giant skull carved into the overlooking crag. It’s easy—given the other transgressions in the film—to dismiss this as a spookhouse attempt at being scary. But what if (stay with me here) it’s a warning sign, an alien Mr. Yuk, put there for other sentient humanoids to understand that this place is deadly with a capital D? That would explain why the outpost hasn’t been disturbed by rescuers of their own race. They were smart enough to see the warning and turn right back around. (Why they didn’t nuke it from orbit is another question.)

The Material

Seeing this as a warning label raises other questions. Why wouldn’t the warning be technological or linguistic, like most of the interfaces inside the complex? The black infection material is still deadly after 2,000 years, and who knows how much longer it will be viable? So where the interfaces inside are for immediate use, the warning outside needs to be effective for millennia, outlasting both the power reserves that would drive technology and the persistent semantics that would cement linguistic understanding. Rock, in contrast, lasts a very, very long time. Even as it erodes, the shape and its clear meaning will simply lose clarity, not wink out altogether.

The symbol

Similarly, this shape is a clear symbol of death that is tied to biology, which changes on evolutionary timeframes, guaranteeing its readability for—hopefully—longer than the xenomorph liquid would be a danger.

For these reasons, this is labeling that is more than a Castle Grayskull set-dressing attempt at scaaaarrrry; it’s a reasonable choice for providing an effective warning that will last as long as the danger. You know, provided visiting scientists actually pay attention to such things.