The Gatekeeper

[Image: WallE-Gatekeeper04]

After the security ‘bot brings Eve across the ship (with Wall-E in tow), he arrives at the gatekeeper to the bridge. The Gatekeeper’s job is to enter information about ‘bots, or to activate and deactivate systems (labeled with “1”s and “0”s), using its two small manipulator arms on a pedestal keyboard. It’s mounted on a large, suspended shaft, and once it sees the security ‘bot and confirms his clearance, it lets the ‘bot and the pallet through by clicking another, specific button on the keyboard.

The Gatekeeper is large. Larger than most of the other robots we see on the Axiom. Its casing is a white shell around its inner hardware. This casing looks like it’s meant to protect or shield the internal components from light impacts or basic problems like dust. From the looks of the inner housing, the Gatekeeper should be able to move its ‘head’ up and down to point its eye in different directions, but while Wall-E and the security ‘bot are in the room, we only ever see it rotating around its suspension pole and using the glowing pinpoint in its red eye to track the objects it’s paying attention to.

When it lets the sled through, it sees Wall-E on the back of the sled, waving to the Gatekeeper. In response, the Gatekeeper waves back with its jointed manipulator arm. After waving, the Gatekeeper looks at its arm, surprised at the movement, as if it hadn’t considered the ability to use those actuators before. There is a pause that gives the distinct impression the Gatekeeper is thinking hard about this new ability, and then we see it waving the arm a couple more times to itself, as if to confirm it.

[Image: WallE-Gatekeeper01]

The Gatekeeper seems to exist solely to enter information into that pedestal. From what we can see, it doesn’t move and likely (considering the rest of the ship) has been there since the Axiom’s construction. We don’t see any other actions from the pedestal keys, but considering that one of them opens a door temporarily, it’s possible that the other buttons have some other, more permanent functions like deactivating the door security completely, or allowing a non-authorized ‘bot (or even a human) into the space.

An unutilized sentience

The robot is a sentient being with a tedious and repetitive job, one who doesn’t even know it can wave its arm until Wall-E introduces it to the concept. This fits with the other technology on board the Axiom, where a robot’s intelligence bears no relation to what its function actually requires. Thankfully for the robot, he (she?) doesn’t realize how small its world is until that moment.

So what’s the pedestal for?

It still leaves open the question of what the pedestal controls actually do. If they’re all connected to security doors throughout the ship, then the Gatekeeper would have to be tied into the ship’s systems somehow to see who was entering or leaving each secure area.

The pedestal itself acts as a two-stage authentication system. The Gatekeeper, a powerful sentience, must first decide whether the people or robots in front of it are allowed to enter the room or rooms it guards. Then, after that decision, it must take a physical action to unlock the door to the secure area. This implies a high level of security, which feels appropriate given that the elevator accesses the bridge of the Axiom.
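A minimal sketch of that two-stage flow, in Python (the function names and clearance list are invented for illustration, not taken from the film):

```python
# Hypothetical sketch of the Gatekeeper's two-stage flow.
# Stage 1: a judgment about the visitor's clearance.
# Stage 2: a separate, deliberate keypress that actually unlocks the door.

AUTHORIZED_IDS = {"SEC-A113", "EVE-01"}  # invented clearance list

def is_authorized(visitor_id: str) -> bool:
    """Stage 1: decide whether this 'bot (or human) may pass."""
    return visitor_id in AUTHORIZED_IDS

def press_unlock_key(door: str) -> None:
    """Stage 2: the physical action that temporarily opens one specific door."""
    print(f"Unlocking {door}")

def gatekeeper(visitor_id: str, door: str = "bridge elevator") -> None:
    if is_authorized(visitor_id):
        press_unlock_key(door)  # only reached after the decision stage
    else:
        print(f"{visitor_id} denied at {door}")

gatekeeper("SEC-A113")  # the security 'bot and its pallet: unlocked
gatekeeper("WALL-E")    # an unknown unit on its own: denied
```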

Since we’ve seen that the robots have different vision modes and improvements based on their function, it’s likely that the Gatekeeper can see more of the pedestal interface than the audience can, possibly including which doors each key links to. If not, then as a computer it would have perfect recall of what each button was for. Neither option affords a human stepping in to take control in case the Gatekeeper has issues (like the robots seen soon after this in the ‘medbay’). But considering Buy-N-Large’s desire to leave humans out of the loop at every possible point, this seems like a reasonable design direction for the company to take if they wanted to continue that trend.

It’s possible that the pedestal was intended for a human security guard that was replaced after the first generation of spacefarers retired. Another possibility is that Buy-N-Large wanted an obvious sign of security to comfort passengers.

What’s missing?

We learn after this scene that the security ‘bot is Auto’s ‘muscle’ and affords him some protection. Given that the security ‘bot and others like it might be needed at random times, it feels like Auto would want a way for them to gain access to the bridge in an emergency. Something like an integrated biometric scanner on the door that could be manually activated (eye scanner, palm scanner, RFID tags, etc.), or even a physical key device that only someone like the Captain or trusted security officers would carry. Though that assumes there is more than one entrance to the bridge.

This is a great showcase system for tours and commercials of an all-access luxury hotel and lifeboat. It looks impressive, and the Gatekeeper would be an effective way to make sure only people who are really supposed to get into the bridge are allowed past the barriers. But, Buy-N-Large seems to have gone too far in their quest for intelligent robots and has created something that could be easily replaced by a simpler, hard-wired security system.

[Image: WallE-Gatekeeper05]

Wall-E’s Audio Recorder

[Image: image00]

Each Wall-E unit has an integrated audio recorder with three buttons: Record, Play, and a third button with an orange square. We see Record and Play used several times, but the orange button is never visibly pressed. Reason and precedent suggest it is a stop function.

What the original purpose of this capability was is unclear. Wall-E uses it to record snippets of songs or audio clips from movies that he enjoys. No maximum length is shown. There is no visible way to rewind, fast-forward, or seek within a soundbite, though the clips shown are short enough that this doesn’t affect Wall-E’s ability to hear what he wants to hear.

Simple

[Image: image02]

This is a very simple interface, and Wall-E is shown operating it without looking at the buttons. Since all three are relatively large, placed on the front of his chest, physically indented, and physically separated, a person relying on touch could tell the buttons apart once they learned their order.

Increasing the indentation of the symbols, and adding a different texture on each would make tactile discovery even easier.

[Image: image01_520]

The recordings shown are also short, which makes the system excellent for a contained thought, song, or event to be referenced later. Short clips mean the user of the system (in this case Wall-E) never needs to worry about where the recording is cued up, and can simply play whatever he remembers being in memory.

The Orange Button?

Unlike the Play and Record buttons, which are shown meeting standard interface practices of today, the lineup has that odd orange button that is never shown being used (except when Eve is frantically trying to wake Wall-E up, but that tells us nothing about its intended use).

My best guess, based only on its inclusion with the other two buttons, is that it is a pause or stop function. This conjecture isn’t certain because either function could easily be co-located on the Play button as a dual function: push once to play, push a second time to pause.
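A toy sketch of that dual-function pattern (everything here is invented for illustration):

```python
# Toy model of a single control that both plays and pauses,
# which is why a dedicated pause key may be redundant.
class Recorder:
    def __init__(self, clip: str):
        self.clip = clip
        self.playing = False

    def play_button(self) -> None:
        """One button: first press plays, second press pauses."""
        self.playing = not self.playing
        print(("playing" if self.playing else "paused"), self.clip)

r = Recorder("favorite song clip")
r.play_button()  # playing favorite song clip
r.play_button()  # paused favorite song clip
```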

So what is the orange button for?

No idea.

The lesson here is that when you’re designing an interface, make sure that each button is absolutely necessary and well placed. Given the location and tactile focus of the interface, the two most-used functions (Record and Play) should have been larger and given distinguishable textures. The third button has a less-than-obvious purpose, meaning that any humans attempting to use it in the far future will need trial and error to understand what it’s for.

Two of the buttons are easy to understand. But designers for this system would want to make sure that a person with no access to documentation could quickly understand the third button through immediate feedback and a function that is non-destructive to the data stored in the audio recording.

Dust Storm Alert

[Image: WallE-DustStorm04]

While preparing for his night cycle, Wall-E is standing at the back of his transport/home, cleaning out his collection cooler on the drop door at the back of the transport. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. He hurries to finish cleaning his cooler and seal the door of the transport.

A Well Practiced Design

The Dust Storm Alert appears to override Wall-E’s main window onto the world: his eyes. This is done to warn him of a very serious event, one that could damage him or permanently shut him down. What is interesting is that the visual response isn’t what registers first. Instead, we hear the audio alert, and only then does Wall-E’s eye-view show the visual alert.

Given the order of the two parts of the alert, Wall-E’s designers considered the audible part the most important piece of information. It comes first, is omnidirectional as well as loud enough for everyone to hear, and is followed by more explicit information.

[Image: WallE-DustStorm01]

Equal Opportunity Alerts

By sounding the audible alert first, all Wall-E units, other robots, and people in the area are warned of a major event. Then the Wall-E units are given the additional information, like range and direction, that they need to act. Whether because of training or pre-programmed instructions, Wall-E’s visual alert does not actually tell him what it is for, or what action he should take to be safe; he is expected to already know. This is similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.
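A sketch of that two-step sequence, assuming the detailed part of the alert carries range and bearing (all names and values here are invented):

```python
def sound_external_alarm() -> None:
    """Loud, omnidirectional audio: warns every 'bot and human in earshot."""
    print("BEEP BEEP BEEP (external speakers)")

def show_hud_overlay(message: str) -> None:
    """Directed visual detail, only for the Wall-E unit itself."""
    print("HUD:", message)

def dust_storm_alert(range_km: float, bearing_deg: int) -> None:
    # Step 1: general audible warning first.
    sound_external_alarm()
    # Step 2: the explicit details (range and direction) follow.
    show_hud_overlay(f"DUST STORM  range {range_km} km  bearing {bearing_deg} deg")

dust_storm_alert(range_km=2.5, bearing_deg=270)
```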

For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.

Why Not Network It?

Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E was able to see the dust cloud for himself, this feels like very short notice. Too short notice. A good improvement to the system would be a connection up to a weather satellite in orbit, or a weather broadcast in the city. This would allow him to be pre-warned and take shelter well before any of the storm hits, protecting him and his solar collectors.
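A sketch of that suggestion, assuming a hypothetical forecast feed alongside the existing local detection (both functions are invented stand-ins):

```python
# Sketch: pre-warning from a forecast feed, with the existing local
# detection kept as a fallback.

def fetch_forecast() -> dict:
    """Pretend satellite or city-broadcast feed."""
    return {"storm_expected": True, "hours_until_arrival": 3.0}

def local_sensor_detects_storm() -> bool:
    """The behavior we see in the film: triggers only once the cloud is visible."""
    return False

def should_take_shelter() -> bool:
    forecast = fetch_forecast()
    early_warning = forecast["storm_expected"] and forecast["hours_until_arrival"] <= 4
    return early_warning or local_sensor_detects_storm()

if should_take_shelter():
    print("Finish the night cycle early and seal the transport.")
```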

Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.

Klaatunian interior

[Image: DtESS-034]

When the camera first follows Klaatu into the interior of his spaceship, we witness the first gestural interface seen in the survey. To turn on the lights, Klaatu places his hands in the air before a double column of small lights embedded in the wall to the right of the door. He holds his hand up for a moment, and then smoothly brings it down before these lights. In response, the lights on the wall extinguish and an overhead light illuminates. He repeats this gesture on a similar double column of lights to the left of the door.

The nice thing to note about this gesture is that it is simple and easy to execute. The mapping also has a nice physical referent: When the hand goes down like the sun, the lights dim. When the hand goes up like the sun, the lights illuminate.

He then approaches an instrument panel with an array of translucent controls; like a small keyboard with extended, plastic keys. As before, he holds his hand a moment at the top of the controls before swiping his hand in the air toward the bottom of the controls. In response, the panels illuminate. He repeats this on a similar panel nearby.

Having activated all of these elements, he begins to speak in his alien tongue to a circular, strangely lit panel on the wall. (The film gives no indication as to the purpose of his speech, so no conclusions about its interface can be drawn.)

[Image: DtESS-049]

Gort also operates the translucent panels with a wave of his hand. To her credit, perhaps, Helen does not try to control the panels, but we can presume that, like the spaceship, some security mechanism prevents unauthorized control.

Missing affordances

Who knows how Klaatu perceives this panel. He’s an alien, after all. But for us mere humans, the interface is confounding. There are no labels to help us understand what controls what. The physical affordances of different parts of the panels imply sliding along the surface, touch, or turning, not gesture. Gestural affordances are tricky at best, but these translucent shapes actually signal something different altogether.

Overcomplicated workflow

And you have to wonder why he has to go through this rigmarole at all. Why must he turn on each section of the interface, one by one? Can’t they make just one “on” button? And isn’t he just doing one thing: transmitting? He doesn’t even seem to select a recipient, so presumably it’s permanently tied to HQ. Seriously, can’t he just turn it on?

Why is this UI even here?

Or better yet, can’t the microphone just detect when he’s nearby, illuminate to let him know it’s ready, and subtly confirm when it’s “hearing” him? That would be the agentive solution.

Maybe it needs some lockdown: Power

OK. Fine. If this transmission consumes a significant amount of power, then an even more deliberate activation is warranted, perhaps the turning of a key. And once on, you would expect to see some indication of the rate of power depletion and remaining power reserves, which we don’t see, so this is pretty doubtful.

Maybe it needs some lockdown: Security

This is the one concern that might warrant all the craziness. That the interface has no affordance means that Joe Human Schmo can’t just walk in and turn it on. (In fact, the misleading bits help with a plausible diversion.) The “workflow” then is actually a gestural combination that unlocks the interface and starts it recording. Even if Helen accidentally discovered the gestural aspect, there’s little to no way she could figure out those particular gestures and start placing intergalactic calls for help. And remembering that Klaatu is, essentially, a space ethics recon cop, this level of security might make sense.

Perpvision

[Image: GitS-heatvision-01]

The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show only spends time showing inputs without describing some output when the users are in the background and unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way, because the input is such a giant piece of the puzzle. Is it a brain input? Is the technology agentive? Is it some hidden input like Myo‘s muscle sensing? Not knowing the input, a regular review is kind of pointless. All I can do is list its effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.

She adjusts the view in steps to zoom closer and closer until her field of vision is filled with the two men whose conversation she hears in her earpiece. When she hears mention of Project 2501, she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she only sees these people, and not everyone in the building below this? How does she tell the algorithm that she wants to see furniture and not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there?” How did she set the audio? Why does the audio not change with each successive zoom? If they’re from separate systems, how did she combine them?

Squint gestures

If I had to speculate what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision, less some border, and then use squint gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting. Another, the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
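Sketched in Python, roughly how that might hang together (the containment hierarchy, timings, and zoom curve are all my assumptions):

```python
import math

# Invented containment hierarchy: each gaze target sits inside a larger context.
CONTAINMENT = ["person", "couch", "room", "floor", "building"]

class SquintZoom:
    HOLD_THRESHOLD_S = 0.33  # ~1/3 second switches from stepped to continuous zoom

    def __init__(self, focus: str = "building"):
        self.level = CONTAINMENT.index(focus)
        self.magnification = 1.0

    def step(self, direction: int) -> str:
        """Quick squint (-1) or widening (+1): snap to the next logical subject."""
        self.level = max(0, min(len(CONTAINMENT) - 1, self.level + direction))
        return f"framing the {CONTAINMENT[self.level]}"

    def hold(self, direction: int, seconds: float) -> str:
        """Held gesture: arbitrary zoom that ramps logarithmically with hold time."""
        if seconds < self.HOLD_THRESHOLD_S:
            return self.step(direction)
        factor = 1.0 + math.log1p(seconds)             # gentle, logarithmic ramp
        self.magnification *= factor if direction < 0 else 1.0 / factor
        return f"magnification x{self.magnification:.2f}"

zoom = SquintZoom()
print(zoom.step(-1))       # quick squint: frame the next thing down
print(zoom.hold(-1, 1.2))  # held squint: continuous zoom in
```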

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.

Red mics

[Image: TheFifthElement-redmike-001]

We saw in an earlier post how the military uses communication headsets with red LEDs in the tips of the antennas, providing a social signal about the attention of the wearer. On board the spaceship to Fhloston Paradise, the same technique is used to signal functioning microphones.

[Image: TheFifthElement-redmike-002]

[Image: TheFifthElement-redmike-003]

This simple status signal of a glowing light tells the speaker that the device is on and that their voice is being broadcast, listened to, or might be overheard.

There are two binary states here: microphone recording or not, and light on or off. The relationship could be swapped, such that the light illuminates when the device is not recording. But since the consequences of accidentally broadcasting the wrong thing are dire, it makes sense to associate the attention-getting signal with the costly state that requires attention and care.

Red appears elsewhere as a signal for a microphone or antenna, even when it’s not glowing. We see it on Korben’s wireless phone at home, on Zorg’s assistant’s headset, on Korben’s room phone aboard the Fhloston Paradise, on the handheld mic aboard Zorg’s ship, and on the President’s wireless phone. We can presume it’s a common signal pattern across all the communication technology of this world. The commonality helps signal to anyone familiar with it the purpose of an otherwise unmarked and miniaturized component.

Rhod’s rod

[Image: TheFifthElement-Rhod-011]

One of the most delightfully flamboyant characters in sci-fi is the radio star in The Fifth Element, Ruby Rhod. He wears a headpiece to hear his producers as well as to record his own voice. But to capture the voices of others, he has a technological staff that he carries.

Function

The handle of the device has a microphone built into it, and the length of the staff extends his reach toward potential interviewees. The literal in-your-face nature of the microphone matches Ruby’s in-your-face show.

[Image: TheFifthElement-Rhod-004]

To let interviewees know when they’re being recorded, a red light in the handle illuminates. This also lets others nearby know that the interviewee is “on air” and not to interrupt.

Ruby also has a single switch on the handle, a small silver toggle. It’s likely that he can set this switch to function however he likes. The one time we see it in action, he has set it to play back an “audio cut” (the sound clips morning radio talk-show hosts insert into their programs), in this case an intimate recording of the Princess of Kodar Japhet. He flips the toggle to play the cut, and flips it back when it’s done.

Here, a different input would have worked better. The toggle switch is too easy to bump, and it kind of ruins the design of the handle. Better would be a billet button. This sort of momentary button sits flush with a bezel, which prevents accidental activation from, say, a finger lying across it or the button resting against a flat surface. If Ruby wants the recorded sound to play out completely, and the button press only starts or stops the playback, it would be good to know the state of the playback, so a billet button with an LED ring would be best.

We also know that Ruby is a performer. He would be happier with not just a play button, but a way to express himself. His hand is already gripping the staff, so the control should fit that grip. If you could outfit the billet button with directional pressure sensitivity, he could assign each direction to a control. So, for instance, while he was pressing the button the audio would play, and the harder he pressed up, the more the volume for each echo would increase. Pressing down could lower the sample in tone, and so on. This would allow him to not just play the audio cut, but perform it.
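A sketch of that idea, with the parameter names and scaling entirely invented for illustration:

```python
# Sketch: one pressure-sensitive button, with directional pressure mapped to
# playback parameters so Ruby can perform the cut rather than just play it.

def perform_cut(pressed: bool, up: float, down: float) -> dict:
    """up/down are 0.0-1.0 pressure readings from the directional sensor."""
    if not pressed:
        return {"playing": False}
    return {
        "playing": True,
        "echo_volume": 0.5 + 0.5 * up,  # harder press up: louder echo
        "pitch_shift": -12 * down,      # press down: lower the sample's tone
    }

print(perform_cut(pressed=True, up=0.8, down=0.0))
print(perform_cut(pressed=True, up=0.0, down=0.4))
```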

Fashion

To work as a device that the character would want to carry, it has to match his sense of style. I mean this first in a general sense, and the device does that, with its handle of ornately carved silver. Ruby’s necklaces, bracelets, and rings are all silver, and they work together. The staff also works in his hand like a drum major’s baton, augmenting his larger-than-life presence with an attention-commanding object.

It has to fit his daily fashion as well, and the staff does that, too. The shaft can change appearance. I don’t know if it’s an e-ink-type surface, replaceable staves, or fabric sleeves that change out, but when Ruby’s in leopard print, the staff is in leopard print, too. When Ruby’s decked out in rose-adorned tuxedo black, the staff matches.

[Image: TheFifthElement-Rhod-002]

[Image: TheFifthElement-Rhod-006]

Though this is more a portable than a wearable technology, the fact that it can change to match the personal style of the wearer makes it not only functional but, since it fits his persona, desirable as well.

Good morning, Korben

[Image: 5E-alarm_notext]

Korben’s alarm clock is a transparent liquid-crystal display that juts out from a panel at the foot of his bed. When it goes off, it emits a high-pitched repetitive whine. To silence it, Korben must sit up and pinch it between his fingers.

There’s some subtle, wicked effectiveness to that deactivation. Like a regular alarm clock, the tactic is to emit some annoying sound that persists until the sleeper can rouse themselves enough to turn off the alarm. The usual problem with this tactic is that the sleeper is stupefied in his half-awakeness: if he can sleepily stop the alarm and go right back to sleep, he’ll do it. This clock dissuades sleepy flailing with its sharp-ish corners; after just a few times trying that and failing, the scratches on his hand will teach him. And even if the motion is memorized, the sleeper has to wake enough to target the clock properly and execute the simple but precise input.

The display itself shows the time in 24-hour format, “02:00”; the date, “18 MAR 2263” (director Luc Besson‘s birthday); and a temperature, “27.5° C.” Since this is quite warm, I presume it is the temperature outside.

[Image: fifthelement-fish]

Once Korben cancels the alarm, his apartment comes to life. Heavy-beat music begins to play and lights automatically illuminate near the fake-fish tank above the stove and in his cigarette dispenser.

[Image: fifthelement-cigs]

All these signals combine to make it difficult for sleepy Korben to stay in bed past when awake Korben knows he should be up and moving.

Headsets

[Image: FifthE-UFT005]

On-duty military personnel—on the ship and attending the President—all wear headsets. For personnel talking to others on the bridge, this appears to be a passive mechanism with no controls, perhaps for keeping an audio record of conversations or ensuring that everyone on the bridge can hear one another perfectly at all times.

[Image: FifthE-UFT009]

Personnel communicating both with people on the ship’s bridge and with the President have a more interesting headset.

Signaling dual-presence

The headsets have antennas rising from the right ear, each tipped with a small glowing red light. This provides a technological signal that the device is powered, but also a social signal that the wearer may be engaged in remote conversations. Voice technologies that are too small and provide no such signal risk making the speaker seem crazy. Unfortunately, this signal as designed is only visible from certain directions; a few extra centimeters of height would help. Additionally, if the light had a state to indicate when the wearer is listening to audio that others can’t hear, it would cue a person in the same room to wait a moment before getting his attention.
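That suggestion amounts to a three-state light rather than a two-state one; a minimal sketch (state names and behaviors are mine):

```python
from enum import Enum

# Sketch: antenna-light states as suggested above.
class AntennaLight(Enum):
    OFF = "dark"               # device unpowered
    POWERED = "steady red"     # on; wearer available
    LISTENING = "pulsing red"  # wearer hears remote audio others can't

def light_state(powered: bool, receiving_private_audio: bool) -> AntennaLight:
    if not powered:
        return AntennaLight.OFF
    return AntennaLight.LISTENING if receiving_private_audio else AntennaLight.POWERED

print(light_state(True, True).value)  # "pulsing red": wait before interrupting
```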

[Image: FifthE-UFT016]

Secondary conversants

Each headset has a default open connection, always on, sending and receiving to one particular conversant. In this way General Staedert can just keep talking and listening to the President. Secondary parties are available by means of light gray buttons on the earpieces. We see General Munro lift his hand and press one (or both?) of these buttons while learning about the growth rate of the evil planet.

[Image: FifthE-UFT011]

The strategy of having one default and a few secondary conversants within easy access makes a great deal of sense. Quick question and answer transactions can occur across a broad network of experts this way and get information to a core set of decision makers.

The design tactic of having buttons to access them is OK, but perhaps not optimal. Having to press the buttons means the communicator ends up mashing his own ear. The easiest thing to “press” wouldn’t be a button at all but a proximity switch that simply detects the placement of the hand. This has some particular affordance challenges, but we can presume military personnel are well-trained, expert users.
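A sketch of that arrangement, assuming a hypothetical proximity sensor near the earpiece (the channel names are invented):

```python
# Sketch: a default always-open channel plus secondary conversants patched in
# by a hand-proximity switch near the earpiece.

DEFAULT_CHANNEL = "President"
SECONDARY_CHANNELS = ["science officer", "weapons officer"]

def active_channels(hand_near_earpiece: bool, selected: int = 0) -> list:
    """The default channel stays open; holding a hand near the ear adds one more."""
    channels = [DEFAULT_CHANNEL]
    if hand_near_earpiece:
        channels.append(SECONDARY_CHANNELS[selected])
    return channels

print(active_channels(hand_near_earpiece=False))  # ['President']
print(active_channels(hand_near_earpiece=True))   # ['President', 'science officer']
```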

VP language instructor

During David’s two-year journey, part of his time is spent “deconstructing dozens of ancient languages to their roots.” We see one scene illustrating the pronunciation part of this study early in the film. As he’s eating, he sees a volumetric display of a cuboid appear high in the air opposite his seat at the table. The cuboid is filled with a cyan glow in which a “talking head” instructor takes up most of the space. On the left is a column of five still images of other artificially intelligent instructors. Each image has two vertical sliders on its left, but the meaning of these sliders is not made clear. In the upper right is an obscure diagram that looks a little like a constellation, with some inscrutable text below it.

On the right side of the cuboid projection, we see some other information in pinks, blues, and cyans. It appears to be text, bar charts, and line graphs. This information is not immediately usable to the learner, so perhaps it is material about the entire course, for when the lessons are paused: notes about progress toward a learning goal, advice for further study, or next steps. Presuming this is a general-purpose interface rather than one custom-made for David, this information could also be the student’s progress notes for an attending human instructor.

We enter the scene with the AI saying, “…Whilst this manner of articulation is attested in Indo-European descendants as a purely paralinguistic form, it is phonemic in the ancestral form dating back five millennia or more. Now let’s attempt Schleicher’s Fable. Repeat after me.”

In the lower part of the image is a waveform of the current phrase being studied. In the lower right is the written text of the phrase, in what looks like a simplified phonetic alphabet. As the instructor speaks the fable, each word is highlighted in the written form. When he is done, he prompts David to repeat it.

akʷunsəz dadkta,
hwælna nahast
təm ghεrmha
vagam ugεntha,

After David repeats it, the AI instructor smiles, nods, and looks pleased. He praises David’s pronunciation as “Perfect.”

This call and response seems on par with modern language-learning software, even down to “listening” and providing feedback. Learning and studying a language is ultimately far more complicated than this, but it would be difficult to show much more of it in such a short scene. The main novelty that this interface brings to the notion of language acquisition is the volumetric display and the hint of real-time progress notes.