Wall-E’s Audio Recorder


Each Wall-E unit has an integrated audio recorder with three buttons: Record, Play, and a third button with an orange square. We see Record and Play used several times, but the orange button is never visibly pressed. Reason and precedent suggest it is a stop function.

The original purpose of this capability is unclear. Wall-E uses it to record snippets of songs or audio clips from movies that he enjoys. There is no maximum length shown. There is no visible method to rewind, fast-forward, or seek within a soundbite, though the clips shown are short enough that this doesn’t affect Wall-E’s ability to hear what he wants to hear.



This is a very simple interface, and Wall-E is shown operating it without looking at the buttons. Since all three are relatively large, placed on the front of his chest, have physical indentations, and are physically separated, it would be possible for a person with a tactile sense to tell the buttons apart once they learned the order of the buttons.

Increasing the indentation of the symbols and adding a different texture to each would make tactile discovery even easier.


The recordings shown are also short. This makes the device excellent for capturing a contained thought, song, or event to be referenced later. Short clips mean the user of the system (in this case Wall-E) never needs to worry about where the recording is cued up, and can simply play whatever he remembers being in memory.

The Orange Button?

Unlike the Play and Record buttons, which are shown meeting standard interface practices of today, the lineup has that odd orange button that is never shown being used (except when Eve is frantically trying to wake Wall-E up, but that tells us nothing about its intended use).

My best guess, based only on its inclusion with the other two buttons, is that it represents a pause or stop function. This conjecture isn’t 100% certain because either function could easily be co-located on the play button as a dual function: push once to play, push again to pause.

So what is the orange button for?

No idea.

The lesson here is that when you’re designing an interface, make sure that each button is absolutely necessary and well placed. Given the location and tactile focus of the interface, the two most used functions (record and play) should have been larger and had distinguishable texture. The third button has a less-than-obvious purpose, meaning that any humans attempting to use it in the far future will need to use trial and error to understand what it’s for.

Two of the buttons are easy to understand. But designers for this system would want to make sure that a person with no access to documentation could quickly understand the third button through immediate feedback and a function that is non-destructive to the data stored in the audio recording.
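The co-location idea above (Play doubling as Pause, with the third button as a dedicated Stop) can be sketched as a tiny state machine. This is purely illustrative: the film never reveals the orange button’s function, so the stop behavior here is an assumption.

```python
# Hypothetical sketch: a three-button recorder where Play doubles as
# Pause, which would make a dedicated pause button redundant.

class Recorder:
    def __init__(self):
        self.state = "idle"  # idle | recording | playing | paused

    def press_record(self):
        # Record always wins: it starts capturing a fresh clip.
        self.state = "recording"

    def press_play(self):
        # Dual function: toggles between playing and paused.
        if self.state == "playing":
            self.state = "paused"
        else:
            self.state = "playing"

    def press_stop(self):
        # One guess at the orange button: return to idle from any state,
        # without destroying the stored clip.
        self.state = "idle"
```

With this model, the orange button is only strictly necessary if the designers wanted a non-destructive “abandon whatever you’re doing” action distinct from pausing.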


Dust Storm Alert


While preparing for his night cycle, Wall-E is standing at the back of his transport/home. On the transport’s rear drop-door, he is cleaning out his collection cooler. In the middle of this ritual, an alert sounds from his external speakers. Concerned by the sound, Wall-E looks up to see a dust storm approaching. After seeing this, he hurries to finish cleaning his cooler and seal the door of the transport.

A Well Practiced Design

The Dust Storm Alert appears to override Wall-E’s main window into the world: his eyes. This is done to warn him of a very serious event that could damage him or permanently shut him down. What is interesting is that he doesn’t appear to register a visual response first. Instead, we first hear the audio alert, then Wall-E’s eye-view shows the visual alert afterward.

Given the order of the two parts of the alert, the audible part was considered the most important piece of information by Wall-E’s designers. It comes first, is omnidirectional as well as loud enough for everyone to hear, and is followed by more explicit information.


Equal Opportunity Alerts

By having the audible alert first, all Wall-E units, other robots, and people in the area would be alerted of a major event. Then, the Wall-E units would be given the additional information like range and direction that they need to act. Either because of training or pre-programmed instructions, Wall-E’s vision does not actually tell him what the alert is for, or what action he should take to be safe. This could also be similar to tornado sirens, where each individual is expected to know where they are and what the safest nearby location is.

For humans working alongside Wall-E units, each person should have their own heads-up display, likely similar to a Google Glass device. When a Wall-E unit gets a dust storm alert, the human could then receive a sympathetic alert and guidance to the nearest safe area. Combined with regular training and storm drills, people in the wastelands of Earth would then know exactly what to do.

Why Not Network It?

Whether by luck or proper programming, the alert is triggered with just enough time for Wall-E to get back to his shelter before the worst of the storm hits. Given that the alert didn’t trigger until Wall-E could see the dust cloud for himself, this feels like very short notice. Too short. A good improvement to the system would be a connection to a weather satellite in orbit, or a weather broadcast in the city. This would allow him to be forewarned and take shelter well before any of the storm hits, protecting him and his solar collectors.
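The networked improvement suggested above amounts to triggering on forecast data rather than line-of-sight. A minimal sketch, with all numbers invented for illustration:

```python
# Hypothetical sketch: with a satellite or broadcast feed, the alert can
# fire based on the storm's reported distance and speed, adding a safety
# margin beyond the time needed to reach shelter.

def should_alert(storm_distance_km, storm_speed_kmh,
                 time_to_shelter_min, margin_min=30):
    """Return True while there is still shelter time plus a margin."""
    if storm_speed_kmh <= 0:
        return False  # storm is stationary or receding
    arrival_min = storm_distance_km / storm_speed_kmh * 60
    return arrival_min <= time_to_shelter_min + margin_min
```

A line-of-sight trigger effectively sets the margin to near zero; a forecast feed lets the designer choose it deliberately.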

Other than this, the alert system is effective. It warns Wall-E of the approaching storm in time to act, and it also warns everyone in the local vicinity of the same issue. While the alert doesn’t inform everyone of what is happening, at least one actor (Wall-E) knows what it means and knows how to react. As with any storm warning system, having a connection that can provide forecasts of potentially dangerous weather would be a huge plus.

Klaatunian interior


When the camera first follows Klaatu into the interior of his spaceship, we witness the first gestural interface seen in the survey. To turn on the lights, Klaatu places his hands in the air before a double column of small lights embedded in the wall to the right of the door. He holds his hand up for a moment, and then smoothly brings it down before these lights. In response, the lights on the wall extinguish and an overhead light illuminates. He repeats this gesture on a similar double column of lights to the left of the door.

The nice thing to note about this gesture is that it is simple and easy to execute. The mapping also has a nice physical referent: When the hand goes down like the sun, the lights dim. When the hand goes up like the sun, the lights illuminate.

He then approaches an instrument panel with an array of translucent controls; like a small keyboard with extended, plastic keys. As before, he holds his hand a moment at the top of the controls before swiping his hand in the air toward the bottom of the controls. In response, the panels illuminate. He repeats this on a similar panel nearby.

Having activated all of these elements, he begins to speak in his alien tongue to a circular, strangely lit panel on the wall. (The film gives no indication as to the purpose of his speech, so no conclusions about its interface can be drawn.)


Gort also operates the translucent panels with a wave of his hand. To her credit, perhaps, Helen does not try to control the panels, but we can presume that, like the spaceship, some security mechanism prevents unauthorized control.

Missing affordances

Who knows how Klaatu perceives this panel. He’s an alien, after all. But for us mere humans, the interface is confounding. There are no labels to help us understand what controls what. The physical affordances of different parts of the panels imply sliding along the surface, touch, or turning, not gesture. Gestural affordances are tricky at best, but these translucent shapes actually signal something different altogether.

Overcomplicated workflow

And you have to wonder why he has to go through this rigmarole at all. Why must he turn on each section of the interface, one by one? Can’t they make just one “on” button? And isn’t he just doing one thing: Transmitting? He doesn’t even seem to select a recipient, so it’s tied to HQ. Seriously, can’t he just turn it on?

Why is this UI even here?

Or better yet, can’t the microphone just detect when he’s nearby, illuminate to let him know it’s ready, and subtly confirm when it’s “hearing” him? That would be the agentive solution.

Maybe it needs some lockdown: Power

OK. Fine. If this transmission consumes a significant amount of power, then an even more deliberate activation is warranted, perhaps the turning of a key. And once on, you would expect to see some indication of the rate of power depletion and remaining power reserves, which we don’t see, so this is pretty doubtful.

Maybe it needs some lockdown: Security

This is the one concern that might warrant all the craziness. That the interface has no affordance means that Joe Human Schmo can’t just walk in and turn it on. (In fact, the misleading bits help with a plausible diversion.) The “workflow” then is actually a gestural combination that unlocks the interface and starts it recording. Even if Helen accidentally discovered the gestural aspect, there’s little to no way she could figure out those particular gestures and start intergalactic calls for help. And remembering that Klaatu is, essentially, a space ethics recon cop, this level of security might make sense.



The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show rarely spends time showing inputs without showing some output, and then only when the users are in the background and unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way because the input is such a giant piece of the puzzle. Is it a brain interface? Is the technology agentive? Is it some hidden input like Myo‘s muscle sensing? Not knowing the input, a regular review is kind of pointless. All I can do is list the effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.

She adjusts the view by steps to zoom closer and closer until her field of vision is filled with the two men conversing in her earpiece. When she hears mention of Project 2501 she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button, to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she only sees these people, and not everyone in the building below this? How does she tell the algorithm that she wants to see furniture and not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there?” How did she set the audio? Why does the audio not change with each successive zoom? If they’re from separate systems, how did she combine them?

Squint gestures

If I had to speculate what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision less some border, and then use squint-gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting. Another the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
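The two zoom modes proposed above (smart steps to the next logical subject, plus a held gesture for arbitrary logarithmic zoom) can be sketched roughly. The step factor, hold threshold, and growth rate are all invented for illustration:

```python
import math

# Hypothetical sketch of the squint-gesture zoom proposed above.

HOLD_THRESHOLD_S = 0.33  # seconds a gesture must be held to enter arbitrary zoom

def stepped_zoom(current_zoom, squint):
    """Discrete mode: a quick squint steps in, a quick widening steps out."""
    step = 2.0
    return current_zoom * step if squint else current_zoom / step

def arbitrary_zoom(base_zoom, hold_seconds, zoom_in=True):
    """Continuous mode: while the gesture is held, zoom changes with a
    logarithmic growth curve so long holds stay controllable."""
    factor = 1.0 + math.log1p(hold_seconds)
    return base_zoom * factor if zoom_in else base_zoom / factor
```

The point of the logarithmic curve is that the first moments of a hold move quickly toward the subject, while extended holds refine rather than overshoot.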

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.

Red mics


We saw in an earlier post how the military uses communication headsets with red LEDs in the tips of the antennas that provide a social signal about the attention of its wearer. On board the spaceship to Fhloston Paradise, the same technique is used to signal functioning microphones.



The simple glowing status light signals to the speaker that the device is on and that their voice is being broadcast, listened to, or might be overheard.

These are two binary states: microphone recording/not, light on/off, and the relationship could be swapped such that the light illuminates when the device is not recording. But since the consequences of accidentally broadcasting the wrong thing are dire, it makes sense to associate the attention-getting signal with the costly state that requires attention and care.

The red appears elsewhere as a signal for a microphone or antenna, even when it’s not glowing. We see it on Korben’s wireless phone at home, on Zorg’s assistant’s headset, on Korben’s room phone aboard the Fhloston Paradise, on the handheld mic aboard Zorg’s ship, and on the President’s wireless phone. We can presume it’s a signal pattern common across all the communication technology of this world. The commonality helps signal to anyone familiar with it the purpose of an otherwise unmarked and miniaturized component.

Rhod’s rod


One of the most delightfully flamboyant characters in sci-fi is the radio star in The Fifth Element, Ruby Rhod. He wears a headpiece to hear his producers as well as to record his own voice. But to capture the voices of others, he has a technological staff that he carries.


The handle of the device has a microphone built into it. Because of the length of the staff, his reach to potential interviewees is extended. The literal in-your-face nature of the microphone matches Ruby’s in-your-face show.


To let interviewees know when they’re being recorded, a red light in the handle illuminates. This also lets others nearby know that the interviewee is “on air” and not to interrupt.

Ruby also has a single switch on the handle. It’s a small silver toggle. It’s likely that he can set this switch to function as he likes. The one time we see it in action, he has set it to play back an “audio cut” (the sound clips morning radio talk-show hosts insert into their programs), in this case an intimate recording of the Princess of Kodar Japhet. He flips the toggle to play the cut, and flips it back when it’s done.

Here, a different input would have worked better. The toggle switch is too easy to bump and kind of ruins the design of the handle. Better would be a billet button. This sort of momentary button sits flush with a bezel, which prevents accidental activation from, say, a finger laying across it, or resting the button against a flat surface. If Ruby wants the recorded sound to play out completely, and the button press only starts or stops the playback, it would be good to know the state of the playback, and a billet button with an LED ring would be best.

We also know that Ruby is a performer. He would be happier with more than a play button: a way to express himself. His hand is already gripping the staff, so the control should fit that grip. If you could outfit the billet button with directional pressure sensitivity, he could assign each direction to a control. So, for instance, while he pressed the button the audio would play; the harder he pressed upward, the more the volume would increase, and pressing downward could lower the sample in tone, and so on. This would allow him not just to play the audio cut, but to perform it.
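The directional-pressure idea above amounts to mapping a pressure vector onto playback parameters. A minimal sketch, with the parameter ranges invented for illustration:

```python
# Hypothetical sketch: while the billet button is held, vertical pressure
# (each axis normalized to 0.0-1.0) maps to live playback controls.

def playback_params(pressure_up, pressure_down):
    """Map directional pressure to playback controls.

    Pressing up raises volume above the 1.0 baseline; pressing down
    shifts pitch downward (in semitones, up to an octave), letting the
    performer bend the cut as it plays.
    """
    volume = 1.0 + pressure_up
    pitch_shift = -12.0 * pressure_down
    return {"volume": volume, "pitch_shift": pitch_shift}
```

Because both axes are continuous, the same grip gives Ruby expressive range instead of a binary on/off.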


To work as a device that the character would want to carry, it has to match his sense of style. I mean this first in a general sense, and the device does that, with its handle of ornately carved silver. Ruby’s necklaces, bracelets, and rings are all silver, and they work together. The staff also works in his hand like a drum major’s baton, augmenting his larger-than-life presence with an attention-commanding object.

It has to fit his daily fashion as well, and the staff does that, too. The shaft can change appearance. I don’t know if it’s an e-ink-type surface, replaceable staves, or fabric sleeves that change out, but when Ruby’s in leopard print, the staff is in leopard print, too. When Ruby’s decked out in rose-adorned tuxedo black, the staff matches.



Though this is more a portable than a wearable technology, the fact that it can change to match the personal style of the wearer makes it not only functional, but since it fits his persona, desirable as well.

Good morning, Korben


Korben’s alarm clock is a transparent liquid-crystal display that juts out from a panel at the foot of his bed. When it goes off, it emits a high-pitched repetitive whine. To silence it, Korben must sit up and pinch it between his fingers.

There’s some subtle, wicked effectiveness to that deactivation. Like a regular alarm clock, the tactic is to emit an annoying sound that persists until the sleeper rouses enough to turn the alarm off. The usual problem with this tactic is that the sleeper is stupefied in his half-awakeness. If he can sleepily stop the alarm and just go back to sleep, he’ll do it. This clock dissuades sleepy flailing with its sharp-ish corners: after just a few attempts at flailing, the scratches on his hand will teach him. Even if the motion is memorized, the sleeper has to wake enough to target the clock properly and execute the simple but precise pinch.

The display itself shows the time in astronomical format, i.e. “02:00”; the date (director Luc Besson‘s birthday), “18 MAR 2263”; and a temperature, “27.5° C.” Since this is quite warm, I presume this is the temperature outside.


Once Korben cancels the alarm, his apartment comes to life. Heavy-beat music begins to play and lights automatically illuminate near the fake-fish tank above the stove and in his cigarette dispenser.


All these signals combine to make it difficult for sleepy Korben to stay in bed past when awake Korben knows he should be up and moving.



On-duty military personnel, both on the ship and attending the President, all wear headsets. For personnel talking to others on the bridge, this appears to be a passive mechanism with no controls, perhaps for keeping an audio record of conversations or ensuring that everyone on the bridge can hear one another perfectly at all times.


Personnel communicating with people both on the ship’s bridge and the president have a more interesting headset.

Signaling dual-presence

The headsets have antennas rising from the right ear, and each is tipped with a small glowing red light. This provides a technological signal that the device is powered, but also a social signal that the wearer may be engaged in remote conversations. Voice technologies that are too small and don’t provide the signal risk the speaker seeming crazy. Unfortunately this signal as it’s designed is only visible from certain directions. A few extra centimeters of height would help this be more visible. Additionally, if the light could have a state to indicate when the wearer is listening to audio input that others can’t hear, it would provide a person in the same room a cue to wait a moment before getting his attention.
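The extra light state suggested above (the film only shows a steady red glow) would give the antenna light three values. A sketch of that assumed mapping:

```python
from enum import Enum

# Hypothetical sketch of the extended antenna-light states proposed
# above; only the steady state is actually shown in the film.

class AntennaLight(Enum):
    OFF = "off"          # headset unpowered
    STEADY = "steady"    # powered; wearer may be in a remote conversation
    PULSING = "pulsing"  # wearer is hearing audio others can't

def light_state(powered: bool, receiving_private_audio: bool) -> AntennaLight:
    if not powered:
        return AntennaLight.OFF
    return (AntennaLight.PULSING if receiving_private_audio
            else AntennaLight.STEADY)
```

The pulsing state is the social cue: a bystander seeing it knows to wait a beat before interrupting.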


Secondary conversants

Each headset has a default open connection, which is always on, sending and receiving to one particular conversant. In this way General Staedert can just keep talking and listening to the President. Secondary parties are available by means of light gray buttons on the earpieces. We see General Munro lift his hand and press (one/both of?) these buttons while learning about the growth rate of the evil planet.


The strategy of having one default and a few secondary conversants within easy access makes a great deal of sense. Quick question and answer transactions can occur across a broad network of experts this way and get information to a core set of decision makers.

The design tactic of having buttons to access them is OK, but perhaps not optimal. Having to press the buttons means the communicator ends up mashing his ear. The easiest thing to “press” wouldn’t be a button at all but a proximity switch that simply detects the placement of a hand. This has some particular affordance challenges, but we can presume military personnel are well trained and expert users.

VP language instructor

During David’s two-year journey, part of his time is spent “deconstructing dozens of ancient languages to their roots.” We see one scene illustrating the pronunciation part of this study early in the film. As he’s eating, he sees a volumetric display of a cuboid appear high in the air opposite his seat at the table. The cuboid is filled with a cyan glow in which a “talking head” instructor takes up most of the space. On the left is a column of five still images of other artificially intelligent instructors. Each image has two vertical sliders on the left, but the meaning of these sliders is not made clear. In the upper right is an obscure diagram that looks a little like a constellation, with some inscrutable text below it.

On the right side of the cuboid projection, we see some other information in pinks, blues, and cyans. This information appears to be text, bar charts, and line graphs. It is not immediately usable to the learner, so perhaps it is material about the entire course, for when the lessons are paused: notes about progress toward a learning goal, advice for further study, or next steps. Presuming this is a general-purpose interface rather than a custom one made just for David, this information could be the student’s progress notes for an attending human instructor.

We enter the scene with the AI saying, “…Whilst this manner of articulation is attested in Indo-European descendants as a purely paralinguistic form, it is phonemic in the ancestral form dating back five millennia or more. Now let’s attempt Schleicher’s Fable. Repeat after me.”

In the lower part of the image is a waveform of the current phrase being studied. In the lower right is the written text of the phrase being studied, in what looks like a simplified phonetic alphabet. As the instructor speaks this fable, each word is highlighted in the written form. When he is done, he prompts David to repeat it.

akʷunsəz dadkta,
hwælna nahast
təm ghεrmha
vagam ugεntha,

Military communication

All telecommunications in the film are based on either a public address or a two-way radio metaphor.

Commander Adams addresses the crew.

To address the crew from inside the ship, Commander Adams grabs the microphone from its holder on the wall. Its long handle makes it easy to grab. By speaking into the lit, transparent circle mounted to one end, his voice is automatically broadcast across the ship.

Commander Adams lets Chief Quinn know he’s in command of the ship.

Quinn listens for incoming signals.

The two-way radio on his belt is routed through the communications officer back at the ship. To use it, he unclips the small cylindrical microphone from its clip, flips a small switch at the base of the box, and pulls the microphone on its tether close to his mouth to speak. When the device is active, a small array of lights on the box illuminates.

Confirming their safety by camera, Chief Quinn gets an eyeful of Alta.

The microphone also has a video camera within it. When Chief Quinn asks Commander Adams to “activate the viewer,” he does so by turning the device such that its small end faces outwards, at which time it acts as a camera, sending a video signal back to the ship, to be viewed on the “view plate.”

The Viewplate is used frequently to see outside the ship.

Altair IV looms within view.

The Viewplate is a large video screen with rounded edges that is mounted to a wall off the bridge. To the left of it, three analog gauges are arranged in a column, above two lights and a stack of sliders. These are not used during the film.

Commander Adams engages the Viewplate to look for Altair IV.

The Viewplate is controlled by a wall-mounted panel with a very curious placement. When Commander Adams rushes to adjust it, he steps to the panel and adjusts a few horizontal sliders, while craning around a cowling station to see if his tweaks are having the desired effect. When he’s fairly sure it’s correct, he has to step away from the panel to get a better view and make sure. There is no excuse for this poor placement.