Fritzes 2026 bonus award: Best Robots

The Fritzes award honors the best interfaces in a full-length motion picture in the past year. Interfaces play a special role in our movie-going experience, and are a craft all their own that does not otherwise receive focused recognition.

The 2026 Award for Best Robots: The Electric State

The Fritzes has been tracking robots in cinema for a few years now. My favorite from 2025 is The Electric State, a Netflix adaptation of Simon Stålenhag’s luscious illustrated novel of the same name. Some of the robots in the film are lifted directly from his illustrations. So this award partly goes to you, Simon.

A futuristic landscape featuring a massive, rusted robot sculpture in an urban setting, with two figures standing in front of it. Cars are parked nearby under a bridge, with mountains visible in the background and a clear sky above.
A whimsical landscape featuring a large, rusty robot figure lying in a desert setting, surrounded by sparse vegetation and mountains in the background under a blue sky.

But in the movie they are animated and voiced, and there are new ones as well, so it is its own thing. It has Chris Pratt, who is problematic for offscreen reasons, and the script can be somewhat tropey, but the film has nifty world building. In the diegesis, sentient robots are seen as enemies of the state and excommunicated to form their own outcast cities. The design of the robots betrays their capitalist origins: mascots and advertisements, job-tailored bots. They are quirky and charming, come in all sizes, and help critique a system that fully deserves it.

A futuristic desert scene featuring various robotic characters and a dilapidated building with the sign 'SEARS'. Numerous robots are depicted interacting and exploring the area, amidst rocky cliffs in the background.

Also check out: Superman!

James Gunn’s first DC movie brought Superman to life and added some things to its lore, such as: Kal-El has four service robots that support him in his Fortress of Solitude. They’re just called Superman Robots at first; their chest plates identify them by number: 1, 4, 5, and 12. They’re on the far side of the canny rise, one-eyed and very much robotic, with charming banter. At the end of the movie, after the Fortress is rebuilt, number 4 dons a cape and chooses a name, and that name is Gary. Gary’s just a mensch “with no emotional capacity whatsoever”. (And that frankness is why I like Gary.)

Also check out: M3gan 2.0!

One of the smart things the M3gan franchise does in its diegesis is establish that AI and robotic housings are not tightly bound. AI can slip out of a housing, replicate itself, find new embodiments on the network, manage multiple embodiments, coordinate disparate housings, etc. Over the course of the movie, we see M3gan and her nemesis AMELIA in many kinds of robot bodies in many states of development. My favorite is the cute little toy Gemma puts M3gan into while figuring out whether the AI can be trusted.

A small, friendly-looking robot with a teal body and large expressive eyes, standing on a cluttered workspace.

This decoupling is an important difference in AI capabilities that doesn’t jibe with our anthropocentric models. Humans and animals can’t do that, so it’s something that bears literacy.

Shout out to the Act III robot design for AMELIA that references Hajime Sorayama’s illustrations from the 80s and 90s, because reference!

Also check out: Section 31!

Near the end of the film, Garrett finds a Droom doll in the hold of a garbage scow they’ve commandeered. The doll has sensors to detect its context, and actuators to move the arms, head, and mouth. Its three eyes can illuminate. It has speech generation and, as we discover, general reasoning capabilities. When Garrett first finds it, it says, “Hi there! I’m so glad you found me!” It suggests play time with, “Shall we do something fun together?” and spins its head around, whipping its indigo-colored hair in circles.

Garrett pours acid on its volatile power source to turn it into a bomb, and it begins to malfunction, uttering child-friendly things like “We can be friends forever” alongside dark things like, “We’re all gonna die! We’re all gonna die!” It is released from the ship to explode in space, destroying a pursuing vessel.

The conclusion that “we’re all gonna die” is immediately true in the diegesis, not just in the morbid, general sense of that same truth. But reaching this conclusion depends not just on context, but on general causal reasoning: my decaying battery is going to explode and destroy everything and everyone around it, so I’m going to shout that fact. Note it does not actually issue a warning for the owner to flee, which it should do, but we can chalk that up to malfunction. It hints that the Droom are a species with vast technological resources but troublingly weak risk assessment. All from a tiny little robot with mere seconds of screen time.

Next up: The best assistants of 2025

Sci-fi Spacesuits: Biological needs

Spacesuits must support the biological functioning of the astronaut. There are probably damned fine psychological reasons not to show astronauts their own biometric data while on stressful extravehicular missions, but there is still the issue of comfort. Even if temperature, pressure, humidity, and oxygen levels are kept within safe ranges by automatic features of the suit, there is still a need for comfort and control inside of that range. If the suit is to be worn a long time, there must be some accommodation for food, water, urination, and defecation. Additionally, the medical and psychological status of the wearer should be monitored to warn of stress states and emergencies.

Unfortunately, the survey doesn’t reveal any interfaces being used to control temperature, pressure, or oxygen levels. There are some for low oxygen level warnings and testing conditions outside the suit, but these are more outputs than interfaces where interactions take place.

There are also no nods to toilet necessities, though in fairness Hollywood eschews this topic a lot.

The one example of sustenance seen in the survey appears in Sunshine, where we see Captain Kaneda take a sip from his drinking tube while performing a dangerous repair of the solar shields. This is the only food or drink seen in the survey, and it is a simple mechanical interface, held in place by material strength in such a way that he needs only to tilt his head to take a drink.

Similarly, in Sunshine, when Capa and Kaneda perform EVA to repair broken solar shields, Cassie tells Capa to relax because he is using up too much oxygen. We see a brief view of her bank of screens that include his biometrics.

Remote monitoring of people in spacesuits is common enough to be a trope, and has already been discussed in the Medical chapter of Make It So; see that chapter for more on biometrics in sci-fi.

Crowe’s medical monitor in Aliens (1986).

There are some non-interface biological signals for observers. In the movie Alien, as the landing party investigates the xenomorph eggs, we can see that the suit outgases something like steam—slower than exhalations, but regular. Though not presented as such, the suit certainly confirms for any onlooker that the wearer is breathing and the suit is functioning.

Given that sci-fi technology glows, it is no surprise to see that lots and lots of spacesuits have glowing bits on the exterior. Though nothing yet in the survey tells us what these lights might be for, it stands to reason that one purpose might be as a simple and immediate line-of-sight status indicator. When things are glowing steadily, it means the life support functions are working smoothly. A blinking red alert on the surface of a spacesuit could draw attention to the individual with the problem, and make finding them easier.
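To make that status-indicator idea concrete, here is a minimal sketch of a line-of-sight light policy. The telemetry names, thresholds, and light states are all invented for illustration; no real suit spec is being quoted.

```python
# Hypothetical policy for an exterior suit status light: steady glow when
# life support is nominal, blinking red when something needs attention.
# All field names and thresholds below are illustrative assumptions.

def surface_light_state(telemetry):
    """Map life-support telemetry to an exterior light state."""
    alarms = []
    if telemetry["o2_kpa"] < 19.0:                     # assumed low-oxygen floor
        alarms.append("low O2")
    if not 29.0 <= telemetry["pressure_kpa"] <= 35.0:  # assumed safe band
        alarms.append("pressure out of range")
    if telemetry["co2_kpa"] > 1.0:                     # assumed scrubber limit
        alarms.append("CO2 scrubber")
    if alarms:
        return ("blink-red", alarms)  # draws line-of-sight attention to the wearer
    return ("steady-glow", [])        # all life support nominal

print(surface_light_state({"o2_kpa": 21.0, "pressure_kpa": 30.0, "co2_kpa": 0.4}))
# → ('steady-glow', [])
```

The point of the sketch is the asymmetry: nominal states collapse into one calm signal, while any fault produces a distinct, attention-grabbing one that also travels with the person who has the problem.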

Emergency deployment

One nifty thing that sci-fi can do (but we can’t yet in the real world) is deploy biology-protecting tech at the touch of a button. We see this in the Marvel Cinematic Universe with Star-Lord’s helmet.

If such tech were available, you’d imagine it would have some smart sensors to know when it must automatically deploy (sudden loss of oxygen or dangerous impurities in the air), but we don’t see that in the film. Still, given this speculative tech, one can imagine it working for a whole spacesuit and not just a helmet. It might speed up scenes like this.

What do we see in the real world?

Are there real-world controls that sci-fi is missing? Let’s turn to NASA’s space suits to compare.

The Primary Life-Support System (PLSS) is the complex spacesuit subsystem that provides the life support to the astronaut, and biomedical telemetry back to control. Its main components are the closed-loop oxygen-ventilation system for cycling and recycling oxygen, the moisture (sweat and breath) removal system, and the feedwater system for cooling.

The only “biology” controls that the spacewalker has for these systems are a few on the Display and Control Module (DCM) on the front of the suit. They are the cooling control valve, the oxygen actuator slider, and the fan switch. Only the first is explicitly to control comfort. Other systems, such as pressure, are designed to maintain ideal conditions automatically. Other controls are used for contingency systems for when the automatic systems fail.

Hey, isn’t the text on this thing backwards? Yes, because astronauts can’t look down from inside their helmets, and must view these controls via a wrist mirror. More on this later.

The suit is insulated thoroughly enough that the astronaut’s own body heats the interior, even in complete shade. Because the astronaut’s body constantly adds heat, the suit must be cooled. To do this, the suit cycles water through a Liquid Cooling and Ventilation Garment, which has a fine network of tubes held close to the astronaut’s skin. Water flows through these tubes and past a sublimator that cools the water with exposure to space. The astronaut can increase or decrease the speed of this flow, and thereby the degree to which his body is cooled, via the cooling control valve: a recessed radial valve with fixed positions between 0 (the hottest) and 10 (the coolest), located on the front of the Display and Control Module.

The spacewalker does not have EVA access to her biometric data. Sensors measure oxygen consumption and electrocardiograph data and broadcast it to the Mission Control surgeon, who monitors it on her behalf. So whatever the reason is, if it’s good enough for NASA, it’s good enough for the movies.


Back to sci-fi

So, we do see temperature and pressure controls on suits in the real world, which underscores their absence in sci-fi. But, if there hasn’t been any narrative or plot reason for such things to appear in a story, we should not expect them.

8 Reasons The Voight-Kampff Machine is shit (and a redesign to fix it)

Distinguishing replicants from humans is a tricky business. Since they are indistinguishable biologically, it requires an empathy test, during which the subject hears empathy-eliciting scenarios and is watched carefully for telltale signs such as, “capillary dilation—the so-called blush response…fluctuation of the pupil…involuntary dilation of the iris.” To aid the blade runner in this examination, they use a portable machine called the Voight-Kampff machine, named, presumably, for its inventors.

The device is the size of a thick laptop computer, and rests flat on the table between the blade runner and subject. When the blade runner prepares the machine for the test, they turn it on, and a small adjustable armature rises from the machine, the end of which is an intricate piece of hardware, housing a powerful camera, glowing red.

The blade runner trains this camera on one of the subject’s eyes. Then, while reading from the playbook of scenarios, they keep watch on a large monitor, which shows a magnified image of the subject’s eye. (Ostensibly, anyway. More on this below.) A small bellows on the subject’s side of the machine raises and lowers. On the blade runner’s side of the machine, a row of lights reflects the volume of the subject’s speech. Three square, white buttons sit to the right of the main monitor. In Leon’s test we see Holden press the leftmost of the three, and the iris in the monitor becomes brighter, illuminated from some unseen light source. The purpose of the other two square buttons is unknown. Two smaller monochrome monitors sit to the left of the main monitor, showing moving but otherwise inscrutable forms of information.

In theory, the system allows the blade runner to more easily watch for the minute telltale changes in the eye and blush response, while keeping a comfortable social distance from the subject. Substandard responses reveal a lack of empathy and thereby a high probability that the subject is a replicant. Simple! But on review, it’s shit. I know this is going to upset fans, so let me enumerate the reasons, and then propose a better solution.

-2. Wouldn’t a genetic test make more sense?

If the replicants are genetically engineered for short lives, wouldn’t a genetic test make more sense? Take a drop of blood and look for markers of incredibly short telomeres or something.

-1. Wouldn’t an fMRI make more sense?

An fMRI would reveal empathic responses in the inferior frontal gyrus, or cognitive responses in the ventromedial prefrontal cortex. (The brain structures responsible for these responses.) Certainly more expensive, but more certain.

0. Wouldn’t a metal detector make more sense?

If you are testing employees to detect which ones are the murdery ones and which ones aren’t, you might want to test whether they are bringing a tool of murder with them. Because once they’re found out, they might want to murder you. This scene should be rewritten such that Leon leaps across the desk and strangles Holden, IMHO. It would make him, and other blade runners, seem much more feral and unpredictable.

(OK, those aren’t interface issues but seriously wtf. Onward.)

1. Labels, people

Controls need labels. Especially when the buttons have no natural affordance and the costs of experimentation to discover the function are high. Remembering the functions of unlabeled controls adds to the cognitive load for a user who should be focusing on the person across the table. At least an illuminated button helps signal its state, so that, at least, is something.

2. It should be less intimidating

The physical design is quite intimidating: The way it puts a barrier in between the blade runner and subject. The fact that all the displays point away from the subject. The weird intricacy of the camera, its ominous HAL-like red glow. Regular readers may note that the eyepiece is red-on-black and pointy. That is to say, it is aposematic. That is to say, it looks evil. That is to say, intimidating.

I’m no emotion-scientist, but I’m pretty sure that if you’re testing for empathy, you don’t want to complicate things by introducing intimidation into the equation. Yes, yes, yes, the machine works by making the subject feel like they have to defend themselves from the accusations in the ethical dilemmas, but that stress should come from the content, not the machine.

2a. Holden should be less intimidating and not tip his hand

While we’re on this point, let me add that Holden should be less intimidating, too. When Holden tells Leon that a tortoise and a turtle are the same thing, (Narrator: They aren’t) he happens to glance down at the machine. At that moment, Leon says, “I’ve never seen a turtle,” a light shines on the pupil and the iris contracts. Holden sees this and then gets all “ok, replicant” and becomes hostile toward Leon.

In case it needs saying: If you are trying to tell whether the person across from you is a murderous replicant, and you suddenly think the answer is yes, you do not tip your hand and let them know what you know. Because they will no longer have a reason to hide their murderyness. Because they will murder you, and then escape, to murder again. That’s like, blade runner 101, HOLDEN.

3. It should display history 

The glance moment points out another flaw in the interface. Holden happens to be looking down at the machine at that moment. If he wasn’t paying attention, he would have missed the signal. The machine needs to display the interview over time, and draw his attention to troublesome moments. That way, when his attention returns to the machine, he can see that something important happened, even if it’s not happening now, and tell at a glance what the thing was.

4. It should track the subject’s eyes

Holden asks Leon to stay very still. But people are bound to involuntarily move as their attention drifts to the content of the empathy dilemmas. Are we going to add noncompliance-guilt to the list of emotional complications? Use visual recognition algorithms and high-resolution cameras to just track the subject’s eyes no matter how they shift in their seat.

5. Really? A bellows?

The bellows doesn’t make much sense either. I don’t believe it could, at the distance it sits from the subject, help detect “capillary dilation” or “ophthalmological measurements”. But it’s certainly creepy and Terry Gilliam-esque. It adds to the pointless intimidation.

6. It should show the actual subject’s eye

The eye color that appears on the monitor (hazel) matches neither Leon’s (a striking blue) nor Rachael’s (a rich brown). Hat tip to Typeset in the Future for this observation. It’s a great review.

7. It should visualize things in ways that make it easy to detect differences in key measurements

Even if the inky, dancing black blob is meant to convey some sort of information, the shape is too organic for anyone to make meaningful readings from it. Like seriously, what is this meant to convey?

The spectrograph to the left looks a little more convincing, but it still requires the blade runner to do all the work of recognizing when things are out of expected ranges.

8. The machine should, you know, help them

The machine asks its blade runner to do a lot of work to use it. This is visual work and memory work and even work estimating when things are out of norms. But this is all something the machine could help them with. Fortunately, this is a tractable problem, using the mighty powers of logic and design.

Pupillary diameter

People are notoriously bad at estimating the sizes of things by sight. Computers, however, are good at it. Help the blade runner by providing a measurement of the thing they are watching for: pupillary diameter. (n.b. The script speaks of both iris constriction and pupillary diameter, but these are the same thing.) Keep it convincing and looking cool by having this be an overlay on the live video of the subject’s eye.

So now there’s some precision to work with. But as noted above, we don’t want to burden the user’s memory with having to remember stuff, and we don’t want them to just be glued to the screen, hoping they don’t miss something important. People are terrible at vigilance tasks. Computers are great at them. The machine should track and display the information from the whole session.

Note that the display illustrates radius, but displays diameter. That buys some efficiencies in the final interface.

Now, with the data-over-time, the user can glance to see what’s been happening and a precise comparison of that measurement over time. But, tracking in detail, we quickly run out of screen real estate. So let’s break the display into increments with differing scales.

There may be more useful increments, but microseconds and seconds feel pretty convincing, with the leftmost column compressing gradually over time to show everything from the beginning of the interview. Now the user has a whole picture to look at. But this still burdens them with noticing when these measurements are out of normal human ranges. So, let’s plot the threshold, and note when measurements fall outside of it. In this case, it feels right that replicants display less than normal pupillary dilation, so it’s a lower-boundary threshold. The interface should highlight when the measurement dips below this.
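The lower-boundary highlight described above is simple enough to sketch in code. The norm value and the sample session data are invented for illustration; the real threshold would come from whatever human baselines the fiction’s testers have collected.

```python
# Sketch of the lower-boundary threshold highlighting described above.
# The floor value and session samples are hypothetical.

HUMAN_DILATION_FLOOR_MM = 4.5  # assumed lower bound of a normal pupillary response

def flag_subnormal(samples, floor=HUMAN_DILATION_FLOOR_MM):
    """Return the (timestamp, diameter) pairs where pupillary diameter
    dips below the human norm, i.e. the moments the display should
    highlight for the blade runner."""
    return [(t, d) for t, d in samples if d < floor]

# A fabricated interview trace: (seconds into session, pupil diameter in mm)
session = [(0.0, 5.1), (0.5, 4.9), (1.0, 4.2), (1.5, 4.0), (2.0, 4.8)]
print(flag_subnormal(session))  # → [(1.0, 4.2), (1.5, 4.0)]
```

Because the machine, not the user, does the comparison, the blade runner only has to glance at the highlighted spans rather than stay vigilant over a live number.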

Blush

I think that covers everything for the pupillary diameter. The other measurement mentioned in the dialogue is capillary dilation of the face, or the “so-called blush response.” As we did for pupillary diameter, let’s also show a measurement of the subject’s skin temperature over time as a line chart. (You might think skin color is a more natural measurement, but for replicants with a darker skin tone than our two pasty examples Leon and Rachael, temperature via infrared is a more reliable metric.) For visual interest, let’s show thumbnails from the video. We can augment the image with degree-of-blush. Reduce the image to high contrast grayscale, use visual recognition to isolate the face, and then provide an overlay to the face that illustrates the degree of blush.

But again, we’re not just looking for blush changes. No, we’re looking for blush compared to human norms for the test. It would look different if we were looking for more blushing in our subject than humans, but since the replicants are less empathetic than humans, we would want to compare and highlight measurements below a threshold. In the thumbnails, the background can be colored to show the median for expected norms, to make comparisons to the face easy. (Shown in the drawing to the right, below.) If the face looks too pale compared to the norm, that’s an indication that we might be looking at a replicant. Or a psychopath.

So now we have solid displays that help the blade runner detect pupillary diameter and blush over time. But it’s not that any diameter changes or blushing is bad. The idea is to detect whether the subject has less of a reaction than norms to what the blade runner is saying. The display should be annotating what the blade runner has said at each moment in time. And since human psychology is a complex thing, it should also track video of the blade runner’s expressions as well, since, as we see above, not all blade runners are able to maintain a poker face. HOLDEN.

Anyway, we can use the same thumbnail display of the face, without augmentation. Below that we can display the waveform (because they look cool), and speech-to-text the words that are being spoken. To ensure that the blade runner’s administration of the test is not unduly influencing the results, let’s add an overlay of the ideal intonation targets. Despite evidence in the film, let’s presume Holden is a trained professional, and he does not stray from those targets, so let’s skip designing the highlight and recourse-for-infraction for now.

Finally, since they’re working from a structured script, we can provide a “chapter” marker at the bottom for easy reference later.

Now we can put it all together, and it looks like this. One last thing we can do to help the blade runner is to highlight when all the signals indicate replicant-ness at once. This signal can’t be too much, or replicants being tested would know from the light on the blade runner’s face when their jig is up, and try to flee. Or murder. HOLDEN.
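The “all signals at once” overlay is just a conjunction of the two per-signal threshold checks. A minimal sketch, with both floor values and both data series fabricated for illustration:

```python
# Sketch of the combined highlight: a column gets the subtle gray overlay
# only when pupil diameter AND blush (skin temperature) both fall below
# their human norms at the same moment. Thresholds are assumptions.

def trouble_columns(pupil_mm, skin_temp_c, pupil_floor=4.5, temp_floor=33.5):
    """Indices of time columns where both measurements are subnormal
    simultaneously, i.e. where the display should flag replicant-ness."""
    return [i for i, (p, t) in enumerate(zip(pupil_mm, skin_temp_c))
            if p < pupil_floor and t < temp_floor]

pupil = [5.0, 4.2, 4.1, 4.8]   # fabricated mm readings per column
temp  = [34.0, 33.2, 33.1, 33.0]  # fabricated °C readings per column
print(trouble_columns(pupil, temp))  # → [1, 2]
```

Requiring both signals keeps the flag conservative, which matches the goal of a subtle indicator the subject cannot read off the blade runner’s face.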

For this comp, I added a gray overlay to the column where pupillary and blush responses both indicated trouble. A visual designer would find some more elegant treatment.

If we were redesigning this from scratch, we could specify a wide display to accommodate this width. But if we are trying to squeeze this display into the existing prop from the movie, here’s how we could do it.

Note the added labels for the white squares. I picked some labels that would make sense in the context. “Calibrate” and “record” should be obvious. The idea behind “mark” is an easy button for the blade runner to press when they see something that looks weird, like when doctors manually annotate cardiograph output.

Lying to Leon

There’s one more thing we can add to the machine that would help out, and that’s a display for the subject. Recall the machine is meant to test for replicant-ness, which happens to equate to murdery-ness. A positive result from the machine needs to be handled carefully so what happens to Holden in the movie doesn’t happen. I mentioned making the positive-overlay subtle above, but we can also make a placebo display on the subject’s side of the interface.

The visual hierarchy of this should make the subject feel like its purpose is to help them, but the real purpose is to make them think that everything’s fine. Given the script, I’d say a teleprompt of the empathy dilemma should take up the majority of this display. Oh, they think, this is to help me understand what’s being said, like a closed caption. Below the teleprompt, at a much smaller scale, a bar at the bottom is the real point.

On the left of this bar, a live waveform of the audio in the room helps the subject know that the machine is testing things live. In the middle, we can put one of those bouncy fuiget displays that clutters so many sci-fi interfaces. It’s there to be inscrutable, but convince the subject that the machine is really sophisticated. (Hey, a diegetic fuiget!) Lastly—and this is the important part—an area shows that everything is “within range.” This tells the subject that they can be at ease. This is good for the human subject, because they know they’re innocent. And if it’s a replicant subject, this false comfort protects the blade runner from sudden murder. This text might flicker or change occasionally to something ambiguous like “at range,” to convey that it is responding to real world input, but it would never change to something incriminating.

This way, once the blade runner has the data to confirm that the subject is a replicant, they can continue to the end of the module as if everything was normal, thank the replicant for their time, and let them leave the room believing they passed the test. Then the results can be sent to the precinct and authorizations returned so retirement can be planned with the added benefit of the element of surprise.

OK

Look, I’m sad about this, too. The Voight-Kampff machine is cool. It fits very well within the art direction of the Blade Runner universe. This coolness burned the machine into my memory when I saw this film the first dozen times, but despite that, it just doesn’t stand up to inspection. It’s not hopeless, but it does need a lot of thinkwork and design to make it really fit to task, and convincing to us in the audience.

Carl’s Junior

In addition to its registers, OmniBro also makes fast-food vending machines. The one we see in the film is a free-standing kiosk with five main panels, one for each of the angry star’s severed arms. A nice touch that flies by in the edit is that the roof of the kiosk is a giant star, but one of the arms has broken and fallen onto a car. Its owners have clearly just abandoned it, and things have been like this long enough for the car to rust.

Idiocracy_omnibro09.png

A description

Each panel in the kiosk has:

  • A small screen and two speakers just above eye level
  • Two protruding, horizontal slots of unknown purpose
  • A metallic nozzle
  • A red laser barcode scanner
  • A 3×4 panel of icons (similar in style to what’s seen in the St. God’s interfaces) in the lower left. Sadly we don’t see these buttons in use.

But for the sake of completeness, the icons are, in western reading order:

  • No money, do not enter symbol, question
  • Taco, plus, fries
  • Burger, pizza, sundae
  • Asterisk, up-down, eye

The bottom has an illuminated dispenser port.

Idiocracy_omnibro20

In use

Joe approaches the kiosk and, hungry, watches to figure out how people get food. He hears a transaction in progress, with the kiosk telling the customer, “Enjoy your EXTRA BIG ASS FRIES.” She complains, saying, “You didn’t give me no fries. I got an empty box.”

She reaches into the take-out port and fishes inside to see if the fries just got stuck. The kiosk asks her, “Would you like another EXTRA BIG ASS FRIES?” She replies loudly into the speaker, “I said I didn’t get any.” The kiosk ignores her and continues, “Your account has been charged. Your balance is zero. Please come back when you can afford to make a purchase.” The screen shows her balance as a big dollar sign with a crossout circle over it.

Frustrated, she bangs the panel, and a warning screen pops up, reading, “WARNING: Carl’s Junior frowns upon vandalism.”

Idiocracy_omnibro27
She hits it again, saying, “Come on! My kids’re starving!” (Way to take it super dark, there, Judge.) Another screen reads, “Please step back.”

Idiocracy_omnibro28

A mist sprays from the panel into her face as the voice says, “This should help you calm down. Please come back when you can afford to make a purchase! Your kids are starving. Carl’s Junior believes no child should go hungry. You are an unfit mother. Your children will be placed in the custody of Carl’s Junior.”

She stumbles away, and the kiosk wraps up the whole interaction with the tagline, “Carl’s Junior: Fuck you. I’m eating!” (This treatment of brands, it should be noted, is why the film never got broad release. See the New York Times article, or, if you can’t get past the paywall, the Mental Floss listicle, number seven.)

Joe approaches the kiosk and sticks a hand up the port. The kiosk recognizes the newcomer and says, “Welcome to Carl’s Junior. Would you like to try our EXTRA BIG ASS TACO, now with more MOLECULES?” Then the cops arrive.


Critique

Now, I don’t think Judge is saying that automation is stupid. (There are few automated technologies in the film that work just fine.) I think he’s noting that poorly designed—and inhumanely designed—systems are stupid. It’s a reminder for all of us to consider the use cases where things go awry, and design for graceful degradation. (Noting the horrible pun so implied.) If we don’t, people can lose money. People can go hungry. The design matters.

Idiocracy_omnibro29
Spoiler alert: If you’re worried about the mom, the police arrive in the next beat and arrest Joe instead, so at least she’s not arrested.

I have questions

The interface inputs raise a lot of questions that are just unanswerable. Are there only four things on the menu? Why are they distributed amongst other categories of icons? Is “plus” the only customization? Does that mean another of the same thing I just ordered, or a larger size? What have I ordered already? How much is my current total? Do I have enough to pay for what I have ordered? There are all sorts of purchase-path best-practice standards being violated or unaddressed by the scene. Of course, it’s not a demo. A lot of sci-fi scenes involve technology breaking down.

Graceful degradation

Just to make sure I’m covering the bases, here, let me note what I hope is obvious. No automation system/narrow AI is perfect. Designers and product owners must presume that there will be times when the system fails—and the system itself does not know about it. The kiosk thinks it has delivered EXTRA BIG ASS FRIES, but it’s wrong. It’s delivered an empty box. It still charged her, so it’s robbed her.

We should always be testing, finding, and repairing these failure points in the things we help make. But we should also design an easy recourse for when the automation fails and doesn’t know it. This could be a human attendant, or even a button that connects to a remote human operator, who could check the video feed, see that the woman is telling the truth, mark that panel as broken, and use overrides to get her EXTRA BIG ASS FRIES from one of the functioning panels, or refund her money to, I guess, go get a tub of Flaturin instead? (The terrible nutrition of Idiocracy is yet another layer for some speculative sci-fi nutrition blog to critique.)

Idiocracy_omnibro25

Again, privacy. Again, respectfulness.

The financial circumstances of a customer are not the business of any other customer. The announcement and unmistakable graphic could be an embarrassment. Adding the disingenuous 🙁 emoji when it was the damned machine’s fault only adds insult to injury. We have to make sure not to get cute when users are faced with genuine problems.

Benefit of the doubt

Another layer of the stupid here is that OmniBro has the sensors to detect frustrated customers. (Maybe it’s a motion sensor in the panel or dispense port. Possibly emotion detectors in the voice input.) But what it does with that information is revolting. Instead of presuming that the machine has made some irritating mistake, it presumes a hostile customer, and not only gasses her into a stupor while it calls the cops, it is somehow granted the authority to take her children as indentured servants for the problems it helped cause. If you have a reasonable customer base, it’s better for the customer experience, for the brand, and for the society in which it operates to give customers the benefit of the doubt rather than the presumption of guilt.

Prevention > remedy

Another failure of the kiosk is that it discovers that she has no money only after it believes it has dispensed EXTRA BIG ASS FRIES. As we see elsewhere in the film, the OmniBro scanners work accurately at a huge distance even while the user is moving along at car speeds. It should be able to read customers in advance to know that they have no ability to pay for food. It should prevent problems rather than try (and, as it does here, fail) to remedy them. At the most self-serving level, this helps avoid the potential loss or theft of food.
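The prevention-first flow could look something like this minimal sketch. Everything here is invented for illustration—the `Account` class, the price, the messages—but it shows the ordering that matters: check ability to pay before charging or dispensing anything.

```python
# Hypothetical sketch: check a customer's balance *before* dispensing,
# so the kiosk never charges for food it can't guarantee was delivered.
# All names (Account, FRIES_PRICE) are invented for illustration.

FRIES_PRICE = 6.99  # invented price for EXTRA BIG ASS FRIES

class Account:
    def __init__(self, balance):
        self.balance = balance

def can_order(account, price):
    """Prevention: scan the account on approach, before taking the order."""
    return account.balance >= price

def place_order(account, price):
    if not can_order(account, price):
        # Fail early, privately, and with recourse -- not a public 🙁 and a gas jet.
        return "Sorry, insufficient funds. Here's where to find a free meal."
    account.balance -= price
    return "Dispensing."

# A broke customer is turned away before anything is dispensed or charged:
broke = Account(balance=0.00)
assert place_order(broke, FRIES_PRICE).startswith("Sorry")
assert broke.balance == 0.00
```

The point of the sketch is just the order of operations: the eligibility check happens before any state changes, so there is never a charge to refund.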

At a collective level, a humane society would still find some way to not let her starve. Maybe it could automatically deduct from a basic income. Maybe it could provide information on where a free meal is available. Maybe it could just give her the food and assign a caseworker to help her out. But the citizens of Idiocracy abide a system where, instead, children can be taken away from their mothers and turned into indentured servants because of a kiosk error. It’s one thing for the corporations and politicians to be idiots. It’s another for all the citizens to be complicit in that, too.


Fighting American Idiocracy

Since we’re on the topic of separating families: Since the fascist, racist “zero-tolerance” policy was enacted as a desperate attempt to do something in light of his failed and ridiculous border wall promise, around 3000 kids were horrifically and forcibly separated from their families. Most have been reunited, but as of August there were at least 500 children still detained, despite the efforts of many dedicated resisters. The 500 include, according to the WaPo article linked below, 22 kids under 5. I can’t imagine the permanent emotional trauma it would be for them to be ripped from their families. The Trump administration chose to pursue scapegoating to rile a desperate, racist base. The government had no reunification system. The Trump administration ignored Judge Sabraw’s court-ordered deadline to reunite these families. The GOP largely backed him on this. They are monsters. Vote them out. Early voting is open in many states. Do it now so you don’t miss your chance.


Snitch phone

If you’re reading these chronologically, let me note here that I had to skip Bea Arthur’s marvelous turn as Ackmena, as she tends the bar and rebuffs the amorous petitions of the lovelorn, hole-in-the-head Krelman, before singing her frustrated patrons out of the bar when a curfew is announced. To find the next interface of note, we have to fast-forward to when…

Han and Chewie arrive, only to find a Stormtrooper menacing Lumpy. Han knocks the blaster out of his hand, and when the Stormtrooper dives to retrieve it, he falls through the bannister of the tree house and to his death.

Why aren’t these in any way affiiiiixxxxxxeeeeeeed?

Han enters the home and wishes everyone a Happy Life Day. Then he bugs out.

But I still have to return for the insane closing number. Hold me.

Then Saun Dann returns to the home just before a general alert comes over the family Imperial Issue Media Console.


This is a General Alert. Calling Officer B4711. Officer B4711. We are unable to reach you on your comlink. Is there a problem. [sic] You are instructed to turn on your comlink immediately.

Dann tells the family he can handle it. He walks to the TV and pulls a card out of his wallet. He inserts it into the console, mashes a few buttons, and turns his attention to the screen. After a moment of op-art static, General Alert person appears. He says, “We have two-way communication, traitor Saun Dann. Is this a report about the missing trooper?”

Dann (like so many rebels) lies, saying the stormtrooper robbed the house and fled for the hills. GA says, “Very well, we’ll send out a search party.” Saun thanks him and the exchange is over. Saun hits a button, pulls his card out of the console, and returns it to his wallet.

Sadly I must bypass the plot questions about the body of the Stormtrooper still lying on the forest floor beneath them that will surely be found, or about GA eventually failing to find B4711 in the forest and returning to demand answers, or about why everyone is acting like welp, that’s fixed. For this blog is about interfaces.

Whether the card was meant as identification or payment, the interaction is pretty decent. Saun has no trouble fitting it in the slot, and apparently he has no trouble recalling the number to dial the Empire. The same guy in the message answers the call quickly. After the exchange, it’s quick to wrap up. Pull out card, and call is over. Seriously, that’s as short and simple as we could make it.

What was the card for?

If it was payment, we would expect some charges to appear during and after the fact, so let’s just presume it was an identification card for the Empire to track. Since the Empire is evil, they might hide or not provide feedback that the caller has been identified. So it’s not diegetically surprising to note that there’s none.

For all the interfaces that are utter crap in this show, this one actually passes muster. It tempts me to establish some sort of law—that the more mundane interfaces in a show will always be the more believable ones. I’ll think on that. It would need a name.


If I were to add any improvement, it would be to not burden the citizen’s memory with remembering the general alert or how to act on it. What if you’d just caught the end of it? Rather than burdening memory, the Empire could add a crawl to the feed that persistently repeats the call to action, including contact information. Persuasively, it would be an annoyance that would cause citizens watching TV to really want B4711 to hurry up and turn his damn comlink on, or for someone to rat him out.

There are probably some fascist tactics for incentivizing either the Stormtrooper or a snitch’s compliance, but I’m not a fascist, so let’s not go there.

Instead let’s rejoice that there is but one more interface to review, and we can stop with the Star Wars Holiday Special.

Video call

After ditching Chewie, Boba Fett heads to a public video phone to make a quick report to his boss who turns out to be…Darth Vader (this was a time long before the Expanded Universe/Legends, so there was really only one villain to choose from).

To make the call, he approaches an alcove off an alley. The alcove has a screen with an orange bezel, and a small panel below it with a 12-key number panel to the left, a speaker, and a vertical slot. Below that is a set of three phone books. For our young readers, phone books are an ancient technology in which telephone numbers were printed in massive books, and copies kept at every public phone for reference by a caller.


To make the call, Fett removes a card from his belt and inserts it. We see a close up of his face for about a second after this, during which time we cannot see if he is taking any further action, but he appears to be waiting and not moving. We hear a few random noises and see some random patterns until Darth Vader comes into view. Fett reports, “I have made contact with the Rebels, and all is proceeding according as you wish, Darth Vader.” We don’t see the interaction from Vader’s side.


Doorknob-simple workflow

A nice feature is that the workflow could barely be simpler. Once Fett inserts the card, the phone is activated, recipient specified, and payment taken care of. Fett has only to wait for Vader to pick up. To make this work, we have to presume that this is a special card, good only for calling Vader at no charge. It’s a nice interaction. Presuming the call is not, you know, top secret. Which, if it needs saying, it is.

The Force is not with this security

As this blog must routinely point out, the system seems to be missing multifactor authentication. The card counts as one factor, that is, something Fett possesses. There should be at least one more. A card can be stolen, so let’s instead focus on something he is and something he knows. Using just the equipment in the scene, the Empire could monitor all the video phones where it knows Fett to be. With face recognition or, more appropriately given his helmet, voice print, it could recognize him for one factor, and then ask him for a password. Two factors. No card. Even more simple and more secure.
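The two-factor scheme described above can be sketched in a few lines. Everything here is invented—the enrollment record, the voiceprint string, the password—and a real voiceprint match would be a probabilistic biometric model, not a string comparison. The shape of the logic is the point: both factors must pass, and neither is a physical token that can be pickpocketed.

```python
# Hedged sketch of two-factor authentication: something Fett *is*
# (a voiceprint) plus something he *knows* (a password). All data invented.

ENROLLED = {
    "boba_fett": {"voiceprint": "vp-7741", "password": "sarlacc"},
}

def authenticate(claimed_id, voiceprint, password):
    record = ENROLLED.get(claimed_id)
    if record is None:
        return False
    # Factor 1: something he is. Factor 2: something he knows.
    return record["voiceprint"] == voiceprint and record["password"] == password

assert authenticate("boba_fett", "vp-7741", "sarlacc")       # both factors pass
assert not authenticate("boba_fett", "vp-7741", "wrong")     # one factor fails
assert not authenticate("pickpocket", "vp-0000", "sarlacc")  # unknown caller
```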

But the security problems go beyond the authentication problems that might have some unfortunate pickpocket face to face with the galaxy’s most impulsive Force-choker. During Fett’s call, back on the Falcon, R2D2 is casually trying to find Chewbacca and Fett on the viewscreen and he happens—literally happens—across the transmission between Fett and Vader, with Vader saying, “Good work, but I want them alive. Now that you’ve got their trust, they may take you to their new base.” Fett replies, “This time we’ll get them all.” Vader ends the call saying, “I see why they call you the best bounty hunter in the galaxy.”

Note that the call is public. R2 doesn’t suspect Imperial malfeasance at this point. He’s just checking public video feeds to see if he can find out where Chewie is.

Note also that there isn’t a lick of encryption.

Note finally that the feed we see isn’t even just a transmission signal. If it were, we’d see the call from one side or the other, in which we’d see either Fett or Vader. But in the clip we see the video switch between them to focus on the active speaker, so either R2 is doing some sweet just-in-time editing, or the signal is actually formatted especially for some third party to eavesdrop on.

So 👏 why 👏 the 👏 eff 👏 are top secret Imperial transmissions being made on insecure party lines? Heads up, Star Wars fans. We didn’t really need Rogue One. The Rebellion could have come across the plans to the Death Star just channel-flipping from the comfort of some nearby couch.

R. S. Revenge Comms

Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.


On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)


He attends a monitor, below which are several dials and controls in a panel. On the right of the monitor screen there are five physical controls.

  • A stay-state toggle switch
  • A stay-state rocker switch
  • Three dials

The lower two dials have rings under them on the panel that accentuate their color.

Map View

The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, but written in a constructed orthography unmentioned in the Wookieepedia. In the upper center and upper right are unchanging labels. Some triangular label sits in the lower-left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of it.


Luke, looking over the shoulder of the comms officer at the same monitor, exclaims, “It’s the Millennium Falcon!”

Seriously, Luke, how can you tell this?

Watching the glowing dot and crosshairs blink and change position several times, the comms officer says, “They’re coming out of light speed. I can’t make contact.” An off-screen voice tells him to “Try a lower channel.” Something causes the channel to change (the comms officer’s hands do not touch anything that we can see), and then the monitor shows a video feed from the Falcon.

Video Feed

The video feed has an overlay to the upper left hand side, consisting of lines of text which appear from top to bottom in a palimpsest formation, even though the copy is left-aligned. At the top is a label with changing characters, looking something like a time stamp.


Analysis of the Map View

Since we can’t read the video overlay in the video feed, and it doesn’t interfere with the image, there’s not much to say about it. Instead I’ll focus on the map view.

Hand-drawn Inconsistency

In the side angle shot, which we see first, we see the dial colors go from top to bottom as beige, red, yellow. In the facing shot of this interface, which immediately follows the side shot, the dials go beige, yellow, red. The red and yellow are transposed. It’s of course possible that the dials have a variable hue, and changed at exactly the same time the camera switches. But then we have to explain where his hand went, and why we don’t see any of the other elements changing color, and so on…

This illustrates one of the problems with reviewing hand-drawn animation (and why scifiinterfaces generally frowns upon it). It takes a hand-drawing animator extra work to keep things consistent from screen to screen. She must have a reference when drawing the interface from any new angle, and this extra work is on top of all the other things she has to manage, like color and timing. Fewer people will notice transposed dial colors than, say, the comms officer turning orange instead of purple, so the interface is low on that priority stack.

Contrast that with live-action and computer-animated interfaces. In these modes of working, it takes extra work to change interfaces from shot to shot, so you run into consistency problems much less frequently.

I’ve written about this before in the abstract, but it’s nice to have a simple and easily shown example in the blog to point to.

2Dness

Another problem with the interface is that it is 2-D, but space is 3-D.

When picking a projection to display, we have to keep in mind that it is more immediate to understand an impending collision when presented as 2-D information: Constant bearing, decreasing range = Trouble. So, perhaps the view has automatically aligned itself to be perpendicular to the Falcon’s approach, which makes it easier to monitor the decreasing distance.
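The constant-bearing, decreasing-range rule can be sketched directly in 2-D. The positions below are invented sample data, and the bearing tolerance is an arbitrary parameter; a real implementation would also handle the angle wrap-around near ±π.

```python
import math

# Minimal sketch of "constant bearing, decreasing range = trouble" in 2-D.
# All positions and the tolerance are invented for illustration.

def bearing_and_range(observer, target):
    dx, dy = target[0] - observer[0], target[1] - observer[1]
    return math.atan2(dy, dx), math.hypot(dx, dy)

def collision_course(obs_then, tgt_then, obs_now, tgt_now, tol=0.05):
    b1, r1 = bearing_and_range(obs_then, tgt_then)
    b2, r2 = bearing_and_range(obs_now, tgt_now)
    # Same bearing (within tolerance) and closing range: trouble.
    return abs(b2 - b1) < tol and r2 < r1

# A stationary base at the origin with the Falcon flying straight at it:
assert collision_course((0, 0), (10, 10), (0, 0), (8, 8))
# A ship passing by (its bearing swings as it crosses) is not flagged:
assert not collision_course((0, 0), (10, 1), (0, 0), (5, 1))
```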

If so, he would need to see that automatically-aligned status reflected somewhere in the interface, and have access to controls that let him change the view and snap back to this Most Useful View. Admittedly, this is a lot of apologetics to apply, when really, it’s most likely the old trope 2-D Space.

Attention and memory


There are some nicely designed attention cues. The crosshairs, glowing dot, and motion graphics make it so that—even though we can’t read the language—we can tell what’s of interest on the screen. One dot moving towards another, stationary dot. We’re set up for the Falcon’s buzzing the base.

That’s probably the best thing that can be said for it.

The text is terrible, changing too fast for a human reader. (Yes, yes, put down that emerging comment. Purple-face isn’t human, but we must evaluate interfaces considering what is useful to us, and right now that means us humans.) The text changes so much faster than the blinking, in fact, that it pulls attention away from the crosshair. Narratively, the rapid-fire text helps convey a sense of urgency, but it greatly costs readability. It’s not a good model for real world design.

The blinking crosshair might most accurately reflect the actual position of the detected object within the radar sweep. But it could help the officer more. As with medical signals, data points are not as interesting as information trends. As it is, it relies on his memory to piece together the information, which means he has to constantly monitor the screen to make sense of it. If instead the view featured an evaporating trail of data points, not only could he look away without missing too much information, but he would also notice that the speed and direction are slightly erratic, which would prove quite interesting to anyone trying to ascertain the status of the ship. One glance shows things are not as they should be. The Falcon is clearly careening.
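The evaporating trail is simple to model: keep only the last few fixes and render them with decaying opacity, newest fully opaque. The sample fixes and the trail length below are invented for illustration.

```python
from collections import deque

# Sketch of an "evaporating trail": the last few radar fixes are kept, with
# older fixes fading out, so one glance shows the trend, not just a point.

class Trail:
    def __init__(self, length=5):
        self.points = deque(maxlen=length)  # oldest fixes fall off the end

    def add(self, point):
        self.points.append(point)

    def render(self):
        """Return (point, opacity) pairs; the newest fix is fully opaque."""
        n = len(self.points)
        return [(p, (i + 1) / n) for i, p in enumerate(self.points)]

trail = Trail(length=3)
for fix in [(0, 0), (1, 2), (2, 3), (3, 5)]:
    trail.add(fix)

rendered = trail.render()
assert [p for p, _ in rendered] == [(1, 2), (2, 3), (3, 5)]  # oldest evaporated
assert rendered[-1][1] == 1.0  # newest fix at full opacity
```

With the trail on screen, an erratic track (fixes that zigzag rather than march in a line) is visible at a glance, no memory required.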

Actual points from the animation.

Mysterious Control

When we first see the comms officer, he has his unmoving hand on one of the dials. But when we see the map switch to the video feed, none of the controls we can see are touched. This raises a possibility and a question.

The possibility is that there is control by some other mechanism. My best guess is that it is voice control, since the Rebel General says “try a lower channel” just before it switches. Maybe he was not speaking to the comms officer, but to the machine itself. And given C3PO, they clearly have the technology to recognize and act on natural language, though it’s usually associated with a full general artificial intelligence. A Rebel Siri (33 years before it debuted in Apple’s iOS) makes sense from an apologetics standpoint.

If so, there are some aspects of the UI missing to signal to an operator that the machine is listening, and hearing, and understanding what is being said, as well as whether the speaker is authorized to control. After all, the comms officer is wearing the headset, but it was the red-bearded general who issued the command. I imagine it’s not OK for anyone on the bridge to just shout out controls.

Just General Burnside, here.

The question, then, is: if the channel is controlled by voice, what are the physical controls for? They’re lacking labels of any kind. Perhaps they’re there as a backup, should voice control fail. Perhaps they are vestigial, left over from before voice control was installed. Maybe only the general has a voice override and the comms officer must use the physical controls. Any of these would be fine backworlding explanations, but my favorite idea is that the dials are for controlling nuanced variables in very fluid ways with instant feedback.

It’s easier to twiddle a dial to change the frequency of a radio to find a low-power signal than to keep saying “back…forward…no, back just a bit.” That would help explain what the comms officer was doing with his hands on the dials when he got something but not when the general voice-controls the channel.

In general

The interface shows some sophistication in styling and visual hierarchy, and if we give it lots of benefit of the doubt, might even be handling some presentation variables for the user in sophisticated ways. But the distractions of the rapid-fire text, the lack of trend lines, the lack of labels for the physical controls, and the missing affordances for projection control and voice control feedback make it a poor model for any real world design. 

Viper Launch Control


The Galactica’s fighter launch catapults are each controlled by a ‘shooter’ in an armored viewing pane.  There is one ‘shooter’ for every two catapults.  To launch a Viper, he has a board with a series of large twist-handles, a status display, and a single button.  We can also see several communication devices:

  • Ear-mounted mic and speaker
  • Board mounted mic
  • Phone system in the background

These could relate to one of several lines of communication each:

  • The Viper pilot
  • Any crew inside the launch pod
  • Crew just outside the launch pod
  • CIC (for strategic status updates)
  • Other launch controllers at other stations
  • Engineering teams
  • ‘On call’ rooms for replacement operators


Each row on the launch display appears to conform to some value coming off of the Viper or the Galactica’s magnetic catapults.  The ‘shooter’ calls off Starbuck’s launch three times due to some value he sees on his status board (fluctuating engine power right before launch).

We do not see any other data inputs.  Something like a series of cameras on a closed circuit could show him an exterior view of the entire Viper, providing additional information to the sensors.

When Starbuck is ready to launch on the fourth try, the ‘shooter’ twists the central knob and, at the same time and with the same hand, pushes down a green button.  The moment the ‘shooter’ hits the button, Starbuck’s Viper is launched into space.


There are other twist knobs across the entire board, but these do not appear to conform directly to the act of launching the Viper, and they do not act like the central knob.  They appear instead to be switches, where turning them from one position to another locks them in place.

There is no obvious explanation for the number of twist knobs, but each one might conform to an electrical channel to the catapult, or some part of the earlier launch sequence.

Manual Everything

Nothing in the launch control interprets anything for the ‘shooter’.  He is given information, then expected to interpret it himself.  From what we see, this information is basic enough to not cause a problem and allow him to quickly make a decision.

Without networking the launch system together so that it can poll its own information and make its own decisions, there is little that can improve the status indicators. (And networking is made impossible in this show because of Cylon hackers.) The board is easily visible from the shooter chair, each row conforms directly to information coming in from the Viper, and they relate directly to the task at hand.

The most dangerous task the shooter does is actually decide to launch the Viper into space.  If either the Galactica or the Viper isn’t ready for that action, it could cause major damage to the Viper and the launch systems.

A two-step control for this is the best method, and the system now requires two distinct motions (a twist-and-hold, then a separate and distinct *click*).  This is effective at confirming that the shooter actually wants to send the Viper into space.

To improve this control, the twist and button could be moved far enough apart (reference, under “Two-Hand Controls”) that it requires two hands to operate the control.  That way, there is no doubt that the shooter intends to activate the catapult.
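The logic of a two-hand control can be sketched as a timing check: both inputs must be engaged within a short window of each other and still held at fire time. The 0.5-second window and all names here are invented for illustration.

```python
# Hypothetical sketch of a two-hand launch control: the catapult fires only
# if the twist and the button were engaged near-simultaneously and are both
# still held. Times are in seconds; the window is an invented parameter.

WINDOW = 0.5  # maximum gap between engaging the two controls

def catapult_fires(twist_time, button_time, twist_held, button_held):
    """Fire only on deliberate, two-handed, simultaneous actuation."""
    simultaneous = abs(twist_time - button_time) <= WINDOW
    return simultaneous and twist_held and button_held

assert catapult_fires(10.0, 10.2, True, True)        # both hands, together
assert not catapult_fires(10.0, 14.0, True, True)    # one hand, then a reach
assert not catapult_fires(10.0, 10.1, True, False)   # button released early
```

The simultaneity requirement is what makes physical separation meaningful: one hand can’t engage both controls within the window if they’re an arm-span apart.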

If the controls are separated like that, it would take some amount of effort to make sure the two controls are visually connected across the board, either through color, or size, or layout.  Right now, that would be complicated by the similarity in the final twist control, and the other handles that do different jobs.

Changing these controls to large switches or differently shaped handles would make the catapult controls less confusing to use.

 

The Galactica Phone Network


The phone system aboard the Galactica is a hardwired system that can be used in two modes: Point-to-point, and one-to-many.  The phones have an integrated handset wired to a control box and speaker.  The buttons on the control box are physical keys, and there are no automatic voice controls.

In Point-to-point mode, the phones act as a typical communication system, where one station can call a single other station.  In the one-to-many mode the phones are used as a public address system, where a single station can broadcast to the entire ship.
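The two modes reduce to a simple routing distinction, sketched below. Station names and the data structure are invented for illustration.

```python
# Minimal sketch of the two phone modes: point-to-point delivers a message to
# one station; one-to-many broadcasts to every station. All names invented.

class PhoneNetwork:
    def __init__(self, stations):
        self.inboxes = {name: [] for name in stations}

    def call(self, source, dest, message):
        """Point-to-point: one station to a single other station."""
        self.inboxes[dest].append((source, message))

    def broadcast(self, source, message):
        """One-to-many: public address to every other station."""
        for name in self.inboxes:
            if name != source:
                self.inboxes[name].append((source, message))

net = PhoneNetwork(["bridge", "hangar", "engine_room"])
net.call("bridge", "hangar", "Launch Vipers.")
net.broadcast("bridge", "Action stations!")

assert net.inboxes["hangar"] == [("bridge", "Launch Vipers."),
                                 ("bridge", "Action stations!")]
assert net.inboxes["engine_room"] == [("bridge", "Action stations!")]
```

Note what’s absent, matching the show: no per-person addressing. Messages go to places, not people.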


The phones are also shown acting as broadcast speakers.  These speakers are able to take in many different formats of audio, and are shown broadcasting various different feeds:

  • Ship-wide Alerts (“Action Stations!”)
  • Local alarms (Damage control/Fire inside a specific bulkhead)
  • Radio Streams (pilot audio inside the launch prep area)
  • Addresses (calling a person to the closest available phone)


Each station is independent and generic.  Most phones are located in public spaces or large rooms, with only a few in private areas.  These private phones serve the senior staff in their private quarters, or at their stations on the bridge.


In each case, the phone stations are used as kiosks, where any crewmember can use any phone.  It is implied that there is a communications officer acting as a central operator for when a crewmember doesn’t know the appropriate phone number, or doesn’t know the current location of the person they want to reach.

Utterly Basic

There is not a single advanced piece of technology inside the phone system.  The phones act as a dirt-simple way to communicate with a place, not a person (the person just happens to be there while you’re talking).


The largest disadvantage of this system is that it provides no assistance for its users: busy crewmembers of an active warship.  These crew can be expected to need to communicate in the heat of battle, and quickly relay orders or information to a necessary party.

This is easy for the lower levels of crewmembers: information will always flow up to the bridge or a secondary command center.  For the officers, this task becomes more difficult.

First, there are several crewmember classes that could be anywhere on the ship:

  • Security
  • Damage Control
  • Couriers
  • Other officers

Without broadcasting to the entire ship, it could be extremely difficult to locate these specific crewmembers in the middle of a battle for information updates or new orders.

Unconventional Enemy

The primary purpose of the Galactica was to fight the Cylons: sentient robots capable of infiltrating networked computers.  This meant that every system on the Galactica was made as basic as possible, without regard to its usability.

The Galactica’s antiquated phone system does prevent Cylon infiltration of a communications network aboard an active warship.  Nothing the phone system does requires executing outside pieces of software.

A very basic upgrade to the phone system that could provide better usability would be a near-field tag system for each crew member.  A passive near-field chip could be read by a non-networked phone terminal each time a crew member approached near the phone.  The phone could then send a basic update to a central board at the Communications Center informing the operators of where each crewmember is. Such a system would not provide an attack surface (a weakness for them to infiltrate) for the enemy, and make finding officers and crew in an emergency situation both easier and faster: major advantages for a warship.
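The tag system described above is one-way by design: terminals only send updates outward, and accept no inbound commands, which is what keeps the attack surface at zero. A minimal sketch, with all names invented:

```python
# Hypothetical sketch of the near-field crew-location idea: each terminal
# reads a passive chip and pushes a one-way update to a central board.

class CentralBoard:
    def __init__(self):
        self.last_seen = {}  # crew id -> terminal location

    def update(self, crew_id, terminal_location):
        self.last_seen[crew_id] = terminal_location

    def locate(self, crew_id):
        return self.last_seen.get(crew_id, "unknown")

class PhoneTerminal:
    """A terminal only *sends* updates; it accepts no inbound commands,
    so it offers no attack surface for a networked infiltrator."""
    def __init__(self, location, board):
        self.location = location
        self.board = board

    def read_tag(self, crew_id):
        self.board.update(crew_id, self.location)

board = CentralBoard()
hangar_phone = PhoneTerminal("hangar deck", board)
hangar_phone.read_tag("chief_tyrol")

assert board.locate("chief_tyrol") == "hangar deck"
assert board.locate("starbuck") == "unknown"
```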

The near field sensors would add a second benefit, in that only registered crew could access specific terminals.  As an example, the Captain and senior staff would be the only ones allowed to use the central phone system.

Brutally efficient hardware


The phone system succeeds in its hardware.  Each terminal has an obvious speaker that makes a distinct sound each time the terminal is looking for a crewmember.  When the handset is in use, it is easy to tell which side is up after a very short amount of training (the cable always comes out the bottom).  

It is also obvious when the handset is active or inactive.  When a crewmember pulls the handset out of its terminal, the hardware makes a distinctive audible and physical *click* as the switch opens a channel.  The handset also slots firmly back into the terminal, making another *click* when the switch deactivates.  This is very similar to a modern-day gas pump.

With a brief amount of training, it is almost impossible to mistake when the handset activates and deactivates.

Quick Wins

For a ship built in the heat of war at a rapid pace, the designers focused on what they could design quickly and efficiently.  There is little in the way of creature comforts in the Phone interface.

Minor additions in technology or integrated functionality could have significantly improved the interface of the phone system, and may have been integrated into future ships of the Galactica’s line.  Unfortunately, we never see if the military designers of the Galactica learned from their haste.

Chef Gormaand

Hello, readers. Hope your Life Days went well. The blog is kicking off 2016 by continuing to take the Star Wars universe down another peg, here, at this heady time of its revival. Yes, yes, I’ll get back to The Avengers soon. But for now, someone’s in the kitchen with Malla.


After she loses 03:37 of  her life calmly eavesviewing a transaction at a local variety shop, she sets her sights on dinner. She walks to the kitchen and rifles through some translucent cards on the counter. She holds a few up to the light to read something on them, doesn’t like what she sees, and picks up another one. Finding something she likes, she inserts the card into a large flat panel display on the kitchen counter. (Don’t get too excited about this being too prescient. WP tells me models existed back in the 1950s.)

In response, a prerecorded video comes up on the screen from a cooking show, in which the quirky and four-armed Chef Gormaand shows how to prepare the succulent “Bantha Surprise.”


And that’s it for the interaction. None of the four dials on the base of the screen are touched throughout the five minutes of the cooking show. It’s quite nice that she didn’t have to press play at all, but that’s a minor note.

The main thing to talk about is how nice the physical tokens are as a means of finding a recipe. We don’t know exactly what’s printed on them, but we can tell it’s enough for her to pick through, consider, and make a decision. This is nice for the very physical environment of the kitchen.

This sort of tangible user interface, card-as-media-command hasn’t seen a lot of play in the scifiinterfaces survey, and the only other example that comes to mind is from Aliens, when Ripley uses Carter Burke’s calling card to instantly call him AND I JUST CONNECTED ALIENS TO THE STAR WARS HOLIDAY SPECIAL.

Of course an augmented reality kitchen might have done even more for her, like…

  • Cross-referencing ingredients on hand (say it with me: slab of tender Bantha loin) with food preferences, family and general ratings, budget, recent meals to avoid repeats, health concerns, and time constraints to populate the tangible cards with choices that fit the needs of the moment, saving her from even having to consider recipes that won’t work;
  • Making the material of the cards opaque so she can read them without holding them up to a light source;
  • Augmenting the surfaces with instructional graphics (or even the air around her with volumetric projections) to show her how to do things in situ rather than having to keep an eye on an arbitrary point in her kitchen;
  • Slowing down when it was clear Malla wasn’t keeping up, or automatically translating from a four-armed to a two-armed description;
  • Showing a visual representation of the whole process and the current point within it;

…but then Harvey wouldn’t have had his moment. And for your commitment to the bit, Harvey, we thank you.
