Frito’s F’n Car interface

When Frito is driving Joe and Rita away from the cops, Joe gestures with his hand above the car window, and a vending machine they happen to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading “PULL OVER.”

IDIOCRACY-fncar

The car interface has a column of buttons down the left reading:

  • NAV
  • WTF?
  • BEER
  • FART FAN
  • HOME
  • GIRLS

At the bottom is a square of icons: car, radiation, person, and a fourth that is obscured by something in the foreground. Across the bottom is Frito’s car ID, “FRITO’S F’N CAR,” which appears to be a label for a system status of “EVERYTHING’S A-OK, BRO,” a button labeled CHECK INGN [sic], another labeled LOUDER, and a big green circle reading GO.

idiocracy-pullover

But the car doesn’t wait for him to pull over. With some tiny beeps it slows to a stop by itself. Frito says, “It turned off my battery!” Moments after they flee the car, it is converged upon by a ring of police officers with weapons loaded (including a rocket launcher pointed backward).

Visual Design

Praise where it’s due: Zooming is the strongest visual attention-getting signal there is (symmetrical expansion is detected on the retina within 80 milliseconds!) and while I can’t find the source from which I learned it, I recall that blinking is somewhere in the top 5. Combining these with an audio signal makes this critical alert hard to miss. So that’s good.

comingrightatus.png
In English: It’s comin’ right at us!

But then. Ugh. The fonts. The buttons on the chrome seem to be set in some free Blade Runner knock-off font, and the text reading “PULL OVER” is in some headachey clipped-corner freeware font that neither contrasts with nor complements the Blade Jogger font, or whatever it is. I can’t quite hold the system responsible for the font of the IPPA license, but I just threw up a little into my Flaturin because of that rounded-top R.

bladerunner

Then there are the bad-’90s skeuomorphic, Bevel & Emboss buttons, which might be defended for making the interactive parts apparent, except that this same button treatment is given to the label Frito’s F’n Car, which has no obvious reason ever to be pressed. It’s also used on the CHECK INGN and LOUDER buttons, taking their ADA-insulting contrast ratios and absolutely wrecking any readability.
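For readers who want to put a number on “ADA-insulting,” here’s a minimal sketch of the WCAG contrast-ratio math that accessibility guidelines use. The hex values are hypothetical stand-ins for embossed gray-on-gray button text, not colors sampled from the frame.

```typescript
// Minimal sketch: WCAG 2.x contrast ratio between two colors.
// The hex values below are invented stand-ins, not sampled from the film.

function channelToLinear(c8: number): number {
  // Convert an 8-bit sRGB channel to linear light, per WCAG 2.x.
  const c = c8 / 255;
  return c <= 0.03928 ? c / 12.92 : Math.pow((c + 0.055) / 1.055, 2.4);
}

function relativeLuminance(hex: string): number {
  const [r, g, b] = [1, 3, 5].map((i) => parseInt(hex.slice(i, i + 2), 16));
  return (
    0.2126 * channelToLinear(r) +
    0.7152 * channelToLinear(g) +
    0.0722 * channelToLinear(b)
  );
}

function contrastRatio(fg: string, bg: string): number {
  const [lighter, darker] = [relativeLuminance(fg), relativeLuminance(bg)].sort(
    (a, b) => b - a
  );
  return (lighter + 0.05) / (darker + 0.05);
}

// Dark gray text on a mid-gray bevel:
console.log(contrastRatio("#4a4a4a", "#7d7d7d").toFixed(2)); // ≈ 2.15
```

Anything below 4.5:1 fails the WCAG AA threshold for normal-size text, so a ratio in the 2s is, indeed, wrecked readability.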

I try not to second-guess designers’ intentions, but I’m pretty sure this is all deliberate. Part of the illustration of a world without much sense. Certainly no design sense.

In-Car Features

What about those features? NAV is a pretty standard function, and having a HOME button is a useful shortcut. Current versions of Google Maps have an Explore Places Near You function, which lists basic interests like Restaurants, Bars, and Events, and has a More menu with a big list of interests and services. It’s not a stretch to imagine that Frito has pressed GIRLS and BEER enough that they’ve floated to the top nav.

explore_places_near_you

That leaves only three “novel” buttons to think about: WTF, LOUDER, and FART FAN. 

WTF?

If I had to guess, the WTF button is an all-purpose help button. Like a GM OnStar, but less well branded. Frito can press it and get connected to…well, I guess some idiot to see if they can help him with something. Not bad to have, though this probably should be higher in the visual hierarchy.

LOUDER

This bit of interface comedy is hilarious because, well, there’s no volume-down affordance on the interface. Think of the “If it’s too loud, you’re too old” kind of idiocy. Of course, it could be that the media is on zero volume, and so it couldn’t be turned down any more, so the LOUDER button filled up the whole space, but…

  • The smarter convention is to leave the button in place and signal a disabled state (see the sketch after this list), and
  • Given everything else about the interface, that’s giving the diegetic designer a WHOLE lot of credit. (And our real-world designer a pat on the back for subtle hilarity.)
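For what it’s worth, here’s a minimal sketch of that “keep the control, disable it” convention in plain TypeScript against the DOM. The element IDs and the volume range are my own invention; nothing here comes from the film or any real head-unit SDK.

```typescript
// Minimal sketch of the "keep the control, disable it" convention.
// Element IDs ("volume-up", "volume-down") are hypothetical.

const MIN_VOLUME = 0;
const MAX_VOLUME = 10;
let volume = 0;

function renderVolumeControls(): void {
  const up = document.getElementById("volume-up") as HTMLButtonElement;
  const down = document.getElementById("volume-down") as HTMLButtonElement;

  // Both buttons stay on screen; they just gray out at the limits,
  // rather than vanishing and letting LOUDER swallow the whole space.
  up.disabled = volume >= MAX_VOLUME;
  down.disabled = volume <= MIN_VOLUME;
}

function changeVolume(delta: number): void {
  volume = Math.min(MAX_VOLUME, Math.max(MIN_VOLUME, volume + delta));
  renderVolumeControls();
}
```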

FART FAN

This button is a little potty humor, and probably got a few snickers from anyone who caught it because amygdala, but I’m going to boldly say this is the most novel, least dumb thing about Frito’s F’n Car interface.

Heart_Jenkins_960.jpg
Pictured: A sulfuric gas nebula. Love you, NASA!

People fart. It stinks. Unless you have activated-charcoal filters under the fabric, you can be in for an unpleasant scramble to reclaim breathable air. The good news is that getting the airflow right to clear the car of the smell has indeed been studied, if not by science, at least scientifically. The bad news is that it’s not a simple answer.

  • Your car’s built in extractor won’t be enough, so just cranking the A/C won’t cut it.
  • Rolling down windows in a moving aerodynamic car may not do the trick due to something called the boundary layer of air that “clings” to the surface of the car.
  • Rolling down windows in a less-aerodynamic car can set off the Helmholtz effect (the wub-wub-wub air pressure), which makes this a risky tactic.
  • Opening a sunroof (if you have one) might be good, but pulls the stench up right past noses, so not ideal either.

The best strategy—according to that article and conversation amongst my less squeamish friends—is to crank the AC, then open the driver’s window a couple of inches, and then the rear passenger window halfway.

But this generic strategy changes with each car, the weather (seriously, temperature matters, and you wouldn’t want to do this in heavy precipitation), and the skankness of the fart. This is all a LOT to manage when your eyes are meant to be on the road and you’re in a nauseated panic. Having the cabin air refresh at the touch of one button is good for road safety.
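The button press itself wouldn’t need much logic. Here’s a rough sketch of what a one-press routine might look like, assuming the generic strategy above; the VehicleClimate interface and every method on it are invented for illustration, since no production car exposes anything like this.

```typescript
// Hypothetical sketch of a one-press cabin-refresh routine.
// The VehicleClimate interface and its methods are invented for illustration.

interface VehicleClimate {
  setFanSpeed(level: number): void; // 0–10
  setRecirculation(on: boolean): void;
  setWindow(position: "driverFront" | "rearPassenger", openFraction: number): void;
  isPrecipitating(): boolean;
  speedKph(): number;
}

async function refreshCabin(car: VehicleClimate, durationMs = 30_000): Promise<void> {
  // Don't open windows into heavy rain or at very high speeds; just max the fan.
  const windowsOk = !car.isPrecipitating() && car.speedKph() < 120;

  car.setRecirculation(false); // pull in outside air
  car.setFanSpeed(10);         // "crank the AC"

  if (windowsOk) {
    car.setWindow("driverFront", 0.1);   // a couple of inches
    car.setWindow("rearPassenger", 0.5); // halfway
  }

  await new Promise((resolve) => setTimeout(resolve, durationMs));

  // Restore a closed, quiet cabin.
  if (windowsOk) {
    car.setWindow("driverFront", 0);
    car.setWindow("rearPassenger", 0);
  }
  car.setFanSpeed(3);
}
```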

If it’s so smart, then, why don’t we have Fart Fan panic buttons in our cars today?

I suspect car manufacturers don’t want the brand associations of having a button labeled FART FAN on their dashboards. But, IMHO, this sounds like a naming problem, not some intractable engineering problem. How about something obviously overpolite, like “Fast freshen”? I’m no longer in the travel and transportation business, but if you know someone at one of these companies, do the polite thing and share this with them.

Idiocracy-car
Another way to deal with the problem, in the meantime.

So aside from the interface considerations, there are also some strategic ones to discuss with the remote kill switch, but that deserves its own post, next.

Cyberspace: Bulletin Board

Johnny finds he needs a favor from a friend in cyberspace. We see Johnny type something on his virtual keyboard, then select from a pull-down menu.

JM-35-copyshop-Z-animated

A quick break in the action: In this shot we are looking at the real world, not the virtual, and I want to mention how clear and well-defined all the physical actions by actor Keanu Reeves are. I very much doubt that the headset he is wearing actually worked, so he is doing this without being able to see anything.

Will regular users of virtual reality systems be this precise with their gestures? Datagloves have always been expensive and rare, making studies difficult. But several systems offer submillimeter gestural tracking nowadays: version 2 of Microsoft Kinect, Google’s Soli, and Leap Motion are a few, and they are much cheaper and less fragile than a dataglove. Using any of these for regular desktop application tasks rather than games would be an interesting experiment.
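If someone wanted to run that experiment, the core of it is small. Here’s a sketch of pinch-to-click detection over generic hand-tracking frames; the HandFrame shape and the millimeter thresholds are my assumptions, not the actual Leap Motion, Kinect, or Soli APIs.

```typescript
// Sketch of pinch-to-click detection over generic hand-tracking data.
// HandFrame is an invented stand-in; real SDKs each have their own frame formats.

interface Vec3 { x: number; y: number; z: number; }

interface HandFrame {
  thumbTip: Vec3;
  indexTip: Vec3;
}

const PINCH_START_MM = 25; // fingertips closer than this: pinch begins
const PINCH_END_MM = 40;   // hysteresis: must separate this far to end

function distanceMm(a: Vec3, b: Vec3): number {
  return Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);
}

function makePinchDetector(onClick: () => void) {
  let pinching = false;
  return (frame: HandFrame): void => {
    const d = distanceMm(frame.thumbTip, frame.indexTip);
    if (!pinching && d < PINCH_START_MM) {
      pinching = true;
      onClick(); // treat the pinch like a click in a desktop task
    } else if (pinching && d > PINCH_END_MM) {
      pinching = false;
    }
  };
}
```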

Back in the film, Johnny flies through cyberspace until he finds the bulletin board of his friend. It is an unfriendly glowing shape that Johnny tries to expand or unfold without success.

JM-36-bboard-A-animated

Cyberspace: Newark Copyshop

The transition from Beijing to the Newark copyshop is more involved. After he travels around a bit, he realizes he needs to be looking back in Newark. He “rewinds” using a pull gesture and sees the copyshop’s pyramid. First there is a predominantly blue window that unfolds as if it were paper.

jm-35-copyshop-a-animated

And then the copyshop’s initial window expands. Like the Beijing hotel, this is a floor plan view, but unlike the hotel it stays two-dimensional. It appears that cyberspace works like the current World Wide Web, with individual servers for each location that can choose what appearance to present to visitors.

Johnny again selects data records, but not with a voice command. The first transition is a window that not only expands but spins as it does so, and makes a strange jump at the end from the center to the upper left.

jm-35-copyshop-c-animated

Once again Johnny uses the two-handed expansion gesture to see the table view of the records.

Cyberspace: Beijing Hotel

After selecting its location from a map, Johnny is now in front of the virtual entrance to the hotel. The virtual Beijing has a new color scheme, mostly orange with some red.

jm-33-hotel-a

The “entrance” is another tetrahedral shape made from geometric blocks. It is actually another numeric keypad. Johnny taps the blocks to enter a sequence of numbers.

The tetrahedral keypad

jm-33-hotel-b

Note that there can be more than one digit within a block. I mentioned earlier that it can be difficult to “press” with precision in virtual reality due to the lack of tactile feedback. Looking closely, here the fingers of Johnny’s “hands” cast a shadow on the pyramid, making depth perception easier.

Perpvision

GitS-heatvision-01

The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show only spends time showing inputs without describing some output when the users are in the background and unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way because the input is such a giant piece of the puzzle. Is it a brain input? Is the technology agentive? Is it some hidden input like Myo‘s muscle sensing? Without knowing the input, a regular review is kind of pointless. All I can do is list its effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.

She adjusts the view by steps to zoom closer and closer until her field of vision is filled with the two men whose conversation she hears in her earpiece. When she hears mention of Project 2501 she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she only sees these people, and not everyone in the building below this? How does she tell the algorithm that she wants to see furniture and not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there?” How did she set the audio? Why does the audio not change with each successive zoom? If they’re from separate systems, how did she combine them?

Squint gestures

If I had to speculate about what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision less some border, and then use squint gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting. Another, the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
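To make that speculation concrete, here’s a rough sketch of the zoom logic, assuming an eyelid-aperture signal from some tracker and a stand-in for the “next logical thing” lookup; none of this is anything Ghost in the Shell actually shows.

```typescript
// Speculative sketch of the squint-to-zoom behavior described above.
// The eyelid "aperture" signal and nextLogicalZoom() are assumptions.

type Aperture = "squint" | "neutral" | "wide";

const HOLD_THRESHOLD_MS = 333; // ~a third of a second switches to continuous zoom

// Placeholder: a real system would pick the next logical framing
// (person → couch → room) from gaze and scene understanding.
function nextLogicalZoom(current: number, direction: "in" | "out"): number {
  return direction === "in" ? current * 4 : current / 4;
}

function makeSquintZoom() {
  let zoom = 1;
  let heldMs = 0;
  let lastAperture: Aperture = "neutral";

  // Call once per frame with the current eyelid state and elapsed time.
  return function update(aperture: Aperture, dtMs: number): number {
    if (aperture === "neutral") {
      // A brief squint or widening that just ended: step to the next framing.
      if (lastAperture !== "neutral" && heldMs < HOLD_THRESHOLD_MS) {
        zoom = nextLogicalZoom(zoom, lastAperture === "squint" ? "in" : "out");
      }
      heldMs = 0;
    } else {
      heldMs += dtMs;
      if (heldMs >= HOLD_THRESHOLD_MS) {
        // Held gesture: continuous zoom at a constant multiplicative rate
        // for as long as the squint or widening is maintained.
        const ratePerSecond = aperture === "squint" ? 2 : 0.5;
        zoom *= Math.pow(ratePerSecond, dtMs / 1000);
      }
    }
    lastAperture = aperture;
    return zoom;
  };
}
```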

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.

Mission Briefing

Once the Prometheus crew has been fully revived from their hypersleep, they gather in a large gymnasium to learn the details of their mission from a prerecorded volumetric projection. To initiate the display, David taps the surface of a small tablet-sized handheld device six times, and looks up. A prerecorded VP of Peter Weyland appears and introduces the scientists Shaw and Holloway.

This display does not appear to be interactive. Weyland does mention and gesture toward Shaw and Holloway in the audience, but they could have easily been in assigned seats.

Cue Rubik’s Space Cube

After his introduction, Holloway places an object on the floor that looks like a silver Rubik’s Cube with a depressed black button in the center-top square.

Prometheus-055

He presses a middle-edge button on the top, and the cube glows and sings a note. Then a glowing-yellow “person” icon appears at the place he touched, confirming his identity and that it’s ready to go.

He then presses an adjacent corner button. Another glowing-yellow icon appears underneath his thumb, this one a triangle-within-a-triangle, and a small projection grows from the side. Finally, when he presses the black button, all of the squares on top open on hinged lids, and the portable projection begins. A row of 7 (or 8?) “blue-box” style volumetric projections appears, each showing its 3D contents with a continuous, slight rotation.

Gestural control of the display

After describing the contents of each of the boxes, he taps the air toward either end of the row (there is a sparkle-sound to confirm the gesture) and brings his middle fingers together as if in a prayer position. In response, the boxes slide to the center as a stack.

He then twists his hands in opposite directions, keeping the fingerpads of his middle fingers in contact. As he does this, the stack merges.

Prometheus-070

Then a forefinger tap summons an overlay that highlights a star pattern on the first plate. A middle finger swipe to the left moves the plate and its overlay off to the left. The next plate automatically highlights its star pattern, and he swipes it away. Next, with no apparent interaction, the plate dissolves in a top-down disintegration-wind effect, leaving only the VP spheres that illustrate the star pattern. These grow larger.

Holloway taps the topmost of these spheres, and the VP zooms through interstellar space to reveal an indistinct celestial sphere. He then taps the air again (nothing in particular is beneath his finger) and the display zooms to a star. Another tap zooms to a VP of LV-223.

Prometheus_VP-0030

Prometheus_VP-0031

After a beat of about 9 seconds, the presentation ends, and the VP of LV-223 collapses back into its floor cube.

Evaluating the gestures

In Chapter 5 of Make It So we list the seven pidgin gestures that Hollywood has evolved. The gestures seen in the Mission Briefing confirm two of these: Push to Move and Point to Select, but otherwise they seem idiosyncratic, not matching other gestures seen in the survey.

That said, the gestures seem sensible. On tapping the “bookends” of the blue boxes, Holloway’s finger pads come to represent the extents of the selection, so bringing them together is a reasonable gesture to indicate stacking. The twist gesture seems to lock the boxes in place, to break the connection between them and his fingertips. This twist gesture turns his hand like a key in a lock, so it has a physical analogue.

It’s confusing that a tap would perform four different actions (highlight star patterns in the blue boxes, zoom to the celestial sphere, zoom to the star, zoom to LV-223), but there is no indication that this is a platform for manipulating VPs as much as it is presentation software. With this in mind, he could have arbitrarily assigned any gesture to simply “advance the slide.”