Hoverstuff

Hover technology is a fixture of the film’s 2015 (as imagined in 1985), and it appears in many places.

Hoverboards

BttF_075

When Marty has trouble with Griff Tannen, he borrows a young girl’s hover scooter and breaks off its handlebar. He’s able to put his skateboarding skills to use on the resulting hoverboard.

Griff and his gang chase Marty on their own hoverboards. Griff has a top-of-the-line hoverboard labeled a “Pit Bull.” Though Marty clearly has to supply forward momentum to his manually, Griff’s has miniature swivel-mount jet engines that (seem to) respond to the way he shifts his weight on the board.

Hovertraction

BttF_097

George requires traction for a back problem, but this doesn’t ground him. A hover device clamps his ankles in place and responds to foot motions to move him around.

Hover tech is ideal for lean-based control, like that of a Segway. That’s just what seems to be at work in the hoverboard and hovertraction devices. Lean in the direction you wish to travel, just like walking. No modality, just new skills to learn.

Carrier Control

The second instantiation of videochat with the World Security Council that we see is when Fury receives their order to bomb the site of the Chitauri portal. (Here’s the first.) He takes this call on the bridge, and rather than a custom hardware setup, this is a series of windows that overlay an ominous red map of the world in an app called CARRIER CONTROL. These windows represent a built-in chat feature for discussing this very topic. There is some fuigetry on the periphery, but our focus is on these windows and the conversation happening through them.

Avengers-fury-secure-transmission01

In this version of the chat, we are assured that it is a SECURE TRANSMISSION by a legend across the top of each, but there is not the same level of assurance as in the videoconference room. If it’s still HOTP, Fury isn’t notified of it. There’s a tiny 01_AZ in the upper right of every screen, but it never changes and is the same for each participant. (An homage to Arizona? Lighter Andrew Zink? Cameraman Arthur Zajac?) Though this is a more desperate situation, you imagine that the need for security is no less dire. Having that same cypher key would be comforting if it is in fact a policy.
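If the system really were using HOTP (RFC 4226), each participant’s window could show a short one-time code derived from a shared secret and a moving counter, which is exactly the kind of changing cypher key that the static 01_AZ fails to be. A minimal sketch of standard HOTP, using only Python’s standard library:

```python
import hashlib
import hmac
import struct

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """RFC 4226 HOTP: HMAC-SHA1 over a big-endian counter, dynamically truncated."""
    msg = struct.pack(">Q", counter)                       # 8-byte moving counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()  # 20-byte HMAC-SHA1
    offset = digest[-1] & 0x0F                             # dynamic truncation offset
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# With the RFC 4226 test secret, successive counters yield successive codes:
# hotp(b"12345678901234567890", 0) -> "755224"
# hotp(b"12345678901234567890", 1) -> "287082"
```

Because the counter advances with each session, every call would display a fresh code, and a stale or repeated one (like that unchanging 01_AZ) would itself be a red flag.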

Different sizes of windows in the app seem to indicate a hierarchy, since the largest window is the fellow who does most of the talking in both conferences, and it does not change as others speak. Such an automated layout would spare Fury the hassle of having to manage multiple windows, though visually these look more like individual objects he’s meant to manipulate. Poor affordances.

dismiss

The only control we see is when Fury dismisses them, and to do this he just taps at the middle of the screen. The teleconference window is “push wiped” by a satellite view of New York City. Fine, he feels like punching them. But…

a) If a tap means “dismiss,” how does he actually select anything else in that interface?

b) A swipe would have been more meaningful, and in line with the gestural pidgin I identified in the gestural chapter of the book.

And of course, if this was the real world, you’d hope for better affordances for what can be done on this window across the board.

So: though mostly effective narratively, it could use some polish.

Loki’s glaive: Projectile gestures

TRIGGER WARNING: IF YOU ARE PRONE TO SEIZURES, this is not the post for you. In fact, you can just read the text and be quit of it. The more neurologically daring of you can press “MORE,” but you have been forewarned.

If the first use of Loki’s glaive is as a melee weapon, the second is as a projectile weapon. Loki primes it, it glows fiercely blue-white, and then he fires it with usually-deadly accuracy, to the sorrow of his foes.

This blog is not interested in the details of the projectile, but what is interesting is the interface by which he primes and fires it. How does he do it? Let’s look. He fires the thing 8 times over the course of the movie. What do we see there?

Priming

At first I thought there was no priming mechanism, or that it was invisible. After all, we don’t see him squeeze it or anything. But braving the gifs, I noticed that there is a gesture that precedes the glow, and it’s his expression: he gets haterface right before he fires. The only time we can’t verify it is when he’s not looking at the camera. Which is a nifty realization, because it means the firing mechanism is an affective interface: a brain interface capable of deducing emotion.

Firing

If that’s how he primes it, loading the chamber so to speak, how does he launch it? Most of the time he fires it, he does this gesture thing, where he kind of slams the projectile away: With the glaive pointed forward in his right hand, he cocks his left arm back and then in one fast jerk, he pulls the glaive back and thrusts his left hand forward towards the target, counterbalancing the weight and sending the Magic Missile to do its nefarious work.

But then there’s this fight with Thor atop Stark tower, and for one particularly dancy move he spins around, lays the glaive across his shoulders until it’s pointed at his brother, and it fires. There’s no cocking back or counterbalancing. It just goes.

So what’s going on there? Well, it’s not clear, but at the very least it means that the thing is responding to something other than his usual gesture. We can’t see his face, so it’s Occam-logical that it’s affective, i.e. responding to his haterface again.

Ok, then, what’s all the dramatic gesture for throughout the rest of the film? Well, I think Stark said it best when he explained that, “Loki is a full-tilt diva. He wants flowers. He wants parades.” He must dance his hate, and the glaive lets him do that. Better him than Thanos, I guess.

Note that in this way the glaive serves a humane purpose similar to what Ruby Rhod’s staff does for him: it allows him to express his abundance of personality. I’m poking a bit of fun, but in all seriousness I’m quite fond of expressive technology, of things that let us do more than do, and convey a bit of who we are.

It’s nice to see that in a sci-fi interface. Even if it’s a deadly alien weapon.

Usually he’s all…
Staff-bolt03
Staff-bolt05

But this one time he’s all…

Staff-bolt08

Breakfast Sand Table

A woman in a modern kitchen holding a piece of food while looking at a countertop, with a man in a black shirt leaning on the counter beside her, both engaged in conversation.

While eating breakfast, Vika views the overnight surveillance via a touchscreen interface that is inset into the top of a white table.

Which touch tech?

Anyone interested in the touch technology should take note: Vika places her coffee cup and breakfast plate directly on the surface, which indicates that it uses capacitive touch technology with a glass top. Placing dishes on a resistive touchscreen, which is made of layers of plastic and glass, would have interfered with the interactions, and such a screen would be less durable as a tabletop.

Jack joins her at the table and leans on the surface with his hand and later with his forearm, which supports the idea that the area surrounding the viewport is not touch-enabled. If it were, it would need to incorporate palm-rejection technology in order for his arm to not interfere with Vika’s interactions.
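Palm rejection of the kind mentioned above is typically a heuristic over the raw contact blobs: large or elongated contacts get classified as a resting palm or forearm and ignored, while small compact ones are delivered to the UI as touches. A toy sketch, with entirely made-up thresholds:

```python
def classify_contact(area_mm2: float, aspect_ratio: float) -> str:
    """Toy palm-rejection heuristic.
    Big or elongated blobs are treated as a resting palm/forearm and ignored;
    the area and aspect-ratio thresholds here are invented for illustration."""
    if area_mm2 > 400.0 or aspect_ratio > 2.5:
        return "palm"    # suppress: likely a resting hand or forearm
    return "finger"      # deliver to the UI as a touch point
```

Real controllers combine many more signals (blob shape over time, proximity of a stylus, touch history), but the basic ignore-the-big-blob logic is the same.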

The interface components

Oblivion-Desktop-Sandtable-001

The main viewing area is a hybrid of satellite imagery and topographic mapping, surrounded in the interface by surveillance data and video playback controls. A message next to the video playback controls reports the current location of the scav activity.

To the left of the map is a list of fuel cells that have been stolen by the scavs along with the dates they went missing. The last one on the list is flashing red to draw their attention—a new one has gone missing.

Some elements, such as the current date and the number of days into the mission, face outward at the top and the bottom to allow both Vika and Jack to view the data from either side.

A hand points at a futuristic digital map displayed on a transparent screen, showing a detailed terrain with lines and contours, alongside various mission data indicators and a date label.

The interface is responsive to touch gestures. Vika circles an area on the map and the icon indicating unusual activity turns red. She taps the icon and a video feed begins playing. Jack zooms in on the video feed by using a five-finger multi-touch “spread” gesture.

Why is the vital information facing Jack when Vika is the one using the interface?

It’s interesting to note that the most vital information, such as the list of missing drones, the video playback, and the topographic shaded relief, is oriented toward Jack. This forces Vika to process the information and videos upside-down, even though the playback controls face her.

This is particularly problematic with the topographic shaded relief. Shaded relief simulates the shadows cast by the sun on the surface. Viewed upside-down, the relief can produce a perceptual illusion (relief inversion) in which craters read as hills and hills as craters.

Better: Lenticular display

A better solution would be to utilize a lenticular interactive display. Lenticular displays are made by placing a transparent film containing tiny ridges over an image that is made up of two or more images sectioned into bands and displayed in alternating lines. The ridges in the film cause the eye to focus on one set of lines in order to come out with a cohesive image.

Then, as in the illustration below, Vika would only see the view illustrated by the white lines and Jack would only see the view illustrated by the black lines.

Diagram illustrating the concept of lenticular film and interlaced images with labeled sections showcasing different views.

Utilizing a lenticular display would solve the issue of the shaded relief perception illusion and allow Jack and Vika to each read the information and watch the video from their own perspective at the same time.
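The interlacing described above is simple to sketch: two views are woven together column by column, and each ridge in the lenticular film steers one set of columns to each viewer. A simplified sketch that treats each image as a list of rows:

```python
def interlace(left_view, right_view):
    """Weave two equal-size images (lists of rows) column by column:
    even columns come from left_view, odd columns from right_view.
    Under a lenticular film, each viewer then sees only one set."""
    assert len(left_view) == len(right_view)
    return [
        [l[x] if x % 2 == 0 else r[x] for x in range(len(l))]
        for l, r in zip(left_view, right_view)
    ]

# One row of a 4-pixel-wide image: Vika's view and Jack's view alternate.
# interlace([["V", "V", "V", "V"]], [["J", "J", "J", "J"]])
#   -> [["V", "J", "V", "J"]]
```

Each viewer effectively sees a half-resolution image, which is the standard trade-off of lenticular (and parallax-barrier) dual-view displays.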

The thing that gets a little tricky about utilizing a lenticular display for this solution is that it is a touchscreen. The elements being manipulated need to occupy the same position for both Jack and Vika in order for the computer to know what is being touched. This can be solved by flipping the individual elements, such as the shaded relief on the topography and the activity icons, words, etc., while keeping them in the same location on the interface.

Smart video recording and playback

So, how did the TET know where to start the video recording and playback? Given that the other interfaces in the film have the capability to detect motion, it is likely that the video recording was automatically triggered by the scavs when they moved in to attack the drone.

Unfortunately, there is no screentime granted to the use of the actual video playback controls, but assuming they are as smart as the rest of the interfaces in the film, it is safe to expect these controls to be more useful than simply sequencing through the scenes. The interface would probably allow Vika to scrub through a grid of thumbnails to quickly find any scenes of interest.

Why circle and tap to play?

The activity alert icon on the map was static white until Vika circled an area surrounding it. Only then did it start flashing red. Other interfaces on Vika’s main desktop provide immediate feedback with an audible alert and a flashing red symbol. Why would this one require the extra effort of circling the area? It would seem simpler to flash red from the beginning and allow Vika to immediately tap on the symbol for video playback.

It is possible that she is circling the area that she wants the TET feed to focus on, but if the TET has the capability to detect the activity to begin with, it should automatically know where to focus.

Another possibility is that she is used to getting multiple alerts every morning and the circle gesture could be for playing all of the surveillance videos at the same time instead of having to tap on each one to play. If that is the case, then she may be using the circle gesture through muscle memory since people tend to use repetitive gestures without thinking about it even if there is a simpler gesture available. If a gesture isn’t used very often, users tend to forget about it.

Overall, this is a nice system that effectively allows Jack and Vika to get a quick overview of the events of the previous night and gives them a heads-up as to what is in store for them that day.

The bug VP

StarshipT_030

In biology class, the (unnamed) professor points her walking stick (she’s blind) at a volumetric projector. The tip flashes for a second, and a volumetric display comes to life. It illustrates for the class what one of the bugs looks like. The projection device is a cylinder with a large lens atop a rolling base. A large black plug connects it to the wall.

The display of the arachnid appears floating in midair, a highly saturated screen-green wireframe that spins. It has very slight projection rays at the cylinder and a "waver" of a scan line that slowly rises up the display. When it initially illuminates, the channels are offset and only unify after a second.

STARSHIP_TROOPERS_vdisplay

StarshipT_029

The top and bottom of the projection are ringed with tick lines, and several tick lines run vertically along the height of the bug for scale. A large, lavender label at the bottom identifies this as an ARACHNID WARRIOR CLASS. There is another lavender key too small for us to read. The arachnid in the display is still, though the display slowly rotates around its y-axis, clockwise from above. The instructor uses this as a backdrop for discussing arachnid evolution and “virtues.”

After running for 14 seconds, the display shuts down automatically.

STARSHIP_TROOPERS_vdisplay2

Interaction

It’s nice that it can be activated with her walking stick, an item we can presume isn’t common, since she’s the only apparently blind character in the movie. It’s essentially gestural, though what a blind user needs with a flash for feedback is questionable. Maybe that signal is somehow for the students? What happens for sighted teachers? Do they need a walking stick? Or would a hand do? What’s the point of the flash then?

That it ends automatically seems pointlessly limited. Why wouldn’t it continue to spin until it’s dismissed? Maybe the way she activated it indicated it should only play for a short while, but it didn’t seem like that precise a gesture.

Of course it’s only one example of interaction, but there are so many other questions to answer. Are there different models that can be displayed? How would she select a different one? How would she zoom in and out? Can it display animations? How would she control playback? There are quite a lot of unaddressed details for an imaginative designer to ponder.

Display

The display itself is more questionable.

Scale is tough to tell on it. How big is that thing? Students would have seen video of it for years, so maybe it’s not such an issue. But a human for scale in the display would have been more immediately recognizable. Or better yet, no scale: Show the thing at 1:1 in the space so its scale is immediately apparent to all the students. And more appropriately, terrifying.

And why the green wireframe? The bugs don’t look like that. If it were showing some important detail, like carapace density, maybe, but this looks pretty even. How about some realistic color instead? Do they think it would scare kids? (More than the “gee-whiz!” girl already is?)

And lastly there’s the title. Yes, having it rotate accommodates viewers in 360 degrees, but it only reads correctly half the time. Copy it, flip the copy 180º on the y-axis, and stack the two, and you’ve got the most important textual information readable at almost any time from the display.

Better, of course, would be more personal interaction: individual displays, or augmented reality where a student could turn the arachnid to examine it themselves, control the zoom, or follow up for more information. (Want to know more?) But the school budget in the world of Starship Troopers was undoubtedly stripped to increase the military budget (what a crappy world that would be, amirite?), and this single mass display might be more cost-effective.

Klaatunian interior

DtESS-034

When the camera first follows Klaatu into the interior of his spaceship, we witness the first gestural interface seen in the survey. To turn on the lights, Klaatu places his hands in the air before a double column of small lights embedded in the wall to the right of the door. He holds his hand up for a moment, and then smoothly brings it down before these lights. In response, the lights on the wall extinguish and an overhead light illuminates. He repeats this gesture on a similar double column of lights to the left of the door.

The nice thing to note about this gesture is that it is simple and easy to execute. The mapping also has a nice physical referent: When the hand goes down like the sun, the lights dim. When the hand goes up like the sun, the lights illuminate.
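That sun-metaphor mapping is easy to state in code: the hand’s height between the bottom and top of the light column maps linearly to brightness. A toy sketch, with hypothetical coordinates:

```python
def light_level(hand_y: float, bottom_y: float, top_y: float) -> float:
    """Map hand height along the light column to brightness in [0, 1].
    Hand at the top -> full light; hand at the bottom -> dark.
    Assumes top_y > bottom_y; all coordinates are hypothetical."""
    fraction = (hand_y - bottom_y) / (top_y - bottom_y)
    return max(0.0, min(1.0, fraction))
```

A continuous mapping like this would even allow dimming, though in the film the lights appear to simply switch between on and off states.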

He then approaches an instrument panel with an array of translucent controls; like a small keyboard with extended, plastic keys. As before, he holds his hand a moment at the top of the controls before swiping his hand in the air toward the bottom of the controls. In response, the panels illuminate. He repeats this on a similar panel nearby.

Having activated all of these elements, he begins to speak in his alien tongue to a circular, strangely lit panel on the wall. (The film gives no indication as to the purpose of his speech, so no conclusions about its interface can be drawn.)

DtESS-049

Gort also operates the translucent panels with a wave of his hand. To her credit, perhaps, Helen does not try to control the panels, but we can presume that, like the spaceship, some security mechanism prevents unauthorized control.

Missing affordances

Who knows how Klaatu perceives this panel. He’s an alien, after all. But for us mere humans, the interface is confounding. There are no labels to help us understand what controls what. The physical affordances of different parts of the panels imply sliding along the surface, touch, or turning, not gesture. Gestural affordances are tricky at best, but these translucent shapes actually signal something different altogether.

Overcomplicated workflow

And you have to wonder why he has to go through this rigmarole at all. Why must he turn on each section of the interface, one by one? Can’t they make just one “on” button? And isn’t he just doing one thing: Transmitting? He doesn’t even seem to select a recipient, so it’s tied to HQ. Seriously, can’t he just turn it on?

Why is this UI even here?

Or better yet, can’t the microphone just detect when he’s nearby, illuminate to let him know it’s ready, and subtly confirm when it’s “hearing” him? That would be the agentive solution.

Maybe it needs some lockdown: Power

OK. Fine. If this transmission consumes a significant amount of power, then an even more deliberate activation is warranted, perhaps the turning of a key. And once on, you would expect to see some indication of the rate of power depletion and remaining power reserves, which we don’t see, so this is pretty doubtful.

Maybe it needs some lockdown: Security

This is the one concern that might warrant all the craziness. That the interface has no affordances means that Joe Human Schmo can’t just walk in and turn it on. (In fact the misleading bits help with a plausible diversion.) The “workflow” then is actually a gestural combination that unlocks the interface and starts it recording. Even if Helen accidentally discovered the gestural aspect, there’s little to no way she could figure out those particular gestures and start intergalactic calls for help. And remembering that Klaatu is, essentially, a space-ethics recon cop, this level of security might make sense.

Thermoptic camouflage

GitS-thermoptic-03

Kusanagi is able to mentally activate a feature of her skintight bodysuit and hair(?!) that renders her mostly invisible. It does not seem to affect her face by default. After her suit has activated, she waves her hand over her face to hide it. We do not see how she activates or deactivates the suit in the first place. She seems to be able to do so at will. Since this is not based on any existing human biological capacity, a manual control mechanism would need some biological or cultural referent. The gesture she uses—covering her face with open-fingered hands—makes the most sense, since even with a hand it means, “I can see you but you can’t see me.”

In the film we see Ghost Hacker using the same technology embedded in a hooded coat he wears. He activates it by pulling the hood over his head. This gesture makes a great deal of physical sense, similar to the face-hiding gesture. Donning a hood would hide your most salient physical identifier, your face, so having it activate the camouflage is a simple synecdochic extension.

GitS-thermoptics-30

The spider tank also features this same technology on its surface, where we learn how delicate that surface is: it is disabled by a rain of glass falling on it.

GitS-spidertank-01

This tech is less than perfect, distorting the background behind it and occasionally flashing with vigorous physical activity. And of course it cannot hide the effects the wearer creates in the environment, as we see with splashes in the water and citizens in a crowd being bumped aside.

Since this imperfection runs counter to the wearer’s goal, I’d design a silent, perhaps haptic, feedback channel to let the wearer know when they’re moving too fast for the suit’s processors to keep up, as a reinforcement of whatever visual artifacts they themselves are seeing.

UPDATE: When this was originally posted, I used the incorrect concept “metonym” to describe these gestures. The correct term is “synecdoche,” and the post has been updated to reflect that.

Gestural disguise

TheFifthElement-disguise-002

When the Mangalores meet with Zorg to deliver (what they think are) the stones, their leader Aknar is wearing a human disguise. The exact nature of the speculative technology is difficult to determine. (In fact, it’s entirely arguable that this is a biological ability, but it’s more useful to presume it’s not.)

Zorg tells Aknar, “What is that you? What an ugly face. It doesn’t suit you. Take it off.” Aknar strains his chin upward and shakes his head rapidly. As he does so, the disguise fades to reveal his true face.

TheFifthElement-disguise-008

Presuming it’s a technology, the gesture is a nice design choice for the interaction. It’s not a gesture that’s likely to be done accidentally, and has a nice physical metaphor—that of shaking off water. The physicality makes it easy to remember. Plus, being a head gesture, it can be deployed in the field even when carrying a weapon. This makes it possible to dismiss and show your identity to comrades without the risk of lowering your guard. It does temporarily limit the wearer’s ability to sense danger, but I suspect Mangalores care more about keeping their finger on the trigger.

Of course it raises the question of whether what results from the shake is just another disguise, but that would depend on some external system of multifactor authentication that’s separate from the gesture.

Neuro-Visor

The second interface David has for monitoring those in hypersleep is the Neuro-Visor, a helmet that lets him perceive their dreams. The helmet is round, solid, and white. The visor itself is yellow and backlit. The yellow is the same greenish-yellow as underneath the hypersleep beds, clearly establishing the connection between the devices for a new user. When we see David’s view from inside the visor, it is a cinematic, fully immersive 3D projection of events in the sleeper’s dreams, presented in the “spot elevations” style that predominates throughout the film (more on this display technique later).

Later in the movie we see David using this same helmet to communicate with Weyland, who is in a hypersleep chamber but somehow conscious enough to hold a back-and-forth dialogue with David. We see neither David’s nor Weyland’s perspective in the scene.

David communicates with Weyland.

As an interface, the helmet seems straightforward. He has one Neuro-Visor for all the hypersleep chambers, and to pair the device with a particular one, he simply touches the surface of the chamber near the hypersleeper’s head. Cyan interface elements on that translucent surface confirm the touch and presumably allow some degree of control over the visuals. To turn the Neuro-Visor off, he simply removes it from his head. These are simple and intuitive gestures that make the Neuro-Visor one of the best and most elegantly designed interfaces in the movie.