VR Goggles


At the dinner table, both Marty Jr. and Marlene have VR goggles. Marty Jr. wears his continuously, but Marlene is more polite and rests hers around her neck when with the family. When she receives a call, red LEDs flash the word “PHONE” on the outside of the goggles as they ring. This would be a useful signal if the ringer volume were turned down or the ring were baffled by ambient sounds.


Marty Jr.’s goggles are on, and he announces to Marty Sr. that the phone is for him and that it’s Needles.

This implies a complete wireless caller ID system (caller ID had only just come to market in the United States the year before the movie was released) and a single household number shared amongst multiple communications devices simultaneously, which was not available at the time (or hey, even now), so it’s quite forward-looking. Additionally, it lets the whole social circle help manage communication requests, even if it sacrifices a bit of privacy.
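If we speculate about how this would work under the hood, the simplest mechanism is a publish-subscribe fan-out: the household line receives a call and pushes a ring event, caller ID included, to every registered device at once, and each device renders the alert in its own way (the flashing red “PHONE” LEDs being one renderer). A minimal sketch, with all names hypothetical:

```python
# A minimal sketch of the "one number, many devices" idea, assuming a
# simple publish-subscribe fan-out. All names here are hypothetical,
# not anything established by the film.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Call:
    caller_id: str  # e.g., "Needles"
    number: str

class HouseholdLine:
    """One phone number fanned out to every device in the house."""
    def __init__(self) -> None:
        self._devices: list[Callable[[Call], None]] = []

    def register(self, on_ring: Callable[[Call], None]) -> None:
        self._devices.append(on_ring)

    def incoming(self, call: Call) -> None:
        for on_ring in self._devices:  # all devices alert simultaneously
            on_ring(call)

def goggle_alert(call: Call) -> None:
    # Flash "PHONE" on the outer LEDs; show the caller inside the lenses.
    print(f"[goggles] PHONE flashing: call from {call.caller_id}")

def kitchen_screen_alert(call: Call) -> None:
    print(f"[kitchen] Incoming call: {call.caller_id} ({call.number})")

line = HouseholdLine()
line.register(goggle_alert)
line.register(kitchen_screen_alert)
line.incoming(Call(caller_id="Needles", number="555-0142"))
```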

Night Vision Goggles


Gennaro: “Are they heavy?”
Excited Kid: “Yeah!”
Gennaro: “Then they’re expensive. Put them back.”
Excited Kid: [nope]

The Night Vision Goggles are large binoculars sized to fit an adult head. They are stored in a padded case in the Tour Jeep’s trunk. When activated, a single red light illuminates in the “forehead” of the device, and four green lights appear on the rim of each lens. The green lights rotate around the lens as the user zooms the binoculars in and out. On a styling point, the goggles are painted in a very traditional and very adorable green-and-yellow striped dinosaur pattern.
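If the rotating green lights are a zoom indicator, the simplest mapping would rotate the lit pattern in proportion to the current zoom level. A quick sketch of that mapping; every number in it is assumed rather than observed:

```python
# A sketch of the lights-as-zoom-indicator reading: the four-light
# pattern rotates in proportion to zoom. Every number here (zoom range,
# degrees of travel) is assumed, not taken from the film.
LEDS_PER_LENS = 4

def led_angles(zoom: float, min_zoom: float = 1.0, max_zoom: float = 10.0,
               travel_degrees: float = 90.0) -> list[float]:
    """Angular positions (degrees) of the four lights at a zoom level."""
    t = (zoom - min_zoom) / (max_zoom - min_zoom)  # normalize to 0..1
    offset = t * travel_degrees                    # pattern rotation
    return [(offset + i * 360.0 / LEDS_PER_LENS) % 360.0
            for i in range(LEDS_PER_LENS)]

for z in (1.0, 5.5, 10.0):
    print(f"{z}x:", [round(a) for a in led_angles(z)])
```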

Tim holds the goggles up as he plays with them, and they look too large for his head (although we don’t see him adjust the head support at all, so he might not have known it was adjustable). He adjusts the zoom using two hidden controls, one on each side. It isn’t obvious how these work. It could be that…

  • There are no controls, and it automatically focuses on the thing in the center of the view or on the thing moving.
  • One side zooms in, and the other zooms out.
  • Both controls have a zoom in/zoom out ability.
  • Each side control powers its own lens.
Admittedly, the last option is the least likely. Unfortunately, the movie just doesn’t give us enough information, leaving it as an exercise for us to consider (though a sketch of the second scheme follows below).
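For the curious, here’s what the second scheme might look like if we had to build it: one stepped zoom per press, in on the left, out on the right. The step size and limits are pure invention:

```python
# A sketch of the second scheme above: the left control zooms in, the
# right control zooms out, one step per press. Step size and limits
# are assumed values, not from the film.
class Binoculars:
    ZOOM_MIN, ZOOM_MAX, ZOOM_STEP = 1.0, 10.0, 1.5  # assumed values

    def __init__(self) -> None:
        self.zoom = self.ZOOM_MIN

    def left_control(self) -> None:   # zoom in
        self.zoom = min(self.zoom * self.ZOOM_STEP, self.ZOOM_MAX)

    def right_control(self) -> None:  # zoom out
        self.zoom = max(self.zoom / self.ZOOM_STEP, self.ZOOM_MIN)

b = Binoculars()
b.left_control()
b.left_control()
print(f"zoom: {b.zoom:.2f}x")  # zoom: 2.25x
```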


Ectogoggles

Regular readers will have noticed that Starship Troopers has been on a bit of a pause of late, and the reason is that I am managing a bizarrely busy stint of presentations related to the scifiinterfaces project. Also it’s Halloweek and I want to do more spooky stuff. Last week I wondered e-loud whether Gozer from Ghostbusters was a pink Sith, but this post actually talks about a bit of the interfaces from the movie.

When the Ghostbusters are called to the Sedgewick Hotel, they track a ghost called Slimer from his usual haunt on the 12th floor to a ballroom. There Ray dons a pair of asymmetrical goggles that show him information about the “psycho-kinetic energy (PKE) valences” in the area. (The Ghostbusters wiki—and of course there is such a thing—identifies these alternately as paragoggles or ectogoggles.) He uses the goggles to peek from behind a curtain to look for Slimer.


Far be it from this humble blog to try to reverse-engineer what PKE valences actually are, but let’s presume it generally means ghosts and ghost-related activity. Here’s an animated gif of the display for your ghostspotting pleasure.

[Animated gif: the ectogoggles display]

As he scans the room, we see a shot from his perspective. Five outputs augment the ordinary view the goggles offer.


Perpvision


The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show rarely spends time showing inputs without their corresponding outputs, and then only when the users are in the background and unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way because such a giant piece of the puzzle is missing. Is it a brain input? Is the technology agentive? Is it some hidden input like Myo’s muscle sensing? Without knowing the input, a regular review is kind of pointless. All I can do is list its effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.
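If the view is thermographic, the rendering rule could be as simple as three temperature bands: body-warm pixels drawn bright green, slightly-warm furniture dim green, and everything else black. A sketch; the thresholds are guesses, since the film establishes none of this:

```python
# A sketch of the view as three-band thermography: body-warm pixels
# bright green, slightly-warm furniture dim green, everything else
# black. The thresholds are guesses; the film specifies none of this.
def render_pixel(temp_c: float) -> tuple[int, int, int]:
    """Map a sensed temperature to an (R, G, B) screen-green value."""
    if temp_c >= 30.0:    # body heat: bright green
        return (0, 255, 0)
    if temp_c >= 24.0:    # residual warmth (furniture): dim green
        return (0, 96, 0)
    return (0, 0, 0)      # walls, floor, air: black

for t in (36.5, 26.0, 18.0):
    print(f"{t}°C ->", render_pixel(t))
```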

She adjusts the view in steps, zooming closer and closer until her field of vision is filled with the two men conversing in her earpiece. When she hears mention of Project 2501, she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she only sees these people, and not everyone else in the building below? How does she tell the algorithm that she wants to see furniture and not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there”? How did she set the audio? Why does the audio not change with each successive zoom? If the audio and video are from separate systems, how did she combine them?

Squint gestures

If I had to speculate what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.
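Concretely, the goggles would need to compare the measured eyelid aperture to a calibrated resting baseline, with a deadband so ordinary blinks and expressions don’t fire the zoom. A sketch of that mapping, with assumed values throughout:

```python
# A sketch of the squint-gesture mapping, assuming the goggles can
# measure eyelid aperture against a calibrated resting baseline. The
# baseline and deadband values are assumptions.
def gesture_to_zoom(aperture_mm: float, baseline_mm: float = 10.0,
                    deadband: float = 0.15) -> str:
    """Return 'zoom_in', 'zoom_out', or 'hold' for one eyelid sample."""
    ratio = aperture_mm / baseline_mm
    if ratio < 1.0 - deadband:   # squint: zoom in
        return "zoom_in"
    if ratio > 1.0 + deadband:   # widen: zoom out
        return "zoom_out"
    return "hold"                # ordinary blinking stays in the deadband

for mm in (7.5, 10.0, 12.5):
    print(f"{mm} mm ->", gesture_to_zoom(mm))
```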

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision, minus some border, and then use squint gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting. Another, the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
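One way to model this is a containment hierarchy that each widening gesture walks up one level, with the held gesture falling back to continuous zoom. A sketch, with an invented hierarchy and rate constant:

```python
# A sketch of the "next logical thing" zoom: a containment hierarchy
# that each widening gesture walks up one level, plus a held gesture
# for arbitrary, logarithmically increasing zoom. The hierarchy and
# rate constant are invented for illustration.
import math

CONTAINMENT = {"person": "couch", "couch": "room", "room": "building"}

def zoom_out_step(focus: str) -> str:
    """Snap the framing to the next logical containing object."""
    return CONTAINMENT.get(focus, focus)

def held_zoom(current: float, held_seconds: float, rate: float = 2.0) -> float:
    """Continuous zoom while the gesture is held, log-shaped over time."""
    return current * (1.0 + math.log1p(rate * held_seconds))

focus = "person"
for _ in range(3):
    focus = zoom_out_step(focus)
    print("now framing:", focus)
print("held-gesture zoom factor:", round(held_zoom(1.0, 0.5), 2))
```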

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.
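In code, that might amount to padding the bounding box of whatever the gaze settles on by a context margin. A sketch, with an arbitrary margin:

```python
# A sketch of deriving the view extents from gaze: take the bounding
# box of the focused subject and pad it by a context margin. The 20%
# margin is an arbitrary choice.
from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def frustum_extents(subject: Box, margin: float = 0.2) -> Box:
    """Expand the subject's bounds so surrounding context stays in frame."""
    dx, dy = subject.w * margin, subject.h * margin
    return Box(subject.x - dx, subject.y - dy,
               subject.w + 2 * dx, subject.h + 2 * dy)

print(frustum_extents(Box(x=4.0, y=1.0, w=2.0, h=3.0)))
```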