VR Goggles

BttF_118

At the dinner table, both Marty Jr. and Marlene have VR goggles. Marty Jr. wears his continuously, but Marlene is more polite and rests hers around her neck when with the family. When she receives a call, red LEDs flash the word “PHONE” on the outside of the goggles as they ring. This would be a useful signal if the ringer volume were turned down or the sound were drowned out by ambient noise.

BttF_120

Marty Jr.’s goggles are on, and he announces to Marty Sr. that the phone is for him and that it’s Needles.

This implies a complete wireless caller ID system (which had only just been released to market in the United States the year before the movie came out) and a single household number distributed amongst multiple communications devices simultaneously, which was not available at the time (or hey, even now), so it’s quite forward-looking. Additionally, it lets the whole social circle help manage communication requests, even if it sacrifices a bit of privacy.
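Just to make that concrete, here’s a minimal sketch of how one household number might fan an incoming call out to every registered device, with the goggles’ “PHONE” flash as one subscriber. All of the names here (HouseholdLine, VRGoggles, notify) are hypothetical, not anything established by the film.

```python
# Hypothetical sketch: one household number fanning an incoming call out to
# every registered device, so anyone at the table can see who's calling.

from dataclasses import dataclass, field

@dataclass
class IncomingCall:
    caller_id: str   # e.g. "Needles"
    number: str

@dataclass
class HouseholdLine:
    devices: list = field(default_factory=list)   # anything with a notify() method

    def register(self, device):
        self.devices.append(device)

    def ring(self, call: IncomingCall):
        # Broadcast the caller ID to every device at once.
        for device in self.devices:
            device.notify(call)

class VRGoggles:
    def __init__(self, owner: str):
        self.owner = owner

    def notify(self, call: IncomingCall):
        # Flash "PHONE" on the outward-facing LEDs and show the caller inside.
        print(f"[{self.owner}'s goggles] PHONE ... it's {call.caller_id}")

line = HouseholdLine()
line.register(VRGoggles("Marty Jr."))
line.register(VRGoggles("Marlene"))
line.ring(IncomingCall(caller_id="Needles", number="555-0142"))
```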

Night Vision Goggles

Screenshot-(248)

Gennaro: “Are they heavy?”
Excited Kid: “Yeah!”
Gennaro: “Then they’re expensive. Put them back.”
Excited Kid: [nope]

The Night Vision Goggles are large binoculars sized to fit an adult head. They are stored in a padded case in the Tour Jeep’s trunk. When activated, a single red light illuminates in the “forehead” of the device, and four green lights appear on the rim of each lens. The green lights rotate around the lens as the user zooms the binoculars in and out. On a styling point, the goggles are painted in a very traditional and very adorable green-and-yellow-striped dinosaur pattern.

Tim holds the goggles up as he plays with them, and it looks like they are too large for his head (although we don’t see him adjust the head support at all, so he might not have known they were adjustable).  He adjusts the zoom using two hidden controls—one on each side.  It isn’t obvious how these work. It could be that…

  • There are no controls, and it automatically focuses on the thing in the center of the view or on the thing moving.
  • One side zooms in, and the other zooms out.
  • Both controls have a zoom in/zoom out ability.
  • Each side control powers its own lens (admittedly the least likely option).

Unfortunately, the movie just doesn’t give us enough information, leaving it as an exercise for us to consider.
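Still, just as an exercise, here’s a minimal sketch of the first interpretation: no explicit controls at all, with the goggles framing whatever subject is centered (or moving) in the view. The field of view, fill fraction, and function name are all invented for illustration.

```python
# Hypothetical sketch of the "no controls" interpretation: the goggles pick a
# subject (whatever is centered or moving) and set the zoom to frame it.

def auto_zoom(subject_angular_size_deg: float,
              field_of_view_deg: float = 40.0,
              fill_fraction: float = 0.8) -> float:
    """Return a zoom factor that makes the subject fill most of the view."""
    target = field_of_view_deg * fill_fraction
    return max(1.0, target / subject_angular_size_deg)

# A distant animal subtending ~2 degrees gets framed automatically...
print(auto_zoom(2.0))    # -> 16.0x
# ...while a nearby goat pen subtending 30 degrees barely zooms at all.
print(auto_zoom(30.0))   # -> ~1.07x
```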

Screenshot (241)

Ectogoggles

When the Ghostbusters are called to the Sedgewick Hotel, they track a ghost called Slimer from his usual haunt on the 12th floor to a ballroom. There Ray dons a pair of asymmetrical goggles that show him information about the “psycho-kinetic energy (PKE) valences” in the area. (The Ghostbusters wiki—and of course there is such a thing—identifies these alternately as paragoggles or ectogoggles.) He uses the goggles to peek from behind a curtain to look for Slimer.

Ghostbusters_binoculars_02

Far be it from this humble blog to try to reverse-engineer what PKE valences actually are, but let’s presume it generally means ghosts and ghost-related activity. Here’s an animated gif of the display for your ghostspotting pleasure.

Ghostoculars_gif

As he scans the room, we see a shot from his perspective. Five outputs augment the ordinary view the goggles offer.

1. A plan position indicator (like what you see on a radar) sweeps around and around in the upper left-hand corner, but never displays anything (even when Slimer appears).

2. A bar graph on the left side wavers up and down until Slimer is spotted, when it jumps to maximum. The bar graph adheres to the basic visual principle of “up means more.” It is colored with a stoplight gradient, with red at the bottom, yellow in the middle, and a bright screen-green at the top. Note that the graph builds from the bottom until it hits maximum, when its glow slides to the top to fully illuminate only the uppermost block. This is a special “max” mode that strongly draws the user’s attention. (There’s a sketch of this readout logic after the list.)

3. There is a 7-segment red LED number display just below the graph, which you might think is a numerical version of the same data, but we see it tick steadily from 03094 to 03051 during the first scan; then, after a cutaway to Ray’s face, we see it drop to 01325 and climb steadily until it hits 01333, where it holds steady and begins to blink. It hits this maximum about half a second before the graph jumps to its max.

graph

4. In the very lower left is a red mode label reading “KER,” which blinks until the numbers hit 01333 in the second sequence, when KER disappears and is replaced with a steadily glowing green “MAX.”

What the heck is KER? I don’t think there’s any diegetic answer. Ker might be an extradiegetic shout-out to Rick Kerrigan, who was production supervisor for Entertainment Effects Group / Boss Film Studios for the film, but that’s just a guess. Otherwise I got nothin’. Anyone else?

5. In the lower right is a blurry light that blinks red until Slimer is spotted, when it blinks the same screen-green as the bar graph, sweep, and MAX label.
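If you wanted to prototype the logic of elements 2 through 5, it might look something like the sketch below. The 01333 threshold and the KER/MAX labels come from what’s on screen; the segment count and everything else are guesses.

```python
# Hypothetical sketch of the readout logic for elements 2-5. The 1333 "max"
# threshold is taken from the second on-screen sequence; the rest is guesswork.

PKE_MAX = 1333       # reading at which everything slams to maximum
BAR_SEGMENTS = 10

def render_readout(pke_reading: int) -> dict:
    at_max = pke_reading >= PKE_MAX
    if at_max:
        # "Max" mode: only the topmost block glows, strongly drawing the eye.
        bar = [False] * (BAR_SEGMENTS - 1) + [True]
    else:
        lit = round(BAR_SEGMENTS * pke_reading / PKE_MAX)
        bar = [i < lit for i in range(BAR_SEGMENTS)]
    return {
        "bar": bar,                                     # element 2
        "counter": f"{pke_reading:05d}",                # element 3: zero-padded LED display
        "label": "MAX" if at_max else "KER",            # element 4
        "label_blinks": not at_max,
        "corner_light": "green" if at_max else "red",   # element 5
    }

print(render_readout(640))    # wavering bar, blinking KER, red corner light
print(render_readout(1333))   # max mode: top block lit, steady MAX, green light
```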

Narratively, this is a tone interface: it doesn’t add anything to the plot, and only helps us experience and understand how it is the busters do their busting. Because it’s a tone interface, the changes below would improve believability without affecting the plot.

Ghostbusters_binoculars_08

How to better support busting

The immediate improvements you could make to this as a “real” ghostbusting tool are fairly obvious:

  • Make the plan position indicator, you know, work.
  • Have the numbers match the graph, or, if they’re actually measuring different things, put the LED display on the other side of the view.
  • I’d change the graph color indicating no-PKE to black or dark gray. Red often connotes danger, and really, if there’s no PKE, you’re safe from the supernatural. Plus the blackbody radiation spectrum has a more physical reference and is therefore more immediate.
  • You could even lose the bar graph—which requires looking away from the view—and replace it with a line around the view that changes color similarly, putting the augmentation in the periphery (see the sketch after this list).
  • Lose the distracting blinking red light entirely. It draws attention at a time when the Buster’s eyes need to be on the view, and it’s just duplicating information already provided in a better way by the graph.
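Here’s that peripheral ring idea as a minimal sketch: a normalized PKE level drives the color of a border around the view, from near-black (nothing) up to screen-green (maximum). Only the black-for-no-PKE idea comes from the list above; the color stops and the 0-to-1 scale are my own assumptions.

```python
# Hypothetical sketch of the peripheral ring: PKE level drives the color of a
# border around the view, so the buster never has to look away from the scene.

def lerp(a, b, t):
    """Linear interpolation between two RGB triples."""
    return tuple(round(a[i] + (b[i] - a[i]) * t) for i in range(3))

STOPS = [
    (0.0, (20, 20, 20)),     # no PKE: near-black, stays out of the way
    (0.5, (200, 180, 0)),    # something's stirring: yellow
    (1.0, (0, 255, 80)),     # full Slimer: screen-green
]

def ring_color(pke_level: float) -> tuple:
    """Map a normalized PKE level (0.0-1.0) to an RGB border color."""
    pke_level = max(0.0, min(1.0, pke_level))
    for (t0, c0), (t1, c1) in zip(STOPS, STOPS[1:]):
        if pke_level <= t1:
            return lerp(c0, c1, (pke_level - t0) / (t1 - t0))
    return STOPS[-1][1]

print(ring_color(0.0))   # (20, 20, 20)
print(ring_color(0.9))   # mostly screen-green
```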

But we can do better than those improvements. In the augmented reality chapter of the book, I identified levels of awareness for these devices. The ectogoggles are an example of the simplest type, sensor display, with the sweep giving an unfulfilled promise of the second type, location awareness. We can make even bigger improvements by considering the other levels, i.e., context and goal awareness.

Context Awareness

Context awareness implies a more sophisticated system with image recognition and display capabilities. Could the paragoggles help draw attention to where in the view the PKE is most concentrated, and how those readings are trending? Of course this wouldn’t be so important when the ghost is actually visible, but if it could lead his eyes to where the ghost is most likely going to be, it would be more useful and save him even the milliseconds of an eye saccade.

A second aspect of context awareness is object or people recognition. If the goggles could recognize individual ghosts, the display could be improved with some information about this particular ghost—or its category—from a database. What’s its name? What methods have failed or worked in the past to control it? Even if it doesn’t know these things, it can provide an alert that it is an UNKNOWN ENTITY, which is spooky sounding and tells the Ghostbusters to be on high alert, since anything could happen.
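A minimal sketch of that lookup-with-fallback behavior might look like the following. The database contents and field names are invented for illustration; only the UNKNOWN ENTITY fallback is from the paragraph above.

```python
# Hypothetical sketch of the "who is that?" lookup: a recognized ghost pulls up
# its dossier; anything unrecognized is flagged as an UNKNOWN ENTITY.

GHOST_DB = {
    "slimer": {
        "name": "Slimer",
        "category": "full roaming vapor",
        "what_worked": ["proton stream"],
        "what_failed": ["asking nicely"],
    },
}

def identify(ghost_signature: str) -> dict:
    record = GHOST_DB.get(ghost_signature)
    if record is None:
        # Spooky-sounding fallback that tells the team anything could happen.
        return {"name": "UNKNOWN ENTITY", "advice": "Be on high alert."}
    return record

print(identify("slimer")["category"])     # -> full roaming vapor
print(identify("librarian")["name"])      # -> UNKNOWN ENTITY
```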

Goal awareness

Lastly, the goggles could be improved with goal awareness. The Ghostbusters aren’t birdwatchers. They’re there to capture that ugly spud. Could the goggles help guide each person as to the best time to gear up the proton packs (or do it for them), where to position themselves as well as the trap, and finally when and where to fire? Certainly someone as scatterbrained as Ray could use that kind of assistance.

Ghostbusters_binoculars_00

Perpvision

GitS-heatvision-01

The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show only spends time showing inputs without any corresponding output when the users are in the background and unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way because such a giant piece of the puzzle is missing. Is it a brain input? Is the technology agentive? Is it some hidden input like Myo’s muscle sensing? Since we don’t know the input, a regular review is kind of pointless. All I can do is list its effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.

She adjusts the view by steps to zoom closer and closer until her field of vision is filled with the two men whose conversation she hears in her earpiece. When she hears mention of Project 2501, she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she only sees these people, and not everyone else in the building below? How does she tell the algorithm that she wants to see furniture and not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there”? How did she set the audio? Why does the audio not change with each successive zoom? If they’re from separate systems, how did she combine them?

Squint gestures

If I had to speculate what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision, minus some border, and then use squint gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting. Another, the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
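Here’s a rough sketch of that gesture logic, assuming the goggles can already segment the scene into nested “logical” subjects (person, couch, room, and so on). The hold threshold, the scene list, and the exact zoom ramp are all invented; only the squint-in/widen-out mapping and the hold-for-arbitrary-zoom idea come from the text above.

```python
# Hypothetical sketch of squint-gesture zoom: short gestures snap to the next
# logical subject; a held gesture ramps the zoom continuously.

import math

SCENE = ["person", "couch", "room", "floor", "building"]   # nested subjects, small to large

class SquintZoom:
    HOLD_THRESHOLD = 0.33   # seconds before a gesture counts as "held"

    def __init__(self):
        self.level = 0            # index into SCENE; 0 = tightest framing
        self.arbitrary_zoom = 1.0

    def gesture(self, kind: str, duration: float) -> str:
        """kind is 'squint' (zoom in) or 'widen' (zoom out)."""
        direction = -1 if kind == "squint" else +1
        if duration < self.HOLD_THRESHOLD:
            # Quick gesture: snap to the next logical subject.
            self.level = min(max(self.level + direction, 0), len(SCENE) - 1)
            self.arbitrary_zoom = 1.0
            return f"framing the {SCENE[self.level]}"
        # Held gesture: zoom changes with the log of the hold time, so it moves
        # quickly at first and then eases off.
        factor = 1.0 + math.log1p(duration * 3)
        self.arbitrary_zoom *= factor if direction < 0 else 1.0 / factor
        return f"arbitrary zoom x{self.arbitrary_zoom:.2f} within the {SCENE[self.level]}"

z = SquintZoom()
print(z.gesture("widen", 0.1))    # person -> couch
print(z.gesture("widen", 0.1))    # couch -> room
print(z.gesture("squint", 1.0))   # held: zoom in continuously within the room
```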

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.
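And one last small sketch for the frustum: take the bounds of whatever she is fixating on, pad them by some fraction to keep the surrounding context, and use that as the extents of the see-through view. The flat 2D boxes and the 30% padding are simplifications for illustration.

```python
# Hypothetical sketch of gaze-driven extents: pad the fixated subject's bounds
# so the view includes the subject plus a margin of context.

from dataclasses import dataclass

@dataclass
class Box:
    x: float
    y: float
    w: float
    h: float

def fit_extents(subject: Box, context_padding: float = 0.3) -> Box:
    """Expand the subject's bounds by a padding fraction on every side."""
    pad_w = subject.w * context_padding
    pad_h = subject.h * context_padding
    return Box(subject.x - pad_w, subject.y - pad_h,
               subject.w + 2 * pad_w, subject.h + 2 * pad_h)

two_men_talking = Box(x=10.0, y=2.0, w=3.0, h=2.0)
print(fit_extents(two_men_talking))   # the pair plus a margin of room around them
```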