Ectogoggles

When the Ghostbusters are called to the Sedgewick Hotel, they track a ghost called Slimer from his usual haunt on the 12th floor to a ballroom. There Ray dons a pair of asymmetrical goggles that show him information about the “psycho-kinetic energy (PKE) valences” in the area. (The Ghostbusters wiki—and of course there is such a thing—identifies these alternately as paragoggles or ectogoggles.) He uses the goggles to peek from behind a curtain to look for Slimer.

Ghostbusters_binoculars_02

Far be it from this humble blog to try to reverse-engineer what PKE valences actually are, but let’s presume it generally means ghosts and ghost-related activity. Here’s an animated gif of the display for your ghostspotting pleasure.

Ghostoculars_gif

As he scans the room, we see a shot from his perspective. Five outputs augment the ordinary view the goggles offer.

1. A plan position indicator (like what you see on a radar) sweeps around and around in the upper left-hand corner, but never displays anything (even when Slimer appears).

2. A bar graph on the left side that wavers up and down until Slimer is spotted, when it jumps to maximum. The bar graph adheres to the basic visual principle of “up means more.” The bar graph is colored with a stoplight gradient, with red at the bottom, yellow in the middle, and a bright screen-green at the top. Note that the graph builds from the bottom until it hits maximum, when its glow slides to the top to fully illuminate only the uppermost block. This is a special “max” mode that strongly draws the user’s attention.

3. There is a 7-segment red LED number display just below the graph, which you might think is a numerical version of the same data, but we only see it tick steadily from 03094 down to 03051 during the first scan. Then, after a cutaway to Ray’s face, we see it drop to 01325 and increment steadily until it hits 01333, where it holds steady and begins to blink. It hits this maximum about half a second before the graph jumps to its max.

graph

4. In the very lower left is a red mode label reading “KER,” which blinks until the numbers hit 01333 in the second sequence, when KER disappears and is replaced with a steadily-glowing green “MAX.”

What the heck is KER? I don’t think there’s any diegetic answer. Ker might be an extradiegetic shout-out to Rick Kerrigan, who was production supervisor for Entertainment Effects Group / Boss Film Studios for the film, but that’s just a guess. Otherwise I got nothin’. Anyone else?

5. In the lower right is a blurry light that blinks red until Slimer is spotted, when it blinks the same screen-green as the bar graph, sweep, and MAX label.
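The five outputs above all appear to key off a single threshold. As a thought experiment, here’s a minimal sketch of that display logic in Python. The threshold value (01333) and the max-mode behaviors come from what’s on screen; the function and field names are my own inventions, since the film gives no diegetic spec.

```python
# Hypothetical sketch of the ectogoggles' display logic, reconstructed
# from on-screen behavior. Field names and structure are assumptions.

PKE_MAX = 1333  # the reading at which the display flips to MAX mode on screen

def display_state(reading: int) -> dict:
    """Map a raw PKE reading to the on-screen outputs."""
    at_max = reading >= PKE_MAX
    return {
        # bar graph: builds from the bottom; in max mode only the
        # topmost block glows, to strongly draw the user's attention
        "bar_graph": "top_only" if at_max else min(reading / PKE_MAX, 1.0),
        "led_readout": f"{reading:05d}",   # 7-segment, zero-padded
        "led_blinking": at_max,            # steady until max, then blinks
        "mode_label": ("MAX", "green", "steady") if at_max
                      else ("KER", "red", "blinking"),
        "corner_light": "green" if at_max else "red",
    }
```

Run with the readings we see on screen, and the 01325 reading leaves the label at a blinking red KER, while 01333 flips everything to max mode at once.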

Narratively, this is a tone interface: it doesn’t add anything to the plot, and only helps us experience and understand how the busters do their busting. Since it’s a tone interface, we can make changes that improve believability without affecting the plot.

Ghostbusters_binoculars_08

How to better support busting

The immediate improvements you could make to this as a “real” ghostbusting tool are fairly obvious:

  • Make the plan position indicator, you know, work.
  • Have the numbers match the graph, or, if they’re actually measuring different things, put the LED display on the other side of the view.
  • I’d change the graph color indicating no-PKE to black or dark gray. Red often connotes danger, and really, if there’s no PKE, you’re safe from the supernatural. Plus the blackbody radiation spectrum has a more physical reference and is therefore more immediate.
  • You could even lose the bar diagram—which requires looking away from the view—and replace it with a line around the view that changes color similarly. This puts the augmentation in the periphery.
  • Lose the distracting blinking red light entirely. It draws attention at a time when the Buster’s eyes need to be on the view, and it’s just duplicating information already provided in a better way by the graph.
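To make the peripheral-ring idea concrete, here’s a small sketch of the color mapping it implies. The specific color stops are my assumptions, chosen to match the ramp described above: black for no PKE (you’re safe), warming through red and yellow, up to the display’s screen-green at maximum.

```python
# Rough sketch of the peripheral-ring color ramp proposed above.
# Color stops are assumptions; only the ramp's endpoints come from the post.

RAMP = [
    (0.00, (0, 0, 0)),       # no PKE: black, i.e. no augmentation at all
    (0.40, (255, 0, 0)),     # red: something's stirring
    (0.75, (255, 255, 0)),   # yellow: getting close
    (1.00, (32, 255, 64)),   # screen-green: PKE at maximum
]

def ring_color(level: float) -> tuple:
    """Interpolate an RGB color for a normalized PKE level in [0, 1]."""
    level = max(0.0, min(1.0, level))
    for (lo, c_lo), (hi, c_hi) in zip(RAMP, RAMP[1:]):
        if level <= hi:
            t = (level - lo) / (hi - lo)
            return tuple(round(a + t * (b - a)) for a, b in zip(c_lo, c_hi))
    return RAMP[-1][1]
```

Because the ring sits at the edge of the view, the buster reads it preattentively, with no need to glance away from where the ghost might appear.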

But we can do those improvements better. In the augmented reality chapter of the book, I identified levels of awareness for these devices. The ectogoggles are an example of the simplest type, sensor display, with the sweep giving an unfulfilled promise of the second type, location awareness. We can make even bigger improvements by considering the other levels, i.e. context and goal awareness.

Context awareness

Context awareness implies a more sophisticated system with image recognition and display capabilities. Could the paragoggles help draw attention to where in the view the PKE is most concentrated, and how those readings are trending? Of course this wouldn’t be so important when the ghost is actually visible, but if it could lead his eyes to where the ghost is most likely going to be, it would be more useful and save him even the milliseconds of an eye saccade.

A second aspect of context awareness is object or people recognition. If the goggles could recognize individual ghosts, the display could be improved with some information about this particular ghost—or its category—from a database. What’s its name? What methods have failed or worked in the past to control it? Even if it doesn’t know these things, it can provide an alert that it is an UNKNOWN ENTITY, which is spooky sounding and tells the Ghostbusters to be on high alert since anything could happen.
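A sketch of that lookup might be as simple as the following. The database and its fields are hypothetical; only the UNKNOWN ENTITY fallback comes from the proposal above (Slimer’s classification is per the film’s dialogue).

```python
# Hypothetical ghost-recognition lookup. GHOST_DB and its record fields
# are invented for illustration; only the UNKNOWN ENTITY fallback is
# from the post itself.

GHOST_DB = {
    "slimer": {
        "name": "Slimer",
        "category": "class 5 full roaming vapor",  # per the film's dialogue
        "effective_methods": ["proton stream", "trap"],
    },
}

def identify(ghost_id: str) -> dict:
    """Return what the goggles know about a recognized ghost."""
    record = GHOST_DB.get(ghost_id)
    if record is None:
        # spooky-sounding alert: be on high alert, anything could happen
        return {"name": "UNKNOWN ENTITY", "category": None,
                "effective_methods": []}
    return record
```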

Goal awareness

Lastly, they could be improved with goal awareness. The Ghostbusters aren’t birdwatchers. They’re there to capture that ugly spud. Can it help guide each person as to the best time to gear up the proton packs (or do it for them), where to position themselves as well as the trap, and finally when and where to fire? Certainly someone as scatterbrained as Ray could use that kind of assistance.

Ghostbusters_binoculars_00

Section 6’s crappy sniper tech

GitS-Drone_gunner-01

GitS-Drone_gunner-12

Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens to the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barreled rifles mounted to posts. In these helmets they have full audio access to a command and control center that gives orders and receives confirmations.

GitS-profile-06

The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.

GitS-Drone_gunner-06

GitS-Drone_gunner-07

These helmets also provide the snipers an augmented reality display that grants high-powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of "gyroscopic stabilization" and a red dot that appears in the crosshairs when the target has been held for a full second. The reticles do not provide any "layman" information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
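The red dot’s behavior is a simple dwell timer: it appears only once the crosshairs have stayed on the target for a full second. A minimal sketch, assuming the one-second dwell from the scene (the class and timing interface are my inventions):

```python
# Hypothetical dwell-timer for the reticle's red dot: shown only after
# the crosshairs have been held on-target for a full second.

DWELL_SECONDS = 1.0  # hold time seen in the film

class RedDot:
    def __init__(self):
        self.held_since = None  # timestamp when the target was acquired

    def update(self, on_target: bool, now: float) -> bool:
        """Return True when the red dot should be drawn at time `now`."""
        if not on_target:
            self.held_since = None  # losing the target resets the timer
            return False
        if self.held_since is None:
            self.held_since = now
        return now - self.held_since >= DWELL_SECONDS
```

Note that the design resets on any loss of the target, which is the conservative choice for a lethal system: the sniper must re-earn the firing signal rather than inherit a stale one.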

These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then, we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with a high (though not full) freedom of movement and precision. What’s it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.

GitS-Drone_gunner-02

This could be used for general surveillance of course, but the collection of technologies that we see here raises the question: If Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?

Of course you want a human to make the choice to pull a trigger/activate a weapon, because we should not leave such a terrible and deadly ethical decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.

This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: How are heroes heroic if the machines can do the hard work? This interface retreats to simple augmentation rather than an agentive solution to bypass the conflict. Real-world designers will have to answer it more directly.

R-3000 “Spider tank” vision

GitS-spidertank-22

Section 6 stations a spider tank, hidden under thermoptic camouflage, to guard Project 2501. When Kusanagi confronts the tank, we see a glimpse of the video feed from its creepy, metal, recessed eye. This view is a screen-green image, overlaid with two reticles. The larger one, with radial ticks, shows where the weapon is pointing, while the smaller one tracks the target.

I have often used the discrepancy between a weapon reticle and a target reticle to point out how far behind Hollywood is on the notion of agentive systems in the real world, but for the spider tank it’s very appropriate. The image processing is likely to be much faster than the actuators controlling the tank’s position and orientation. The two reticles illustrate what the tank’s AI is working on. This said, I cannot work out why there is only one weapon reticle when the tank has two barrels that move independently.

GitS-spidertank-13

GitS-spidertank-09

When the spider tank expends all of its ammunition, Kusanagi activates her thermoptic camouflage, and the tank begins to search for her. It switches from its protected white camera to a big-lens blue camera. On its processing screen, the targeting reticle disappears, and a smaller reticle appears with concentric, blinking white arcs. As Kusanagi strains to wrench open plating on the tank, her camouflage is compromised, allowing the tank to focus on her (though curiously, not to do anything like try to shake her off or slam her into the wall or something). As its confidence grows, more arcs appear, thicken, and circle the center.

The amount of information on the augmentation layer is arbitrary, since it’s a machine using it and there are certainly other processes going on than what is visualized. If this were for a human user, there might be more or less augmentation necessary, depending on the amount of training they have and the goal awareness of the system. Certainly an actual crosshair in the weapon reticle would help aim it very precisely.
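The confidence arcs suggest a simple visualization rule: more tracking confidence means more and thicker arcs. Here’s a guessed model of that rule; the arc count and thickness formula are illustrative assumptions, not anything the film specifies.

```python
# Guessed model of the spider tank's confidence arcs: as tracking
# confidence rises, more arcs are drawn and each gets thicker.
# MAX_ARCS and the thickness formula are assumptions for illustration.

MAX_ARCS = 4

def confidence_arcs(confidence: float) -> list:
    """Return (radius_index, thickness_px) pairs for each arc to draw."""
    confidence = max(0.0, min(1.0, confidence))
    n_arcs = max(1, round(confidence * MAX_ARCS))  # always show at least one
    thickness = 1 + round(confidence * 3)          # 1 px faint, up to 4 px bold
    return [(i, thickness) for i in range(n_arcs)]
```

Even for a machine-facing display, a rule like this makes the feed legible to any human reviewing the footage later, which may be the real diegetic reason it exists at all.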

GitS-spidertank-06