In Johnny Mnemonic we see two different types of binoculars with augmented reality overlays and other enhancements: Yakuz-oculars, and LoTek-oculars.
Yakuz-oculars
The Yakuza binoculars are the last to be seen but also the simpler of the two. They look just like a pair of current-day binoculars, but this is the view when the leader surveys the LoTek bridge.
I assume that the characters here are Japanese? Anyone?
In the centre is a fixed-size green reticule. At the bottom right is what looks like the magnification factor. At the top left and bottom left are numbers, using Western digits, that change as the binoculars move. Without knowing what the labels are I can only guess that they could be azimuth and elevation angles, or distance and height to the centre of the reticule. (The latter implies some sort of rangefinder.)
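If the second guess is right, the arithmetic behind those two readouts is simple: a laser rangefinder returns a slant distance to whatever sits under the reticule, and combined with the binoculars’ elevation angle that yields both horizontal distance and height. Here is a minimal sketch of that calculation, where the rangefinder, the function name, and the example numbers are all my own assumptions rather than anything shown on screen:

```python
import math

def reticule_readouts(slant_range_m: float, elevation_deg: float):
    """Derive the two guessed readouts from a hypothetical laser rangefinder.

    slant_range_m:  straight-line distance to whatever is under the reticule
    elevation_deg:  angle of the binoculars above the horizon
    Returns (horizontal_distance_m, height_m).
    """
    elevation_rad = math.radians(elevation_deg)
    horizontal_distance = slant_range_m * math.cos(elevation_rad)
    height = slant_range_m * math.sin(elevation_rad)
    return horizontal_distance, height

# e.g. surveying the LoTek bridge from 800 m away at a 12° upward angle
print(reticule_readouts(800, 12))  # ≈ (782.5 m out, 166.3 m up)
```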
In the last post we went over the Iron HUD components. There is a great deal to say about the interactions and interface, but let’s just take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can very easily move between exosuits.
Gauges
Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.
For the most part they persist as tiny icons and are thereby hard to read, but when the suit reboots in a high-altitude freefall, we get to see giant versions of them, and can read that they are:
Cut to the bottom of the Hudson River where some electrical “transmission lines” rest. Tony in his Iron Man supersuit has his palm-mounted repulsor rays configured such that they create a focused beam, capable of cutting through an iron pipe to reveal the power cables within. Once the pipe casing is removed, he slides a circular device onto the cabling. The cuff automatically closes, screws itself tight, and expands to replace the section of casing. Dim white lights burn brighter as hospital-green rings glow around the cable’s circumference. His task done, he underwater-flies away, heading up past the southern tip of Manhattan to Stark Tower.
It’s a quick scene that sets up the fact that they’re using Tony’s arc reactor technology to liberate Stark Tower from the electrical grid (incidentally implying that the Avengers will never locate a satellite headquarters anywhere in Florida. Sorry, Jeb.) So, since it’s a quick scene, we can just skip the details and interaction design issues, right?
Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens to the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barreled rifles mounted to posts. In these helmets they have full audio access to a command and control center that gives orders and receives confirmations.
The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.
These helmets also provide the snipers an augmented reality display that grants high powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of "gyroscopic stabilization" and a red dot that appears in the crosshairs when the target has been held for a full second. The reticles do not provide any "layman" information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
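The red dot is the only element of the reticle that implies any logic: it appears once the crosshairs have stayed on the target for a full second. Here is a sketch of how that hold-to-confirm check might work, assuming the helmet already supplies a per-frame on-target flag; the class name, timing source, and threshold handling are my own illustration, not anything specified in the film:

```python
import time

HOLD_SECONDS = 1.0  # the film implies a one-second hold

class SteadyAimIndicator:
    """Shows the red dot only after the crosshairs have held the target
    continuously for HOLD_SECONDS."""

    def __init__(self):
        self._hold_started = None  # timestamp when the target was first held

    def update(self, on_target: bool, now=None) -> bool:
        """Call once per frame; returns True when the red dot should be drawn."""
        now = time.monotonic() if now is None else now
        if not on_target:
            self._hold_started = None  # any drift off target resets the timer
            return False
        if self._hold_started is None:
            self._hold_started = now   # start timing the hold
        return (now - self._hold_started) >= HOLD_SECONDS
```

Fed one boolean per frame, this lights the dot only after an uninterrupted second on target, which matches how the indicator behaves on screen.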
These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then we see a camera on the bottom of the helicopter, mounted on actuators that allow it to move with a high (though not full) degree of freedom and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.
This could be used for general surveillance, of course, but the collection of technologies that we see here raises the question: If Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?
Of course you want a human to make the choice to pull a trigger or activate a weapon, because we should not leave such a terrible, deadly, and ethically fraught decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.
This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: how are heroes heroic if the machines can do the hard work? This interface bypasses the conflict by retreating to simple augmentation rather than an agentive solution. Real-world designers will have to answer the question more directly.
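To make the agentive alternative concrete: such a system would hand detection, tracking, and aiming to the machine and reserve only the fire decision for the human. The sketch below is entirely my own illustration of that division of labor, not anything depicted in the film:

```python
from dataclasses import dataclass

@dataclass
class TrackedTarget:
    target_id: str
    aimed: bool        # the machine reports the weapon is on target
    confidence: float  # the machine's tracking confidence, 0..1

def machine_ready(target: TrackedTarget, min_confidence: float = 0.9) -> bool:
    """The automated side: everything up to, but not including, firing."""
    return target.aimed and target.confidence >= min_confidence

def fire(target: TrackedTarget, human_authorized: bool) -> bool:
    """Firing requires BOTH the machine's readiness and an explicit human
    decision; the machine never pulls the trigger on its own."""
    return machine_ready(target) and human_authorized

target = TrackedTarget("target-01", aimed=True, confidence=0.97)
print(fire(target, human_authorized=False))  # False: ready, but no human go-ahead
print(fire(target, human_authorized=True))   # True: machine ready and human decides
```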
Section 6 stations a spider tank, hidden under thermoptic camouflage, to guard Project 2501. When Kusanagi confronts the tank, we see a glimpse of the video feed from its creepy, metal, recessed eye. This view is a screen-green image, overlaid with two reticles. The larger one, with radial ticks, shows where the weapon is pointing, while the smaller one tracks the target.
I have often used the discrepancy between a weapon reticle and a target reticle to point out how far behind Hollywood is on the notion of agentive systems in the real world, but for the spider tank it’s very appropriate. The image processing is likely to be much faster than the actuators controlling the tank’s position and orientation, and the two reticles illustrate what the tank’s AI is working on. That said, I cannot work out why there is only one weapon reticle when the tank has two barrels that move independently.
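The reason the two reticles drift apart is plausibly just physics: the vision system can re-acquire the target every frame, while the turret can only slew so many degrees per second. Here is a toy simulation of that gap, with the frame rate and slew-rate limit invented for illustration:

```python
FRAME_RATE_HZ = 30          # vision updates the target reticle every frame
MAX_SLEW_DEG_PER_S = 40.0   # invented limit on how fast the weapon mount can turn

def step_weapon_reticle(weapon_deg: float, target_deg: float) -> float:
    """Move the weapon reticle toward the target reticle, limited by slew rate."""
    max_step = MAX_SLEW_DEG_PER_S / FRAME_RATE_HZ
    error = target_deg - weapon_deg
    step = max(-max_step, min(max_step, error))  # clamp to one frame's worth of travel
    return weapon_deg + step

# A target that jumps 30 degrees is only caught up with over many frames,
# so the two reticles visibly separate and then converge, as on the tank's screen.
weapon_deg, target_deg = 0.0, 30.0
for _ in range(12):
    weapon_deg = step_weapon_reticle(weapon_deg, target_deg)
print(round(weapon_deg, 1))  # 16.0 after 12 frames; the weapon is still catching up
```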
When the spider tank expends all of its ammunition, Kusanagi activates her thermoptic camouflage, and the tank begins to search for her. It switches from its protected white camera to a big-lens blue camera. On its processing screen, the targeting reticle disappears, and a smaller reticle appears with concentric, blinking white arcs. As Kusanagi strains to wrench open plating on the tank, her camouflage is compromised, allowing the tank to focus on her (though, curiously, it does not do anything like try to shake her off or slam her into a wall). As its confidence grows, more arcs appear, thicken, and circle the center.
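The arcs read like a straightforward confidence meter: as the tracker gets a better fix on the partially de-cloaked target, more arcs are drawn and they get thicker. Here is a guess at the mapping, where the thresholds, counts, and pixel sizes are invented; the film only shows that both quantities grow with confidence:

```python
def arcs_for_confidence(confidence: float, max_arcs: int = 4):
    """Map a 0..1 tracking confidence to (number of arcs, arc thickness in px)."""
    confidence = max(0.0, min(1.0, confidence))
    n_arcs = 1 + int(confidence * (max_arcs - 1))  # at least one arc while searching
    thickness = 1 + int(confidence * 4)            # 1 px up to 5 px at full confidence
    return n_arcs, thickness

for c in (0.1, 0.4, 0.7, 1.0):
    print(c, arcs_for_confidence(c))
# 0.1 -> (1, 1), 0.4 -> (2, 2), 0.7 -> (3, 3), 1.0 -> (4, 5)
```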
The amount of information on the augmentation layer is arbitrary, since it’s a machine using it and there are certainly more processes going on than what is visualized. If this were for a human user, more or less augmentation might be necessary, depending on the amount of training they have and the goal awareness of the system. Certainly actual crosshairs in the weapon reticle would help a human aim it very precisely.