Doc Brown uses a pair of specialized binoculars to verify that Marty Jr. is at the scene according to plan. He flips them open and puts his eyes up to them. When we see his view, a reticle of green corners frames the closest individual in view. In the lower right-hand corner are three measurements: DIST, gamma, and XYZ. These numbers change continuously. A small pair of graphics at the bottom illustrates whether the reticle is to the left or right of center.
As discussed in Chapter 8 of Make It So, augmented reality systems like this can have several awarenesses, and this one has some sensor display and people awareness. I’m not sure what use the sensor data is to Doc, and the people detector seems unable to track a single individual consistently.
So, a throwaway interface that doesn’t help much beyond looking gee-whiz (1989).
When we first see the HUD, Tony is donning the Iron Man mask. Tony asks, “JARVIS, you there?” To which JARVIS replies, “At your service, sir.” Tony tells him to engage the heads-up display, and we see the HUD initialize. It is a dizzying mixture of blue wireframe motion graphics. Some imply system functions, such as the reticle that pinpoints Tony’s eye. Most are small, dashboard-like gauges that remain small and in Tony’s peripheral vision while their information is not needed, and become larger and more central when it is. These features are catalogued in another post, but we learn about them through two points of view: a first-person view, which shows us what Tony sees as if we were there, donning the mask in his stead, and a second-person view, which shows us Tony’s face overlaid against a dark background with floating graphics.
This post is about that first-person view. Specifically, it’s about the visual design and the four awarenesses it displays.
In the Augmented Reality chapter of Make It So, I identified four types of awareness seen in the survey for Augmented Reality displays:
The Iron Man HUD illustrates all four, and they provide a useful framework for describing and critiquing the first-person view.
In the last post we went over the Iron Man HUD components. There is a great deal to say about the interactions and interface, but let’s take a moment to recount everything that the HUD does over the Iron Man movies and The Avengers. Keep in mind that just as there are many iterations of the suit, there can be many iterations of the HUD, but since it’s largely display software controlled by JARVIS, the functions can move easily between exosuits.
Along the bottom of the HUD are some small gauges, which, though they change iconography across the properties, are consistently present.
For the most part they persist as tiny icons and are thereby hard to read, but when the suit reboots in a high-altitude freefall, we get to see giant versions of them and can read that they are:
The last Scav tech (and the last review of tech in the nerdsourced reviews of Oblivion) is a short one. During the drone assault on the Scav compound, we get a glimpse of the reticle used by the rebel Sykes as he tries to target a weak spot in a drone’s backside.
The reticle has a lot of problems, given Sykes’ task. The data on the periphery is too small to be readable. There are some distracting lines from the augmentation boxes which, if they’re just pointing to static points along the hairline, should be removed. The grid doesn’t seem to serve much purpose. And there aren’t good differentiations among the ticks to be able to quickly subitize subtensions. (Read: tell how wide a thing is compared to the tick marks. You know, like with a ruler.)
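Those tick differentiations matter because subtensions are what let a shooter estimate range from a target of known size. As a minimal sketch of the underlying arithmetic (using real-world milliradian conventions, not anything shown in the film, and with illustrative numbers):

```python
# Sketch: how graduated reticle ticks ("subtensions") support range
# estimation. One milliradian (mil) subtends 1/1000 of the range.
MIL = 0.001

def range_from_subtension(target_size_m: float, mils_spanned: float) -> float:
    """Estimate distance to a target of known size from how many mil
    ticks it spans in the reticle: range = size / (mils * 0.001)."""
    return target_size_m / (mils_spanned * MIL)

# An (assumed) 2 m wide drone port spanning 4 mils is 500 m away.
print(range_from_subtension(2.0, 4.0))  # 500.0
```

Without clearly differentiated ticks, the `mils_spanned` reading is guesswork, which is exactly the failure being criticized above.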
The reticle certainly looks sci-fi, but real-world utility seems low.
The nicest and most surprising thing, though, is that the bullseye is the right shape and size for the thing he’s targeting. Whatever that circle thing is on the drone (a thermal exhaust port, which seem to be ubiquitously weak in spherical tech), this reticle seems to be custom-shaped to help target it. This may be giving it a lot of credit, but in a bit of apologetics: what if it had a lot of goal awareness, and adjusted the bullseye to match the thing he was targeting? Could it take on a tire shape to disable a car? Or a patella shape to help incapacitate a human attacker? That would be a very useful reticle feature.
Jack lands in a ruined stadium to do some repairs on a fallen drone. After he’s done, the drone takes a while to reboot, so while he waits, Jack’s mind drifts to the stadium and the memories he has of it.
Present information as it might be shared
Vika is in comms with Jack when she notices the alarm signal from the desktop interface. Her screen displays an all-caps red overlay reading ALERT and a diamond overlaying the unidentified object careening toward him. She yells, “Contact! Left contact!” at Jack.
As Jack hears Vika’s warning, he turns to look, drawing his pistol reflexively as he crouches. While the weapon is loading, he notices that the cause of the warning was just a small, not-so-hostile dog.
As Vika is looking at the radar and verifying visuals on the dispatched drones with Jack, the symbols for drones 166 and 172 begin flashing red. An alert begins sounding, indicating that the two drones are down.
Vika wants to send Jack to drone 166 first. To do this, she sends Jack the drone’s coordinates by pressing and holding the drone symbol for 166, at which point its coordinates are displayed. She then drags the coordinates with one finger to the Bubbleship symbol and releases. The coordinates immediately appear on Jack’s HUD as a target area showing the direction he needs to go.
When conducting reconnaissance on the bug home Planet P, Rico pauses to scan the nearby mountain crest with a pair of Federation binoculars. They feature two differently-sized objective lenses.
We get a POV shot for him and get to see the overlay. It includes a range-finding reticle and two 7-segment readouts in the lower corners. It looks nifty, but it’s missing some important things.
Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens on the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barrel rifles mounted to posts. Through these helmets they have full audio access to a command-and-control center that gives orders and receives confirmations.
The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.
These helmets also provide the snipers an augmented reality display that grants high-powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of “gyroscopic stabilization” and a red dot that appears in the crosshairs when the target has been held for a full second. The reticles do not provide any “layman” information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
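The dwell-to-lock behavior of that red dot is simple to model: show the indicator only after the crosshairs have held the target continuously for a full second, resetting on any miss. A minimal sketch, where the class name, the frame rate, and the exact 1-second threshold are assumptions:

```python
# Hypothetical model of the red-dot lock: the dot appears only after
# a full second of uninterrupted dwell on the target.
FRAME_RATE = 30               # assumed display refresh, frames/second
DWELL_FRAMES = FRAME_RATE     # one full second on target

class LockIndicator:
    def __init__(self):
        self.frames_on_target = 0

    def update(self, on_target: bool) -> bool:
        """Advance one frame; any frame off target resets the dwell
        counter. Returns True when the red dot should be shown."""
        self.frames_on_target = self.frames_on_target + 1 if on_target else 0
        return self.frames_on_target >= DWELL_FRAMES

lock = LockIndicator()
results = [lock.update(True) for _ in range(FRAME_RATE)]
print(results[-2], results[-1])  # False True: dot appears at the 1 s mark
```

The reset-on-miss rule is what makes the dot meaningful: it signals a sustained, stable hold rather than a momentary crossing of the target.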
These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with a high (though not full) freedom of movement and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim; their eyes are in the direction of their weapons.
This could be used for general surveillance, of course, but the collection of technologies we see here raises the question: if Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?
Of course you want a human to make the choice to pull a trigger or activate a weapon, because we should not leave such a terrible, ethical, and deadly decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.
This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: how are heroes heroic if the machines can do the hard work? This interface sidesteps the conflict by retreating to simple augmentation rather than an agentive solution. Real-world designers will have to answer the question more directly.
Section 6 stations a spider tank, hidden under thermoptic camouflage, to guard Project 2501. When Kusanagi confronts the tank, we see a glimpse of the video feed from its creepy, recessed metal eye. The view is a screen-green image overlaid with two reticles: a larger one with radial ticks showing where the weapon is pointing, and a smaller one tracking the target.
I have often used the discrepancy between a weapon reticle and a target reticle to point out how far behind Hollywood is on the notion of agentive systems in the real world, but for the spider tank it’s very appropriate. The image processing is likely to be much faster than the actuators controlling the tank’s position and orientation, and the two reticles illustrate what the tank’s AI is working on. That said, I cannot work out why there is only one weapon reticle when the tank has two barrels that move independently.
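The perception-versus-actuation gap behind the two reticles can be sketched in a few lines: the target reticle jumps instantly with each vision update, while the weapon reticle can only slew toward it at a limited rate. A minimal sketch, where the per-frame slew gain and the scalar one-axis model are assumptions:

```python
# Sketch of why two reticles make sense for a machine: the vision
# system updates the target reticle instantly, while the weapon
# reticle lags behind, slewed by the actuators. One axis, arbitrary
# units; the 0.2 gain is an assumed per-frame slew fraction.
SLEW_GAIN = 0.2  # fraction of the remaining error closed each frame

def step_weapon_reticle(weapon: float, target: float) -> float:
    """Move the weapon reticle a fixed fraction of the way toward the
    target reticle, modeling actuator lag behind perception."""
    return weapon + SLEW_GAIN * (target - weapon)

weapon, target = 0.0, 10.0
for frame in range(5):
    weapon = step_weapon_reticle(weapon, target)
print(round(weapon, 4))  # 6.7232: still lagging the target after 5 frames
```

Displaying both positions makes the lag itself visible, which is exactly the information an AI (or an observer of one) needs while the barrels catch up.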
When the spider tank expends all of its ammunition, Kusanagi activates her thermoptic camouflage, and the tank begins to search for her. It switches from its protected white camera to a big-lens blue camera. On its processing screen, the targeting reticle disappears, and a smaller reticle appears with concentric, blinking white arcs. As Kusanagi strains to wrench open plating on the tank, her camouflage is compromised, allowing the tank to focus on her (though, curiously, not to do anything like try to shake her off or slam her into a wall). As its confidence grows, more arcs appear, thicken, and circle the center.
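That arc display amounts to quantizing a continuous confidence score into a discrete visual level. A minimal sketch of the mapping, where the function name, the 0–1 score, and the four-arc maximum are assumptions (the film shows only that arcs accumulate as confidence grows):

```python
# Hypothetical mapping from a detection-confidence score to the number
# of concentric arcs the tank draws around its search reticle.
MAX_ARCS = 4  # assumed maximum number of arcs

def arcs_for_confidence(confidence: float) -> int:
    """Quantize a 0-1 confidence score into how many arcs to draw."""
    confidence = max(0.0, min(1.0, confidence))  # clamp out-of-range scores
    return int(confidence * MAX_ARCS)

print([arcs_for_confidence(c) for c in (0.1, 0.4, 0.7, 1.0)])  # [0, 1, 2, 4]
```

Quantizing like this trades precision for glanceability, which suits a display meant to be read as a shape rather than a number.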
The amount of information on the augmentation layer is arbitrary, since a machine is using it, and there are certainly other processes going on beyond what is visualized. If this were for a human user, more or less augmentation might be necessary, depending on the user’s training and the goal awareness of the system. Certainly an actual crosshair in the weapon reticle would help aim it very precisely.