As Jack searches early in the film for Drone 172, he parks his bike next to a sinkhole in the desert and cautiously peers into it. As he does so, he is being observed from afar by a sinister-looking Scav through a set of asymmetrical…well, it’s not exactly right to call them binoculars.
They look kind of binocular, but that term technically refers to a device that presents two slightly offset images independently to each eye, such that the user perceives stereopsis: a single field seen in 3D. But a quick shot from the Scav’s perspective shows that this is not what is displayed at all.
This device’s two lenses take in different bands of the spectrum and display them side by side, with a little (albeit inscrutable) augmentation at the periphery. The larger display on the left appears to be visible light, and the smaller one on the right appears, based on the strong highlight around the bike’s engine and Jack’s body, to be infrared, or heat.
At this point in the story, the audience is meant to believe that the Scavs are still an evil alien race, and this interface helps convey that. It seems foreign, mysterious. All of its typographic elements (letters, numbers, symbols) are squeezed into little more than 4×4 grids of pixels, so we’re not even sure whether this is a human language. So, fine, this interface serves its narrative purpose here. “Oh my,” we must think, “…he is being watched. But by what? And why?”
But after we find out [again, spoilers] that the Scavs are Terran survivors of the Tet attack, we can look at this device again knowing that the interface is meant for humans, and with that in mind it does not fare well.
Yes, the periphery is augmented, and that’s good, but the information is unusably small, and the split forces the user to glance back and forth between the two images to piece the disparate information together.
Two views reduce the amount of information
It almost goes without saying, but let’s say it: by dividing the available display into two side-by-side halves, the device gives the Scav roughly a quarter of the visual information it would with a single view. Each view gets half the width, and preserving the scene’s aspect ratio halves its effective height as well, so each view carries about a quarter of the pixels. And since the purpose of the device is to magnify, this is a significant loss.
Two views add work
In this scene, which is quite barren, it’s easy to tell that the two warm objects on the right correspond to the only two objects on the left. But imagine looking at a cityscape, where the bomb (hot) looks very much like everything else around it in visible light: you can see how piecing those two disparate views together in your head becomes a problem.
This is made worse when the views aren’t even positionally synchronized. In the gif below you’ll see that when you superimpose them, they drift away from each other, making comparison between the two even more difficult. There may be diegetic reasons why this happens, but rather than reverse-engineer them, let’s just say it makes the device harder to use.
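If the device’s designers wanted to keep the two feeds registered, one standard technique is phase correlation, which recovers the pixel offset between two overlapping frames. Here is a minimal sketch, assuming same-sized grayscale numpy arrays; the function name and the pure-translation assumption are mine, not anything from the film.

```python
import numpy as np

def estimate_offset(ref, moved):
    """Estimate the integer (dy, dx) translation such that
    moved ~= np.roll(ref, (dy, dx), axis=(0, 1)), via phase correlation."""
    # cross-power spectrum of the two frames
    R = np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))
    R /= np.maximum(np.abs(R), 1e-12)  # keep only phase information
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrap-around peak indices to signed shifts
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dy), int(dx)
```

With the offset in hand, the device could shift one feed before display so the two views at least line up.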
The blur and low-contrast don’t help
Note that the thermal view is blurrier and lower-contrast than the visible one. That might be an artifact of the diegetic tech, but it would confound quick mapping in a complex image. Even if the sensor only provides a lower-resolution image, the device should at least perform some auto-leveling and sharpening on the live feed to make it easier to use.
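Those two fixes are cheap, well-understood operations. A minimal sketch of each, assuming 2-D grayscale numpy frames (the function names, percentiles, and sharpening amount here are illustrative choices, not anything specified by the film’s tech):

```python
import numpy as np

def auto_level(img, low_pct=1.0, high_pct=99.0):
    """Contrast-stretch so the given percentiles map to the full 0-255 range."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    if hi <= lo:  # flat image; nothing to stretch
        return img.astype(np.uint8)
    stretched = (img.astype(np.float32) - lo) / (hi - lo)
    return (np.clip(stretched, 0.0, 1.0) * 255).astype(np.uint8)

def sharpen(img, amount=1.5):
    """Unsharp mask: add back the difference from a blurred copy."""
    f = np.pad(img.astype(np.float32), 1, mode="edge")
    # simple 3x3 box blur built from shifted copies
    blurred = sum(
        f[1 + dy : f.shape[0] - 1 + dy, 1 + dx : f.shape[1] - 1 + dx]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    g = img.astype(np.float32)
    return np.clip(g + amount * (g - blurred), 0, 255).astype(np.uint8)
```

Run per frame, this would pull a murky low-contrast thermal feed up to something legible without any new sensor hardware.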
Having one view scaled makes it worse
The scaling makes the mapping of items from one screen to the other more difficult. Again, in the Oblivion example, there are two objects on the left and two objects on the right, and the “horizons” on which they walk are roughly aligned, so it’s trivial to track one to the other. But if the image is highly repetitive—say, for example, a building—the scaling would make it difficult to map the useful point-of-interest on the right to the best-resolution image on the left. Quick…in which window is the sniper?
A more direct solution
Better would be a live augmentation of a single visible-light image. Visible light is the best anchor to the real world, with augmentation conveying the specialness of objects in the scene. In the comp below, you’ll see a single image where the “hot spots” have been augmented with a soft red and some trend lines in white. That red color is not arbitrary, by the way: it builds on the human experience of black-body radiation, the association of red with hot. This saves the (quite human) user both the physical work of glancing back and forth and the extra cognitive processing needed to recall that green/yellow means heat.
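The core of that augmentation is a simple per-frame compositing step. Here is a minimal sketch, assuming the thermal frame is already co-registered with the visible one; the function name, threshold, and tint strength are illustrative assumptions.

```python
import numpy as np

def augment_hot_spots(rgb, thermal, threshold=200, tint_strength=0.6):
    """Blend a soft red tint into an RGB frame wherever the aligned
    thermal frame exceeds a heat threshold.

    rgb:     H x W x 3 uint8 visible-light frame
    thermal: H x W uint8 thermal frame, co-registered with rgb
    """
    out = rgb.astype(np.float32)
    hot = thermal >= threshold            # boolean heat mask
    red = np.array([255.0, 0.0, 0.0])     # red == hot
    # blend the tint into hot pixels only, leaving the rest untouched
    out[hot] = (1 - tint_strength) * out[hot] + tint_strength * red
    return np.clip(out, 0, 255).astype(np.uint8)
```

A single augmented feed like this keeps the visible-light anchor while the heat signal rides on top of it, rather than in a second window the user has to mentally merge.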