Disclosure (1994)

Our next 3D file browsing system is from the 1994 film Disclosure. Thanks to site reader Patrick H Lauke for the suggestion.

Like Jurassic Park, Disclosure is based on a Michael Crichton novel, although this time without any dinosaurs. (Would-be scriptwriters should compare the relative success of these two films when planning a study program.) The plot concerns corporate infighting within Digicom, a manufacturer of high tech CD-ROM drives—it was the 1990s—and virtual reality systems. Tom Sanders, the executive in charge of the CD-ROM production line, is being set up to take the blame for manufacturing failures that are really the fault of cost-cutting measures by rival executive Meredith Johnson.

The Corridor: Hardware Interface

The virtual reality system is introduced at about 40 minutes, using the narrative device of a product demonstration within the company to explain to the attendees what it does. The scene is nicely done, conveying all the important points we need to know in two minutes. (To be clear, some of the images used here come from a later scene in the film, but it’s the same system in both.)

The process of entangling yourself with the necessary hardware and software is quite distinct from interacting with the VR itself, so let’s discuss these separately, starting with the physical interface.

Tom wearing VR headset and one glove, being scanned. Disclosure (1994)

In Disclosure the virtual reality user wears a headset and one glove, all connected by cables to the computer system. Like most virtual reality systems, the headset is responsible for visual display, audio, and head movement tracking; the glove for hand movement and gesture tracking. 

There are two “laser scanners” on the walls. These are the planar blue lights, which scan the user’s body at startup. After that they track body motion, although since the user still has to wear a glove, the scanners presumably just track approximate body movement and orientation without fine detail.

Lastly, the user stands on a concave hexagonal plate covered in embedded white balls, which allows the user to “walk” on the spot.

Closeup of user standing on curved surface of white balls. Disclosure (1994)

Searching for Evidence

The scene we’re most interested in takes place later in the film, the evening before a vital presentation which will determine Tom’s future. He needs to search the company computer files for evidence against Meredith, but discovers that his normal account has been blocked from access. He knows, though, that the virtual reality demonstrator is on display in a nearby hotel suite, and that the demonstrator account has unlimited access. He sneaks into the hotel suite to use The Corridor. Tom is under a certain amount of time pressure because a couple of company VIPs and their guests are downstairs in the hotel and might return at any time.

The first step for Tom is to launch the virtual reality system. This is done from an Indy workstation, using the regular Unix command line.

The command line to start the virtual reality system. Disclosure (1994)

Next he moves over to the VR space itself. He puts on the glove but not the headset, presses a key on the keyboard (of the VR computer, not the workstation), and stands still for a moment while he is scanned from top to bottom.

Real world Tom, wearing one VR glove, waits while the scanners map his body. Disclosure (1994)

On the left is the Indy workstation used to start the VR system. In the middle is the external monitor which will, in a moment, show the third person view of the VR user as seen earlier during the product demonstration.

Now that Tom has been scanned into the system, he puts on the headset and enters the virtual space.

The Corridor: Virtual Interface

“The Corridor,” as you’ve no doubt guessed, is a three dimensional file browsing program. It is so named because the user will walk down a corridor in a virtual building, the walls lined with “file cabinets” containing the actual computer files.

Three important aspects of The Corridor were mentioned during the product demonstration earlier in the film. They’ll help structure our tour of this interface, so let’s review them now.

  1. There is a voice-activated help system, which will summon a virtual “Angel” assistant.
  2. Since the computers themselves are part of a multi-user network with shared storage, there can be more than one user “inside” The Corridor at a time.
    Users who do not have access to the virtual reality system will appear as wireframe body shapes with a 2D photo where the head should be.
  3. There are no access controls and so the virtual reality user, despite being a guest or demo account, has unlimited access to all the company files. This is spectacularly bad design, but necessary for the plot.

With those bits of system exposition complete, now we can switch to Tom’s own first person view of the virtual reality environment.

Virtual world Tom watches his hands rezzing up, right hand with glove. Disclosure (1994)

There isn’t a real background yet, just abstract streaks. The avatar hands are rezzing up; note that the right hand, wearing the glove, has a different appearance from the left. This mimics the real world, which eases the transition for the user.

Overlaid on the virtual reality view is a Digicom label at the bottom and four corner brackets which are never explained, although they do resemble those used in cameras to indicate the preferred viewing area.

To the left is a small axis indicator, the three green lines labeled X, Y, and Z. These show up in many 3D applications because, silly though it sounds, it is easy in a 3D computer environment to lose track of directions or even which way is up. A common fix for the user being unable to see anything is just to turn 180 degrees around.

We then switch to a third person view of Tom’s avatar in the virtual world.

Tom is fully rezzed up, within cloud of visual static. Disclosure (1994)

This is an almost photographic-quality image. To remind the viewers that this is in the virtual world rather than real, the avatar follows the visual convention described in chapter 4 of Make It So for volumetric projections, with scan lines and occasional flickers. An interesting choice is that the avatar also wears a “headset”, but it is translucent so we can see the face.

Now that he’s in the virtual reality, Tom has one more action needed to enter The Corridor. He pushes a big button floating before him in space.

Tom presses one button on a floating control panel. Disclosure (1994)

This seems unnecessary, but we can assume that in the future of this platform, there will be more programs to choose from.

The Corridor rezzes up, the streaks assembling into wireframe components which then slide together as the surfaces are shaded. Tom doesn’t have to wait for the process to complete before he starts walking, which suggests that this is a Level Of Detail (LOD) implementation where parts of the building are not rendered in detail until the user is close enough for it to be worth doing.
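A level-of-detail scheme like the one this shot suggests can be sketched in a few lines. This is a hypothetical illustration, not anything shown on screen: the distance thresholds and representation names are invented.

```python
# Hypothetical sketch of distance-based level-of-detail (LOD) selection:
# geometry far from the user is drawn coarsely (or not at all) and is
# refined as the user approaches. All values here are invented.
import math

# Detail levels, nearest first: (max_distance, representation)
LOD_LEVELS = [
    (10.0, "full shaded surfaces"),
    (30.0, "flat shaded"),
    (60.0, "wireframe"),
    (float("inf"), "not rendered"),
]

def pick_lod(user_pos, object_pos):
    """Choose a representation based on user-to-object distance."""
    distance = math.dist(user_pos, object_pos)
    for max_dist, representation in LOD_LEVELS:
        if distance <= max_dist:
            return representation

# A distant wall starts as wireframe and gains surfaces as Tom walks closer.
print(pick_lod((0, 0, 0), (50, 0, 0)))   # wireframe
print(pick_lod((40, 0, 0), (50, 0, 0)))  # full shaded surfaces
```

The payoff is exactly what Tom experiences: he can start walking immediately, because distant parts of the building are cheap placeholders until he gets close.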

Tom enters The Corridor. Nearby floor and walls are fully rendered, the more distant section is not complete. Disclosure (1994)

The architecture is classical, rendered with the slightly artificial-looking computer shading that is common in 3D computer environments because it needs much less computation than trying for full photorealism.

Instead of a corridor this is an entire multistory building. It is large and empty, and as Tom is walking bits of architecture reshape themselves, rather like the interior of Hogwarts in Harry Potter.

Although there are paintings on some of the walls, there aren’t any signs, labels, or even room numbers. Tom has to wander around looking for the files, at one point nearly “falling” off the edge of the floor down an internal air well. Finally he steps into one archway room entrance and file cabinets appear in the walls.

Tom enters a room full of cabinets. Disclosure (1994)

Unlike the classical architecture around him, these cabinets are very modern looking with glowing blue light lines. Tom has found what he is looking for, so now begins to manipulate files rather than browsing.

Virtual Filing Cabinets

The four nearest cabinets, according to the titles above them, are

  1. Communications
  2. Operations
  3. System Control
  4. Research Data

There are ten file drawers in each. The drawers are unmarked; a label only appears when the user looks directly at a drawer, so Tom has to move his head to centre each drawer in turn to find the one he wants.
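Gaze-activated labels like these are usually implemented by comparing the view direction against the direction to each candidate object. A minimal sketch, with invented coordinates and an invented angular threshold:

```python
# Hypothetical sketch of gaze-activated drawer labels: a label is shown only
# when the angle between the user's view direction and the direction to the
# drawer falls below a threshold. Names and the threshold are invented.
import math

def normalize(v):
    length = math.sqrt(sum(c * c for c in v))
    return tuple(c / length for c in v)

def is_gazed_at(head_pos, view_dir, drawer_pos, max_angle_deg=5.0):
    """True when the drawer lies within max_angle_deg of the view direction."""
    to_drawer = normalize(tuple(d - h for d, h in zip(drawer_pos, head_pos)))
    view = normalize(view_dir)
    cos_angle = sum(a * b for a, b in zip(view, to_drawer))
    return cos_angle >= math.cos(math.radians(max_angle_deg))

# The label appears only for the drawer straight ahead of the user.
print(is_gazed_at((0, 1.7, 0), (0, 0, -1), (0, 1.7, -2)))    # True
print(is_gazed_at((0, 1.7, 0), (0, 0, -1), (0.5, 1.7, -2)))  # False
```

This also explains why Tom has to centre each drawer in turn: with a tight threshold, only one drawer at a time can pass the test.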

Tom looks at one particular drawer to make the title appear. Disclosure (1994)

The fourth drawer Tom looks at is labeled “Malaysia”. He touches it with the gloved hand and it slides out from the wall.

Tom withdraws his hand as the drawer slides open. Disclosure (1994)

Inside are five “folders” which, again, are opened by touching. The folder slides up, and then three sheets, each looking like a printed document, slide up and fan out.

Axis indicator on left, pointing down. One document sliding up from a folder. Disclosure (1994)

Note the tilted axis indicator at the left. The Y axis, representing a line extending upwards from the top of Tom’s head, is now leaning towards the horizontal because Tom is looking down at the file drawer. In the shot below, both the folder and then the individual documents are moving up so Tom’s gaze is now back to more or less level.

Close up of three “pages” within a virtual document. Disclosure (1994)

At this point the film cuts away from Tom. Rival executive Meredith, having been foiled in her first attempt at discrediting Tom, has decided to cover her tracks by deleting all the incriminating files. Meredith enters her office and logs on to her Indy workstation. She is using a Command Line Interface (CLI) shell, not the standard SGI Unix shell but a custom Digicom program that also has a graphical menu. (Since it isn’t three dimensional it isn’t interesting enough to show here.)

Tom uses the gloved hand to push the sheets one by one to the side after scanning the content.

Tom scrolling through the pages of one folder by swiping with two fingers. Disclosure (1994)

Quick note: This is harder than it looks in virtual reality. In a 2D GUI moving the mouse over an interface element is obvious. In three dimensions the user also has to move their hand forwards or backwards to get their hand (or finger) in the right place, and unless there is some kind of haptic feedback it isn’t obvious to the user that they’ve made contact.
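The problem in the quick note can be made concrete: a 3D “touch” requires the fingertip to be within tolerance on all three axes, and without haptics the system has to substitute a visual or audio cue. A hypothetical sketch, with an invented tolerance:

```python
# Hypothetical sketch: contact requires the fingertip within a small
# tolerance sphere of the target, and a visual/audio cue stands in for
# the missing haptic feedback. The tolerance value is invented.
import math

TOUCH_TOLERANCE = 0.02  # metres

def check_touch(fingertip, target):
    """Return a substitute feedback cue when the fingertip reaches the target."""
    if math.dist(fingertip, target) <= TOUCH_TOLERANCE:
        return "highlight target and play click"  # stand-in for haptics
    return None

# Close in X and Y but 10 cm short in depth: no contact, and without any
# feedback the user has no way to know why the touch failed.
print(check_touch((0.30, 1.20, -0.40), (0.30, 1.20, -0.50)))   # None
print(check_touch((0.30, 1.20, -0.495), (0.30, 1.20, -0.50)))  # cue fires
```

Real VR systems address this with hover highlights, snapping, or audio ticks for exactly this reason.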

Tom now receives a nasty surprise.

The shot below shows Tom’s photorealistic avatar at the left, standing in front of the open file cabinet. The green shape on the right is the avatar of Meredith who is logged in to a regular workstation. Without the laser scanners and cameras her avatar is a generic wireframe female humanoid with a face photograph stuck on top. This is excellent design, making The Corridor usable across a range of different hardware capabilities.

Tom sees the Meredith avatar appear. Disclosure (1994)

Why does The Corridor system place her avatar here? A multiuser computer system, or even just a networked file server, obviously has to know who is logged on. Unix systems in general, and command line shells in particular, also track which directory the user is “in”: the current working directory. Meredith is using her CLI interface to delete files in a particular directory, so The Corridor can position her avatar in the corresponding virtual reality location. Or rather, the avatar glides into position rather than suddenly popping into existence: Tom is only surprised because the documents blocked his virtual view.

Quick note: While this is plausible, there are technical complications. Command line users often open more than one shell at a time in different directories. In such a case, what would The Corridor do? Duplicate the wireframe avatar in each location? In the real world we can’t be in more than one place at a time; would doing so contradict the virtual reality metaphor?
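The scene implies a mapping from each shell session’s current working directory to a position in the virtual building. A minimal sketch, assuming invented paths and coordinates (the film shows only the behaviour, not the mechanism):

```python
# Hypothetical sketch of how The Corridor might place wireframe avatars:
# map each logged-in user's current working directory to the room that
# displays that directory's files. Everything here is invented.

# Directory-to-room mapping, built when the virtual building is generated.
ROOM_FOR_DIRECTORY = {
    "/corp/communications": (10.0, 0.0, 4.0),
    "/corp/operations": (10.0, 0.0, 8.0),
    "/corp/research/malaysia": (22.0, 0.0, 4.0),
}

def avatar_positions(sessions):
    """One avatar per shell session; a user with several shells open in
    different directories would appear in several places at once."""
    positions = []
    for user, cwd in sessions:
        room = ROOM_FOR_DIRECTORY.get(cwd)
        if room is not None:
            positions.append((user, room))
    return positions

# Two shells open in different directories: the metaphor-breaking result
# is the same user appearing as two avatars.
sessions = [
    ("meredith", "/corp/research/malaysia"),
    ("meredith", "/corp/communications"),
]
print(avatar_positions(sessions))
```

The one-avatar-per-session choice shown here is just one possible answer to the quick note’s question; a system that insisted on the real-world metaphor would instead have to pick a single session, perhaps the most recently active one.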

There is an asymmetry here in that Tom knows Meredith is “in the system” but not vice versa. Meredith could in theory use CLI commands to find out who else is logged on and whether anyone was running The Corridor, but she would need to actively seek out that information and has no reason to do so. It didn’t occur to Tom either, but he doesn’t need to think about it: the virtual reality environment conveys more information about the system by default.

We briefly cut away to Meredith confirming her CLI delete command. Tom sees this as the file drawer lid emitting beams of light which rotate down. These beams first erase the floating sheets, then the folders in the drawer. The drawer itself now has a red “DELETED” label and slides back into the wall.

Tom watches Meredith deleting the files in an open drawer. Disclosure (1994)

Tom steps further into the room. The same red labels appear on the other file drawers even though they are currently closed.

Tom watches Meredith deleting other, unopened, drawers. Disclosure (1994)

Talking to an Angel

Tom now switches to using the system voice interface, saying “Angel I need help” to bring up the virtual reality assistant. Like everything else we’ve seen in this VR system the “angel” rezzes up from a point cloud, although much more quickly than the architecture: people who need help tend to be more impatient and less interested in pausing to admire special effects.

The voice assistant as it appears within VR. Disclosure (1994)

Just in case the user is now looking in the wrong direction the angel also announces “Help is here” in a very natural sounding voice.

The angel is rendered with white robe, halo, harp, and rapidly beating wings. This is horribly clichéd, but a help system needs to be reassuring in appearance as well as function. An angel appearing as a winged flying serpent or wheel of fire would be more original and authentic (yes, really: Biblically Accurate Angels) but users fleeing in terror would seriously impact the customer satisfaction scores.

Now Tom has a short but interesting conversation with the angel, beginning with a question:

  • Tom: Is there any way to stop these files from being deleted?
  • Angel: I’m sorry, you are not level five.
  • Tom: Angel, you’re supposed to protect the files!
  • Angel: Access control is restricted to level five.

Tom has made the mistake, as described in chapter 9 (Anthropomorphism) of the book, of ascribing more agency to this software program than it actually has. He thinks he is engaged in a conversational interface (chapter 6, Sonic Interfaces) with a fully autonomous system, which should therefore be interested in and care about the wellbeing of the entire system. It isn’t, because this is just a limited-command voice interface to a guide.

Even though this is obviously scripted rather than a genuine error, I think this raises an interesting question for real world interface designers: do users expect that an interface with higher visual quality/fidelity will be more realistic in other aspects as well? If a voice interface assistant appears as a simple polyhedron with no attempt at photorealism (say, like Bit in Tron) or with zoomorphism (say, like the search bear in Until the End of the World) will users adjust their expectations for speech recognition downwards? I’m not aware of any research that might answer this question. Readers?

Despite Tom’s frustration, the angel has given an excellent answer – for a guide. A very simple help program would have recited the command(s) that could be used to protect files against deletion. Which would have frustrated Tom even more when he tried to use one and got some kind of permission denied error. This program has checked whether the user can actually use commands before responding.
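The behaviour the angel exhibits, checking access before reciting a command, is easy to sketch. Everything here is invented for illustration (the level numbers, usernames, and command names); only the “level five” refusal comes from the film:

```python
# Hypothetical sketch of a help system that checks the user's access level
# before reciting a command, instead of suggesting something the user
# cannot actually run. All names and levels are invented.

USER_LEVELS = {"demo": 1, "tom": 1, "meredith": 5}

# request -> (command, required access level)
COMMANDS = {
    "protect files": ("set-protection", 5),
    "list files": ("ls", 1),
}

def help_response(user, request):
    command, required_level = COMMANDS[request]
    if USER_LEVELS.get(user, 0) < required_level:
        return f"Access control is restricted to level {required_level}."
    return f"Use the '{command}' command."

print(help_response("tom", "protect files"))       # the angel's refusal
print(help_response("meredith", "protect files"))  # a level-five user
```

A naive help system would skip the level check and return the command name unconditionally, producing exactly the frustrating permission-denied dead end described above.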

This does contradict the earlier VR demonstration, where we were told that the user had unlimited access. I would explain this as “unlimited read access, not write”: the presenter didn’t think it worthwhile to go into such detail for the mostly non-technical audience.

Tom is now aware that he is under even more time pressure as the Meredith avatar is still moving around the room. Realising his mistake, he uses the voice interface as a query language.

“Show me all communications with Malaysia.”
“Telephone or video?”
“Video.”

This brings up a more conventional looking GUI window because not everything in virtual reality needs to be three-dimensional. It’s always tempting for a 3D programmer to re-implement everything, but it’s also possible to embed 2D GUI applications into a virtual world.

Tom looks at a conventional 2D display of file icons inside VR. Disclosure (1994)

The window shows a thumbnail icon for each recorded video conference call. This isn’t very helpful, so Tom again decides that a voice query will be much faster than looking at each one in turn.

“Show me, uh, the last transmission involving Meredith.”

There’s a short 2D transition effect swapping the thumbnail icon display for the video call itself, which starts playing at just the right point for plot purposes.

Tom watches a previously recorded video call made by Meredith (right). Disclosure (1994)

While Tom is watching and listening, Meredith is still typing commands. The camera orbits around behind the video conference call window so we can see the Meredith avatar approach, which also shows us that this window is slightly three dimensional, the content floating a short distance in front of the frame. The film then cuts away briefly to show Meredith confirming her “kill all” command. The video conference recordings are deleted, including the one Tom is watching.

Tom is informed that Meredith (seen here in the background as a wireframe avatar) is deleting the video call. Disclosure (1994)

This is also the moment when the downstairs VIPs return to the hotel suite, so the scene ends with Tom managing to sneak out without being detected.

Virtual reality has saved the day for Tom. The documents and video conference calls have been deleted by Meredith, but he knows that they once existed and has a colleague retrieve the files he needs from the backup tapes. (Which is good writing: the majority of companies shown in film and TV never seem to have backups for files, no matter how vital.) Meredith doesn’t know that he knows, so he has the upper hand to expose her plot.

Analysis

How believable is the interface?

I won’t spend much time on the hardware, since our focus is on file browsing in three dimensions. From top to bottom, the virtual reality system starts as believable and becomes less so.

Hardware

The headset and glove look like real VR equipment, believable in 1994 and still so today. Having only one glove is unusual, and makes impossible some of the common gesture actions described in chapter 5 of Make It So, which require both hands.

The “laser scanners” that create the 3D geometry and texture maps for the 3D avatar and perform real time body tracking would more likely be cameras, but that would not sound as cool.

And lastly the walking platform apparently requires our user to stand on large marbles or ball bearings and stay balanced while wearing a headset. Uh…maybe…no. Apologetics fails me. To me it looks like it would be uncomfortable to walk on, almost like deterrent paving.

Software

The Corridor, unlike the 3D file browser used in Jurassic Park, is a special effect created for the film. It was a mostly-plausible, near-future system in 1994, except for the photorealistic avatar. Usually this site doesn’t discuss historical context (the “new criticism” stance), but I think in this case it helps to explain how this interface would have appeared to audiences almost two decades ago.

I’ll start with the 3D graphics of the virtual building. My initial impression was that The Corridor could have been created as an interactive program in 1994, but that was my memory compressing the decade. During the 1990s 3D computer graphics, both interactive and CGI, improved at a phenomenal rate. The virtual building would not have been interactive in 1994, was possible on the most powerful systems six years later in 2000, and looks rather old-fashioned compared to what the game consoles of the 21st century can achieve.

For the voice interface I made the opposite mistake. Voice interfaces on phones and home computing appliances became common in the second decade of the 21st century, but the underlying technology is much older. Apple Macintosh computers in 1994 had text-to-speech synthesis with natural sounding voices and limited-vocabulary voice command recognition. (And without needing an Internet connection!) So the voice interface in the scene is believable.

The multi-user aspects of The Corridor were possible in 1994. The wireframe avatars for users not in virtual reality are unflattering or perhaps creepy, but not technically difficult. As a first iteration of a prototype system it’s a good attempt to span a range of hardware capabilities.

The virtual reality avatar, though, is not believable for the 1990s and would be difficult today. Photographs of the body, made during the startup scan, could be used as a texture map for the VR avatar. But live video of the face would be much more difficult, especially when the face is partly obscured by a headset.

How well does the interface inform the narrative of the story?

The virtual reality system in itself is useful to the overall narrative because it makes the Digicom company seem high tech. Even in 1994 CD-ROM drives weren’t very interesting.

The Corridor is essential to the tension of the scene where Tom uses it to find the files, because otherwise the scene would be much shorter and really boring. If we ignore the virtual reality these are the interface actions:

  • Tom reads an email.
  • Meredith deletes the folder containing those emails.
  • Tom finds a folder full of recorded video calls.
  • Tom watches one recorded video call.
  • Meredith deletes the folder containing the video calls.

Imagine how this would have looked if both were using a conventional 2D GUI, such as the Macintosh Finder or MS Windows Explorer. Double click, press and drag, double click…done.

The Corridor slows down Tom’s actions and makes them far more visible and understandable. Thanks to the virtual reality avatar we don’t have to watch an actor push a mouse around. We see him moving and swiping, being surprised and reacting; and the voice interface adds extra emotion and some useful exposition. It also helps with the plot, giving Tom awareness of what Meredith is doing without having to actively spy on her, or look at some kind of logs or recordings later on.

Meredith, though, can’t use the VR system because then she’d be aware of Tom as well. Using a conventional workstation visually distinguishes and separates Meredith from Tom in the scene.

So overall, though the “action” is pretty mundane, it’s crucial to the plot, and the VR interface helps make this interesting and more engaging.

How well does the interface equip the character to achieve their goals?

As described in the film itself, The Corridor is a prototype for demonstrating virtual reality. As a file browser it’s awful, but since Tom has lost all his normal privileges this is the only system available, and he does manage to eventually find the files he needs.

At the start of the scene, Tom spends quite some time wandering around a vast multi-storey building without a map, room numbers, or even coordinates overlaid on his virtual view. Which seems rather pointless because all the files are in one room anyway. As previously discussed for Johnny Mnemonic, walking or flying everywhere in your file system seems like a good idea at first, but often becomes tedious over time. Many actual and some fictional 3D worlds give users the ability to teleport directly to any desired location.

Then the file drawers in each cabinet have no labels either, so Tom has to look carefully at each one in turn. There is so much more the interface could be doing to help him with his task, and even help the users of the VR demo learn and explore its technology as well.

Contrast this with Meredith, who uses her command line interface and 2D GUI to go through files like a chainsaw.

Tom becomes much more efficient with the voice interface. Which is just as well, because if he hadn’t, Meredith would have deleted the video conference recordings while he was still staring at virtual filing cabinets. However neither the voice interface nor the corresponding file display need three dimensional graphics.

There is hope for version 2.0 of The Corridor, even restricting ourselves to 1994 capabilities. The first and most obvious improvement is to copy 2D GUI file browsers, or the 3D file browser from Jurassic Park, and show the corresponding text name next to each graphical file or folder object. The voice interface is so good that it should be turned on by default without requiring the angel. And finally, add some kind of map overlay with a moving “you are here” dot, like the maps that players of 3D games such as Doom could display with a keystroke.

Film making challenge: VR on screen

Virtual reality (or augmented reality systems such as HoloLens) provides a better viewing experience for 3D graphics by creating the illusion of real three dimensional space rather than a 2D monitor. But it is always a first person view, and unlike conventional 2D monitors, nobody else can see what the VR user is seeing without a deliberate mirroring/debugging display. This is an important difference from other advanced or speculative technologies that film makers might choose to include. Showing a character wielding a laser pistol instead of a revolver, or driving a hover car instead of a wheeled car, hardly changes how to stage a scene, but VR does.

So, how can we show virtual reality in film?

There’s the first-person view corresponding to what the virtual reality user is seeing themselves. (Well, half of what they see since it’s not stereographic, but it’s cinema VR, so close enough.) This is like watching a screencast of someone else playing a first person computer game, the original active experience of the user becoming passive viewing by the audience. Most people can imagine themselves in the driving seat of a car and thus make sense of the turns and changes of speed in a first person car chase, but the film audience probably won’t be familiar with the VR system depicted and will therefore have trouble understanding what is happening. There’s also the problem that viewing someone else’s first-person view, shifting and changing in response to their movements rather than your own, can make people disoriented or nauseated.

A third-person view is better for showing the audience the character and the context in which they act. But not the diegetic real-world third-person view, which would be the character wearing a geeky headset and poking at invisible objects. As seen in Disclosure, the third person view should be within the virtual reality.

But in doing that, now there is a new problem: the avatar in virtual reality representing the real character. If the avatar is too simple the audience may not identify it with the real world character and it will be difficult to show body language and emotion. More realistic CGI avatars are increasingly expensive and risk falling into the Uncanny Valley. Since these films are science fiction rather than factual, the easy solution is to declare that virtual reality has achieved the goal of being entirely photorealistic and just film real actors and sets. Adding the occasional ripple or blur to the real world footage to remind the audience that it’s meant to be virtual reality, again as seen in Disclosure, is relatively cheap and quick.
So, solving all these problems results in the cinematic trope we can call Extradiegetic Avatars, which are third-person, highly-lifelike “renderings” of characters, with a telltale Hologram Projection Imperfection for audience readability, that may or may not be possible within the world of the film itself.

Tattoo surveillance

In the prior Idiocracy post I discussed the car interface, especially in terms of how it informs the passengers what is happening when it is remotely shut down. Today let’s talk about the passive interface that shuts it down: namely, Joe’s tattoo and the distance-scanning vending machine.

It’s been a while since that prior post, so here’s a recap of what’s happening in Idiocracy in this scene:

When Frito is driving Joe and Rita away from the cops, Joe happens to gesture with his hand above the car window, where a vending machine he happens to be passing spots the tattoo. Within seconds two harsh beeps sound in the car and a voice says, “You are harboring a fugitive named NOT SURE. Please, pull over and wait for the police to incarcerate your passenger.”

Frito’s car begins slowing down, and the dashboard screen shows a picture of Not Sure’s ID card and big red text zooming in a loop reading PULL OVER.

It’s a fast scene and the beat feels more like a filmmaker’s excuse to get them out of the car and on foot as they hunt for the Time Masheen. I breezed by it in an earlier post, but it bears some more investigation.

This is a class of transaction where, like taxes and advertising, the subject is an unwilling and probably uncooperative participant. But this same interface has to work for payment, in which the subject is a willing participant. Keep this in mind as we look first at the proximate problem (locating the fugitive for apprehension) and then at the ultimate goal (how a culture deals with crime).

A quick caveat: While it’s fair to say I’m an expert on interaction design, I’m Just a Guy when it comes to criminology and jurisprudence. And these are ideas with some consequence. Feel free to jump in and engage in friendly debate on any of these points.

Proximate problem: Finding the fugitive

The red scan is fast, but it’s very noticeable: the sudden flash of light, the red color. This could easily tip a fugitive off and cause them to redouble their efforts at evasion, maybe even covering up the tattoo, making the law’s job of apprehending them that much harder. Better would be some stealthier means of detection like RFID chips. I know, that’s not as cinegenic, so the movie version would instead use image recognition, showing the point of view from the vending machine camera (machine point of view, or MPOV), with some UI cues showing it identifying, zooming in on, and confirming the barcode.

Yes, that’s a shout-out.

So we can solve stealth-detection cinematically, using tropes. But anytime a designer is asked to consider a scenario, it is a good idea to see if the problem can be more effectively addressed somewhere higher up the goal chain. Is stealth-detection really better?

Goal chain

  • Why is the system locating him? To tell authorities so they can go there and apprehend him.
  • Why are they apprehending him? He has shown an inability to regulate damaging anti-social behavior (in the eyes of the law, anyway) and the offender must be incarcerated.
  • Why do we try to incarcerate criminals? To minimize potential damage to society while the offender is rehabilitated.
  • Why do we try to rehabilitate criminals? Well, in the Idiocracy, it’s an excuse for damnatio ad vehiculum, that is, violent public spectacle based on the notion that jurisprudence is about punishment-as-deterrent. (Pro-tip: That doesn’t work. Did I say that doesn’t work? Because that doesn’t work.) In a liberal democracy like ours, it’s because we understand that the mechanisms of law are imperfect and we don’t want the state to enact irreversible capital punishment when it could be wrong, and, moreover, that human lives have intrinsic value. We should try to give people who have offended a chance to demonstrate an understanding of their crime and the willingness to behave lawfully in the future. Between incarceration and rehabilitation, we seek to minimize crime.
  • Why do we try to minimize crime? (This ought to be self-evident, but juuust in case…) Humans thrive when they do not need to guard against possible attack by every other human they encounter. They can put their resources towards the pursuit of happiness rather than the defense against encroachment. Such lawful societies benefit from network effects.

The MPOV suggestion above fixes the problem at the low level of detection, but each step in the goal chain invites design at a more effective level. It’s fun to look at each of these levels and imagine an advanced-technology solution (and even find sci-fi examples of each), but for this post, let’s look at the last one, minimizing crime, in the context of the tattoo scanner.

Ultimate problem: Preventing crime

In his paper “Deterrence in the Twenty-First Century,” Daniel Nagin reviewed state-of-the-art criminology findings and listed five conclusions about deterrence. Number one on his list is that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Research shows clearly that the chance of being caught is a vastly more effective deterrent than even draconian punishment.

Daniel S. Nagin, 2013

How might we increase the evident chance of being caught?

  1. Fund police forces well so they are well-staffed, well-trained, and have a near-constant, positive presence in communities, and impressive capture rates. Word would get around.
  2. Nagin himself suggests concentrating police presence in criminal hotspots, ensuring that they have visible handcuffs and walkie-talkies.
  3. Another way might be media: making sure that potential criminals hear, through their networks, an overwhelming number of stories of criminals being captured successfully. This could involve editorial choice, or even media manipulation, filtering to ensure that “got caught” narratives appear in feeds more often than “got away with it” ones. But we’re hopefully becoming more media savvy as a result of Recent Things, and this seems more deceptive than persuasive.
  4. The other way is to increase the sense of observation. And that leads us (as so many things do) to the panopticon.
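Nagin’s certainty-over-severity finding can be sketched with a toy expected-value model. This is my illustration, not anything from the paper: suppose offenders weigh the chance of capture roughly linearly but discount the length of a sentence steeply. Then doubling patrols beats doubling sentences every time.

```python
import math

# Toy deterrence model (an illustration, not from Nagin's paper).
# Perceived cost = (chance of capture) x (steeply discounted severity).
# Offenders feel p_caught linearly but discount sentence length
# logarithmically, so certainty moves the needle more than severity.

def perceived_cost(p_caught, severity_years):
    """Perceived expected cost of committing a crime."""
    return p_caught * math.log1p(severity_years)

def deterred(p_caught, severity_years, expected_gain):
    """True when the perceived cost outweighs the expected gain."""
    return perceived_cost(p_caught, severity_years) > expected_gain
```

Under this (hypothetical) discounting, raising the chance of capture from 10% to 20% doubles the perceived cost, while doubling a 10-year sentence to 20 barely moves it.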

The Elaboratory*

The Panopticon is almost a trope at this point, but that’s what this scene points to. If you’re not familiar, it is an idea about the design of buildings in which “a number of persons are meant to be kept under inspection,” conceived in the late 1700s by Samuel Bentham and formalized by his brother Jeremy in a series of letters. Here is a useful illustration.

*Elaboratory was one of the alternate terms Bentham suggested for the idea. It didn’t catch on, since it didn’t have the looming all-seeing-eye ring of the other term.

Elevation, section, and plan as drawn by Willey Reveley, 1791

The design of the panopticon is circular, with prisoners living in isolated cells along the perimeter. The interior wall of each cell is open to view so the inmate can be observed by a person in a central tower or “inspector’s lodge.” Things are structured so the inmates cannot tell whether or not they are being observed. (Bentham suggested louvers.) Over time, the idea goes, the inmate internalizes the unseen authority as a constant presence, and begins to regulate themselves, behaving as they believe the guard would have them behave. Bentham thought this was ideal from an efficacy and economic standpoint.

“Ideal perfection, if that were the object, would require that each person should actually [be under the eyes of the persons who should inspect them], during every instant of time.”

—Jeremy Bentham

It’s an idea that has certainly enjoyed currency. If you hadn’t come across the idea via Bentham, you may have come across it via Foucault, who in Discipline and Punish regarded it not as a money-saving design, but as an illustration of the effect of power. Or maybe Orwell, who did not use the term, but extended the idea to all of society in 1984. Or perhaps you heard it from Shoshana Zuboff, who reconceived it for information technology in the workplace in In the Age of the Smart Machine.

Umm…Carol? Why aren’t you at your centrifuge?

In his podcast Theory of Everything, Benjamen Walker dedicates an episode to the argument that the panopticon, as a metaphor, needs to be put away, since…

  1. It builds on one-way observation, and modern social media has us sharing information about ourselves willingly, all the time. The diagram is more dream catcher than bicycle wheel. We volunteer ourselves to the inspector, any inspector, and can become inspectors to anyone else any time. Sousveillance. Stalking.
  2. Most modern uses of the metaphor are anti-government, but surveillance capitalism is a more pernicious problem (here in the West), where advertising uses all the information it can to hijack your reward systems and schlorp money out of you.
  3. Bentham regarded it as a tool for behavior modification, but the metaphor is not used to talk about how surveillance changes us and our identities, but rather as a violation of privacy rights.

It’s a good series, check it out, and hat tip to Brother-from-a-Scottish-Mother John V Willshire for pointing me in its direction.

To Walker’s list I will add another major difference: Panopticon inmates must know they are being watched. It’s critical to the desired internalization of authority. But modern surveillance tries its best to be invisible despite the fact that it gathers an enormous amount of information. (Fortunately it often fails to be invisible, and social media channels can be used to expose the surveillance.)

Guns are bad.

But then, Idiocracy

In Idiocracy, this interface—of the tattoo and the vending machine—is what puts this squarely back in Bentham’s metaphor. The ink is in a place that will be seen very often by the owner, and a place that’s very difficult to casually hide. (I note that the overwhelming majority of Hillfinger [sic] shirts in the movie are even short-sleeved.) So it serves as that permanent—and permanently-visible—identifier. You are being watched. (Holy crap, now I have yet another reason to love Person of Interest. It’s adding the notion of AI surveillance to our collective media impression. Anyway…) In this scene, the flash is a clear signal that he and his co-offenders could see, which means they would tell their friends this story of how easily Joe was caught. It’s pretty cunningly designed as a conspicuous signal.

Imagine how this might work throughout that world. As people went about their business in the Idiocracy, stochastic flashes of light on their own and other people’s wrists would keep sending a signal that everyone is being watched. It’s crappy surveillance, which we don’t like for all the reasons we don’t like it, but it illustrates why stealth-detection may not be the ideal for crime prevention, and why this horrible tattoo might be the thing that a bunch of doomed eggheads designed for a future when all that was left was morons. Turns out, at least for the Idiocracy, this is a pretty well-designed signal for deterrence, which is the ultimate goal of this interface.

Beep.


3 of 3: Brain Hacking

The hospital doesn’t have the equipment to decrypt and download the actual data. But Jane knows that the LoTeks can, so they drive to the ruined bridge that is the LoTek home base. As mentioned earlier under Door Bombs and Safety Catches, the bridge guards nearly kill them due to a poorly designed defensive system. Once again Johnny is not impressed by the people who are supposed to help him.

When Johnny has calmed down, he is introduced to Jones, the LoTek codebreaker who decrypts corporate video broadcasts. Jones is a cyborg dolphin.

Airport Security

After fleeing the Yakuza in the hotel, Johnny arrives in the Free City of Newark, and has to go through immigration control. This process appears to be entirely automated, starting with an electronic passport reader.


After that there is a security scanner, which is reminiscent of HAL from the film 2001: A Space Odyssey.


The green light runs over Johnny from top to bottom.

The Evidence Tray (ordinary use)


Sandmen surrender any physical objects recovered from the bodies of runners to the Übercomputer for evaluation via a strange device I’m calling The Evidence Tray.


As a Sandman enters the large interrogation chamber, a transparent cylinder lowers from the ceiling. At the top of this cylinder, an arm bearing four pin lights rotates continuously. A chrome cone sits in the center of the base. The Sandman can access the interior of the cylinder through a large oblong opening in the side, the top of which is just taller than the Sandmen (who seem to be of near-uniform height).

The Sandman puts any evidence he has found into the bottom of this cylinder. (What if the evidence were too large to fit? What if the critical evidence is not physical, or is ephemeral? But I digress.) In response to his placing the objects, lights on the rotating arm illuminate, scanning them. The voice of the Übercomputer prompts the Sandman to “identify,” a request that is repeated on a large screen mounted on the wall, in view through the transparent backing of the Evidence Tray.


The Sandman identifies himself by placing his palm on the cone in the cylinder’s center, positioning his lifeclock in the small indentation in its tip. The base section of the cylinder illuminates, and after a pause, the voice and screen confirm that his identity has been “affirmed.” Logan removes his hand, and in a flash of blue light the objects in the tray disappear. The film gives no clue as to whether the objects are teleported somewhere or disintegrated into thin air.


Objections

There are of course the usual objections to the authentication. The lifeclock check is really a biometric check, something that Logan “is” (since he can’t remove the lifeclock). Per the principles of multifactor authentication, he should also need to provide an additional factor: something he has (like a key) or something he knows (like a password).
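A minimal sketch of what a proper check might look like, assuming (hypothetically) that the Übercomputer tracked factors by category; the names here are mine, not the film’s:

```python
# Hypothetical multifactor check for the Evidence Tray.
# Factor categories: "inherence" (something you are, e.g. the lifeclock),
# "possession" (something you have), "knowledge" (something you know).

def authenticated(factors, required_categories=2):
    """Pass only when verified factors span at least
    required_categories distinct categories."""
    verified = {f["category"] for f in factors if f["verified"]}
    return len(verified) >= required_categories

# The film's system would fail this check: the lifeclock alone
# covers only one category.
lifeclock_only = [{"category": "inherence", "verified": True}]
with_passphrase = lifeclock_only + [
    {"category": "knowledge", "verified": True},
]
```

Here `authenticated(lifeclock_only)` fails while `authenticated(with_passphrase)` passes, which is exactly the objection above.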

Another objection is that the authentication requires him to put his hand into a teleportation/disintegration chamber. Perhaps narratively this shows the audience the insane levels of trust citizens have in their Nanny Program, but for the real world, let’s just say it’s best that you don’t require police to submit to a Flash Gordon Wood Beast just to hand over exhibit A.

There’s a nice touch in the transparent walls, which let him see the computer screen and get visual confirmation of what he’s hearing. But I suspect the curved surface also adds a bit of distortion to his view that doesn’t help readability. So the industrial design aspects of the interface sort of even out. Unless I’m missing something. Any industrial designers want to weigh in?

A final objection is the unnecessarily vast architecture that is part of the workflow. Why this giant room with a thin cylinder in the middle of it? Sure, there are narrative reasons for it (welcome to this digital heart of darkness), but this seems like something Sandmen would do routinely, and this giant ritual makes a creepy, big deal of it.

Better

Better might be a wide, waist-high cubby off to the side of their offices, whatever those are, with a wide tray and computer screen. Sandmen could drop the evidence into the tray and place their hands into an authenticator outside the tray, initiating the scan. This would save them the awkward time of waiting for the computer to order them to authenticate, and tightly couple the objects with their identity. The improved semiotics say, “I, Logan, found these and am surrendering them to you.” Then if the computer needed to speak more about it, it could summon them to an interlocution room, or something with a similarly awkward 70s name.

REAL TIME FULL SCAN HACKING


When Section 9 monitors a cyborg’s brain for real-time evidence of hacking, we see a monitoring scan. It shows a screen-green wireframe brain floating at an oblique angle in a black space. A 2D rectangle repeatedly builds it with a “wipe” from front to back, which leaves a dim 3D trail in its passing that describes the brain shape. Fans of the National Library of Medicine’s The Visible Human Project may see similarities, though the project’s visualizations would not be available until a year after the film’s release.

In the upper left is a legend reading, “REAL TIME FULL SCAN HACKING” with some numbers, with another unintelligible legend in the lower right. The values in the upper left never change, and the values in the lower legend change too rapidly to read them. After a beat, a text overlay appears on the right hand side of the screen with vaguely-medical terms listed in all capital letters, flying by too quickly to read*. There is an additional device seen in the corner of the frame, with progress-bar-like displays with thick green lines that wobble left and right. Two waveforms hang above this, their labels off screen. Yellow “fireworks” appear near the “temples” of the brain, indicating the parts under attack.

A question of usefulness

If data doesn’t change, or changes too fast to read, it’s worth asking whether it should be shown at all. If it’s moving too fast, other representations might work better: a progress bar, a map, or a sparkline. Of course, many programmers use this kind of raw output during a program’s run so that, if the program stops, the last few activities are immediately visible, so this may be more code than interface.
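For instance, a stream of values that scrolls by unreadably can be collapsed into a glanceable shape. Here is a minimal text sparkline (my sketch, nothing from the film):

```python
# Map each value in a stream onto one of eight block characters,
# producing a tiny inline chart instead of unreadable scrolling text.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values):
    """Render a list of numbers as a one-line bar chart."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1  # avoid division by zero on flat data
    return "".join(
        BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values
    )

print(sparkline([1, 3, 2, 8, 4, 6]))
```

One glance at the resulting shape tells you whether activity is spiking or flat, which is the whole point the screen-green scroll misses.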

*Vaguely-medical terms

If you’re the sort of nerd who obsesses over details, following is the text that flashes on the right-hand side of the display. There’s nothing in it that is really helpful or informative to a review. It’s mostly internal organs or parts of the brain, augmented with “CHECK” and “CONNECT.” There’s one exception, about halfway through the 5-second sequence, where it reads “M.YGODDESS CHECK.” Diegetically, it could be programmer slang for a body part. More likely it’s a reference to Oh! My Goddess!, a manga by Kosuke Fujishima that’s been in print since 1988.


ACCESS
CHECK CONNECT
MOTOR FIBERS CHECK
CONNECT POINT NCL
NCL. AMBIGUOUS
SEARCH AN ARTFICIAL B
NCL. AMBIGUOUS CHECK
AN ARTIFICIAL BODY’S PO
GANGLION SUPERIUS CHECK
NO REJECTION
FORAMEN JUGULARE PAG
GANGLION INFERIUS
GANGLION INFERIUS
PROPER VOLTAGE
RAMIPHARMNGEI CAL.L.D
N. LARYNGEUS SUPERIOR
RAMIPHARYNGI CHECK
PLEXUS PHARYNGEUS CHECK
PLEXUS PHARYNGEUS CHECK
NEXT
M.LEVATOR VELI PALAT
MM.CONSTRICTORES PHA
CALLING…
M.LEVATOR VELI PALAT
MM.CONSTRICTORES PHA
CONNECT
N.LARYNGEUS SUPERIOR
N.LARYNGEUS RECURRE
RAMUS EXTERNUS CHECK
NEXT
M.CIRCOTHYROIDEUS
RAMIESOPHAGEI CALLIN
N.LARYNGEUS RECURRED
NO REJECTION
CHECK FEEDBACK TO
NCL. AMBIGUUS
RAMITRACHEALES CHEC
FEEDBACK TO NCL. AMBI
RAMIESOPHAGEI CHECK
NEXT
N.LARYNGEUS INFERIOR
CONNECT N.VAGUS MOTOR
CHECK OVER
EXTEROCEPTIVE SENSOR
CHECK STRAT
CONNECT POINT NCL
NCL. SPINALIS N TRIG
SEARCH AN ARTIFICAL B
NCL.SPINALIS N.TRIGG
CHECK
AN ARTIFICIAL BODY’S PO
TR.SPINALIS N. TIGGER
NO REJECTION
TR.SPINALIS N.TRIGE
CANALICULUS MASTOID
VISCEROMOTOR FIBERS
CANALICULUS MASTOIDS
CONNECT POINT NCL
NCL. DORSALIS N. VAGI
RAMUS AURICULARIS CH
CHECK FEEDBACK TO
NCL. SPINALIS N. TRIGEG
SEARCH AN ARTIFICIAL B
N. VAGUS ENERROCEPTIN
FEEDBACK TO
NCL. SPINALIS TRIGER
CHECK OVER
ANARTIFICAL BODY’S PO
NCL.DORSAL IS N. VAGI
GANGLION SUPERIUS
NO REJECTION
GANGLION SUPERIUS CH
FORAMEN JUGULARE PAS
GANGLION INFERIUS CHE
SAFETY CONNECT PROGR
RAMICORDIACICERVICA
CALLING…
RAMICORDIACICERVICA
NO REJECTION
NEXT
RAMICORDIACICERVICA
CALLING…
PLESUS CARDIACUS CAL
RAMICORDIACICERVICA
PLESUS CARDIACUS CHE
M. ATSUMO TOKAORU CHE
ATOMIC DISPOSITION C
M.YGODDESS CHECK
CHECK OVER
GUSTATORY FIBERS
CHECK STRAT
CONNECT POINT NCL.
NCL. SOLITARIUS
SEARCH AN ARTIFICAIAL B
NCL. SOLITARIUS CHECK
AN ARTIFICIAL BODY’S PO
GANGLION SUPERIUS
NO NOIZE
NEXT
GANGLION SUPERIUS CH
FORAMEN JUGULARE PRE
GANGLION INFERIUS CHE
GANGLION INFERIUS CHE
RAMIPHARYNGEI CALLING
RAMIPHARYNGEI CHECK
PLEXUS PHARYNGEUS CA
NO REJECTION
PLEXUS PHARYNGEUS CH
TASTE BUDS CALLING
CHECK FEEDBACK TO
NCL. SOLITARIUS
TASTE BUDS CONNECT
FEEDBACK NCL. SOLITAR
CHECK OVER
VISCEPOSENSORY FIBER
CHECK STRAT
CONNECT POINT NCL
NCL SOLITARIUS
SEACH AN ARTIFICIAL B
NCL. SOLITARIUS CHECK
AN ARTIFICIAL BODY’S PO
TRACTUS SOLITARIUS C
NO NOIZE
TRACTUS SOLITARIUS C
GANGLION SUPERIUS CA
NO REJECTION
GANGLION SUPERIUS CH
FORAMEN JUGULARE PAS
GANGLION INFERIUS CA
N.LARYNGEUS SUPERIOR
N.LARYNGEUS RECURRED
PLEXUS PULMONAL IS CA
N. LARYNGEUS RECURRED
RAMIESOPHAGUI CALLI
N. LARYNGEYS INFERIOR
RAMITRACHEALES SUPERIOR
RAMUS INTERNUS CALLI
PLEXUS INTERNUS CALLI
PLEXUS PULMONALIS CH
PLEXUS ESOPHAGEUS CA
RAMIESOPHAGEI CHECK
N.LARYNGEUS INFERIOR
PLEXUS EXOPHAGEUS CH
TRUNCUS VAGALIS POST
RAMITRACHEALES CHEC
TRUNCUS VAGALIS ANTE
RAMUS INTERNUS CHECK
VOCAL CORO CALLING
TRUNCUS VAGALIS POST
RAMICOEL CALLING
RAMIRENALES CALLING
TRUNCUS VAGALIS ANTE
RAMIHEPATICI CHECK
PLEXUS HAPATICUS CAL
RAMIGASTRICIPOSTER
RAMIRENALES CHECK
PLEXUS RENALIS CALLI
RAMICOELIACI CHECK
PLEXUS COELICUS CALL
RAMIHEPATICI CHECK
PLEXUSHEPATICUS CALL
RAMIGASTRICI ANTERIO
PLEXUS COELICUS CHEC
RAMI GASTRICIPOSTER
PLEXUS RENALIS CHECK
RAMIGASTRICI ANTERIO
CHECK FEEDBACK TO
BCL. SOLITARUS
PLEXUS HEPATICUS CHE
FEEDBACK TO NCL. SOLIT
VOCAL CORD CHECK
CHECK OVER
CHECK CONNECT
MOTOR FIBERS CHECK
CONNECT POINT NCL
NCL. AMBIGUUS
SEARCH AN ARTIFICAL B
NCL.AMBIGUOUS CHECK
AN ARTIFICAL BODY’S
GANGLION SUPERIUS CA
GANGLION SUPERIUS CH
NO REJECTION
FORAMEN JUGULARE PAS
GANGLION INFERIUS CAL
GANGLION INFERIUS CHE
PROPER VOLTAGE

Topography “Pups”

The “pups,” as low-grade sociopath and geologist Fifield calls them, are a set of spheres that float around and spatially map the surface contours of a given space in real time.


To activate one, Fifield twists its hemispheres 90 degrees along the equator, and it begins to glow along two red rings.

When held up for a few seconds, they rise to the vertical center of the space they are in and begin to fly in different directions, shining lasers in a coronal ring as they go.


In this way they scan the space and report what they detect of the internal topography back to the ship, where it is reconstructed in 3D in real time. The resulting volumetric map features not just the topography, but icons (yellow rotating diamonds with last initials above them) to represent the locations of individual scientists and of course the pups themselves.


The pups continue forward along the axis of a space until they find a door, at which they will wait until they are let inside. How they recognize doors in alien architecture is a mystery. But they must, or the first simple dead end or burrow would render them inert.

The pups are simple, and for that they’re pretty cool. Activation by twist-and-lift is easy within the constraints of the environment suits, easy to remember, and quick to execute, but deliberate enough not to be performed accidentally. Unfortunately we never see how they are retrieved, which raises some interesting interaction design challenges.