These examples, although fictional, demonstrate that “3D” can be used in different ways.
In Jurassic Park and Hackers, 3D graphics are used to create a richer display with more information density, though it is not photorealistic. The Jurassic Park file browser is primarily a symbolic 2D representation of the file system hierarchy, projected onto a perspective ground plane to make more elements visible at once. The third dimension is used to indicate the number of sub-elements or their size. In Hackers, the City of Text towers most likely represent the actual contents of each physical disk drive in the corresponding real-world location, and the pulses and colors indicate levels of activity or threat.
The Corridor in Disclosure, and its VirtuGood 6500 close copy in Community, instead create a more photorealistic virtual world. The file system becomes a building or landscape, and users are embodied within the virtual world as avatars. Like the pre-computer memory palace, this should take advantage of the human ability to remember and navigate our way around. But The Corridor blows it by putting all the files within one room, and representing them as sheets of paper within identical filing cabinets. Walking through the 3D architecture becomes a pretty but time-wasting diversion.
I’m personally disappointed not to find any true computer memory palaces, whether fictional or real. As mentioned in the introduction, an essential characteristic of the memory palace is that each item be stored in a unique location, visually distinct from any other. None of the 3D file systems I’ve been able to find do this, instead using generic icons throughout. Computers are actually quite good at creating almost infinite variations in appearance, e.g. fractals in 2D and various CGI landscapes and underwater environments in 3D. A computer memory palace would at least be more interesting to look at.
Where are they today?
Since the 1990s the 3D file browser has seemingly faded away, both in reality and in film/TV. Let’s (briefly) think about why.
The SGI 3D file browser shown in Jurassic Park was not the only one to be released as a real piece of software. Although personal computers could easily run such a 3D file browser by the year 2000, and mobile phones a few years later, the systems we actually use have remained two dimensional. The only widespread use of 3D spatial organisation that I’m aware of is the Apple Time Machine backup software, which uses distance from the viewer to represent increasing age. It’s a linear sequence of 2D desktops rather than allowing true three dimensional movement in any direction. Even native 3D systems like the Oculus Quest present a 2D GUI wrapped around the user in a cylinder.
We don’t have our files arranged into 3D buildings or worlds, but there have been other developments since the first 2D file browsers. Keyword search is now built into most GUI desktops. Photo collections can be viewed by timeline, or by geographical location; and music collections arranged by genre, artist, or album. So one likely reason why we don’t have real world 3D file browsers is that in themselves they don’t provide enough of an advantage over the existing 2D GUIs to make changing worthwhile.
User interfaces in film and TV are not constrained by reality or practicality, so their absence must be due to other reasons. Sometimes real world interface trends affect what we see on the screen, for instance the replacement of command line interfaces by graphical ones, but for file browsing we’re still using the 2D GUI browsers from the 1990s. And it’s not because of technical difficulty or expense, because we’ve seen that 1990s feature-film 3D effects can now be created within the budget of a sitcom episode. An example is the 2008 film Iron Man, already mentioned for using a 3D trashcan within Tony Stark’s CAD software system. Later in the film, Pepper needs to copy some files from the corporate PC of evil executive Obadiah Stane. As in the earlier films covered in this review, Stark Industries is portrayed as an advanced technology company, so this PC also has a custom GUI created for the film. Here though there is only a very slight use of 3D to arrange flat file icons in order; otherwise it closely resembles existing 2D desktops. The filmmakers could have inserted a 3D file browser with perhaps volumetric projection to match Tony’s 3D CAD system but chose not to.
Pepper selects a folder in the text list at left and it is also highlighted in the graphical list of overlaid translucent icons at right. Iron Man (2008)
Copying computer files (or more dramatically “the data”) still happens in science fiction or near-future film settings, but it has also become more common in everyday life with the spread of personal computers and now smartphones worldwide. In my opinion, this is the most likely reason why we don’t see 3D and VR file browsers any more: we the audience know how to copy files and search for them, and won’t be impressed by attempts to make it “high tech” with fanciful user interfaces. File systems and browsers have become, well, boring. So we can look back fondly on these cinematic dalliances with 3D file management, but recognize them as something we tried for a while, learned from, and eventually put down.
Our next 3D file browsing system is from the 1994 film Disclosure. Thanks to site reader Patrick H Lauke for the suggestion.
Like Jurassic Park, Disclosure is based on a Michael Crichton novel, although this time without any dinosaurs. (Would-be scriptwriters should compare the relative success of these two films when planning a study program.) The plot of the film is corporate infighting within Digicom, manufacturer of high tech CD-ROM drives—it was the 1990s—and also virtual reality systems. Tom Sanders, executive in charge of the CD-ROM production line, is being set up to take the blame for manufacturing failures that are really the fault of cost-cutting measures by rival executive Meredith Johnson.
The Corridor: Hardware Interface
The virtual reality system is introduced at about 40 minutes, using the narrative device of a product demonstration within the company to explain to the attendees what it does. The scene is nicely done, conveying all the important points we need to know in two minutes. (To be clear, some of the images used here come from a later scene in the film, but it’s the same system in both.)
The process of entangling yourself with the necessary hardware and software is quite distinct from interacting with the VR itself, so let’s discuss these separately, starting with the physical interface.
Tom wearing VR headset and one glove, being scanned. Disclosure (1994)
In Disclosure the virtual reality user wears a headset and one glove, all connected by cables to the computer system. Like most virtual reality systems, the headset is responsible for visual display, audio, and head movement tracking; the glove for hand movement and gesture tracking.
There are two “laser scanners” on the walls. These are the planar blue lights, which scan the user’s body at startup. After that they track body motion, although since the user still has to wear a glove, the scanners presumably just track approximate body movement and orientation without fine detail.
Lastly, the user stands on a concave hexagonal plate covered in embedded white balls, which allows the user to “walk” on the spot.
Closeup of user standing on curved surface of white balls. Disclosure (1994)
Searching for Evidence
The scene we’re most interested in takes place later in the film, the evening before a vital presentation which will determine Tom’s future. He needs to search the company computer files for evidence against Meredith, but discovers that his normal account has been blocked from access. He knows though that the virtual reality demonstrator is on display in a nearby hotel suite, and also knows about the demonstrator having unlimited access. He sneaks into the hotel suite to use The Corridor. Tom is under a certain amount of time pressure because a couple of company VIPs and their guests are downstairs in the hotel and might return at any time.
The first step for Tom is to launch the virtual reality system. This is done from an Indy workstation, using the regular Unix command line.
The command line to start the virtual reality system. Disclosure (1994)
Next he moves over to the VR space itself. He puts on the glove but not the headset, presses a key on the keyboard (of the VR computer, not the workstation), and stands still for a moment while he is scanned from top to bottom.
Real world Tom, wearing one VR glove, waits while the scanners map his body. Disclosure (1994)
On the left is the Indy workstation used to start the VR system. In the middle is the external monitor which will, in a moment, show the third person view of the VR user as seen earlier during the product demonstration.
Now that Tom has been scanned into the system, he puts on the headset and enters the virtual space.
The Corridor: Virtual Interface
“The Corridor,” as you’ve no doubt guessed, is a three dimensional file browsing program. It is so named because the user will walk down a corridor in a virtual building, the walls lined with “file cabinets” containing the actual computer files.
Three important aspects of The Corridor were mentioned during the product demonstration earlier in the film. They’ll help structure our tour of this interface, so let’s review them now.
There is a voice-activated help system, which will summon a virtual “Angel” assistant.
Since the computers themselves are part of a multi-user network with shared storage, there can be more than one user “inside” The Corridor at a time. Users who do not have access to the virtual reality system will appear as wireframe body shapes with a 2D photo where the head should be.
There are no access controls and so the virtual reality user, despite being a guest or demo account, has unlimited access to all the company files. This is spectacularly bad design, but necessary for the plot.
With those bits of system exposition complete, now we can switch to Tom’s own first person view of the virtual reality environment.
Virtual world Tom watches his hands rezzing up, right hand with glove. Disclosure (1994)
There isn’t a real background yet, just abstract streaks. The avatar hands are rezzing up, and note that the right hand, wearing the glove, has a different appearance from the left. This mimics the real world, so eases the transition for the user.
Overlaid on the virtual reality view is a Digicom label at the bottom and four corner brackets which are never explained, although they do resemble those used in cameras to indicate the preferred viewing area.
To the left is a small axis indicator, the three green lines labeled X, Y, and Z. These show up in many 3D applications because, silly though it sounds, it is easy in a 3D computer environment to lose track of directions or even which way is up. A common fix for the user being unable to see anything is just to turn 180 degrees around.
We then switch to a third person view of Tom’s avatar in the virtual world.
Tom is fully rezzed up, within cloud of visual static. Disclosure (1994)
This is an almost photographic-quality image. To remind the viewers that this is in the virtual world rather than real, the avatar follows the visual convention described in chapter 4 of Make It So for volumetric projections, with scan lines and occasional flickers. An interesting choice is that the avatar also wears a “headset”, but it is translucent so we can see the face.
Now that he’s in the virtual reality, Tom has one more action needed to enter The Corridor. He pushes a big button floating before him in space.
Tom presses one button on a floating control panel. Disclosure (1994)
This seems unnecessary, but we can assume that in the future of this platform, there will be more programs to choose from.
The Corridor rezzes up, the streaks assembling into wireframe components which then slide together as the surfaces are shaded. Tom doesn’t have to wait for the process to complete before he starts walking, which suggests that this is a Level Of Detail (LOD) implementation where parts of the building are not rendered in detail until the user is close enough for it to be worth doing.
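For the curious, the LOD idea can be sketched in a few lines. This is a generic illustration of the technique, not anything from the film or from real SGI software; the threshold distances and level numbering are invented:

```python
# Distance-based level-of-detail selection: the farther the viewer is
# from a piece of architecture, the cheaper the representation used.
# Thresholds (in arbitrary world units) are invented for illustration.

def select_lod(viewer_pos, object_pos, thresholds=(10.0, 30.0, 80.0)):
    """Return 0 (full detail) up to len(thresholds) (not rendered at all)."""
    dx, dy, dz = (v - o for v, o in zip(viewer_pos, object_pos))
    distance = (dx * dx + dy * dy + dz * dz) ** 0.5
    for level, limit in enumerate(thresholds):
        if distance <= limit:
            return level
    return len(thresholds)  # beyond the last threshold: skip rendering
```

This is why Tom can start walking immediately: distant sections of the building sit at the cheapest level (or aren’t drawn at all) until he gets close enough for the detail to be worth computing.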
Tom enters The Corridor. Nearby floor and walls are fully rendered, the more distant section is not complete. Disclosure (1994)
The architecture is classical, rendered with the slightly artificial-looking computer shading that is common in 3D computer environments because it needs much less computation than trying for full photorealism.
Instead of a corridor this is an entire multistory building. It is large and empty, and as Tom is walking bits of architecture reshape themselves, rather like the interior of Hogwarts in Harry Potter.
Although there are paintings on some of the walls, there aren’t any signs, labels, or even room numbers. Tom has to wander around looking for the files, at one point nearly “falling” off the edge of the floor down an internal air well. Finally he steps into one archway room entrance and file cabinets appear in the walls.
Tom enters a room full of cabinets. Disclosure (1994)
Unlike the classical architecture around him, these cabinets are very modern looking with glowing blue light lines. Tom has found what he is looking for, so now begins to manipulate files rather than browsing.
Virtual Filing Cabinets
The four nearest cabinets, according to the titles above, are:
Communications
Operations
System Control
Research Data.
There are ten file drawers in each. The drawers are unmarked; labels appear only when the user looks directly at one, so Tom has to move his head to centre each drawer in turn to find the one he wants.
Tom looks at one particular drawer to make the title appear. Disclosure (1994)
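The gaze-directed label reveal could plausibly be a simple angular test: take the head’s forward vector and show the label of whichever drawer lies within a few degrees of the centre of view. A generic sketch, with the drawer names, positions, and five-degree threshold all invented for illustration:

```python
import math

def visible_label(head_pos, gaze_dir, drawers, max_angle_deg=5.0):
    """Return the name of the drawer nearest the gaze centre, or None.

    drawers: list of (name, (x, y, z) position) tuples.
    """
    best, best_angle = None, max_angle_deg
    gx, gy, gz = gaze_dir
    gmag = math.sqrt(gx * gx + gy * gy + gz * gz)
    for name, (px, py, pz) in drawers:
        dx = px - head_pos[0]
        dy = py - head_pos[1]
        dz = pz - head_pos[2]
        dmag = math.sqrt(dx * dx + dy * dy + dz * dz)
        cos_a = (gx * dx + gy * dy + gz * dz) / (gmag * dmag)
        angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_a))))
        if angle < best_angle:  # closest drawer to the centre of view wins
            best, best_angle = name, angle
    return best
```

Picking only the single best candidate avoids the clutter of every nearby drawer lighting up at once, which matches what we see Tom doing: centering one drawer at a time.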
The fourth drawer Tom looks at is labeled “Malaysia”. He touches it with the gloved hand and it slides out from the wall.
Tom withdraws his hand as the drawer slides open. Disclosure (1994)
Inside are five “folders” which, again, are opened by touching. The folder slides up, and then three sheets, each looking like a printed document, slide up and fan out.
Axis indicator on left, pointing down. One document sliding up from a folder. Disclosure (1994)
Note the tilted axis indicator at the left. The Y axis, representing a line extending upwards from the top of Tom’s head, is now leaning towards the horizontal because Tom is looking down at the file drawer. In the shot below, both the folder and then the individual documents are moving up so Tom’s gaze is now back to more or less level.
Close up of three “pages” within a virtual document. Disclosure (1994)
At this point the film cuts away from Tom. Rival executive Meredith, having been foiled in her first attempt at discrediting Tom, has decided to cover her tracks by deleting all the incriminating files. Meredith enters her office and logs on to her Indy workstation. She is using a Command Line Interface (CLI) shell, not the standard SGI Unix shell but a custom Digicom program that also has a graphical menu. (Since it isn’t three dimensional it isn’t interesting enough to show here.)
Tom uses the gloved hand to push the sheets one by one to the side after scanning the content.
Tom scrolling through the pages of one folder by swiping with two fingers. Disclosure (1994)
Quick note: This is harder than it looks in virtual reality. In a 2D GUI moving the mouse over an interface element is obvious. In three dimensions the user also has to move their hand forwards or backwards to get their hand (or finger) in the right place, and unless there is some kind of haptic feedback it isn’t obvious to the user that they’ve made contact.
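The difference is easy to make concrete: a 2D pointer test checks only x and y, while the VR version also has to check depth, and without haptics there is no cue that the fingertip has crossed that third threshold. A generic sketch (coordinate units and tolerances are invented):

```python
def hit_2d(cursor, target, tolerance=0.02):
    """2D GUI: the pointer is 'on' an element if x and y are close enough."""
    return all(abs(c - t) <= tolerance for c, t in zip(cursor[:2], target[:2]))

def hit_3d(fingertip, target, tolerance=0.02):
    """VR: the fingertip must also be at the right depth to make contact."""
    return all(abs(c - t) <= tolerance for c, t in zip(fingertip, target))
```

A fingertip perfectly aligned in x and y but a few centimetres short in depth registers in 2D and misses in 3D, and nothing tells the user which of the two cases they are in.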
Tom now receives a nasty surprise.
The shot below shows Tom’s photorealistic avatar at the left, standing in front of the open file cabinet. The green shape on the right is the avatar of Meredith who is logged in to a regular workstation. Without the laser scanners and cameras her avatar is a generic wireframe female humanoid with a face photograph stuck on top. This is excellent design, making The Corridor usable across a range of different hardware capabilities.
Tom sees the Meredith avatar appear. Disclosure (1994)
Why does The Corridor system place her avatar here? A multiuser computer system, or even just a networked file server, obviously has to know who is logged on. Unix systems in general and command line shells also track which directory the user is “in”, the current working directory. Meredith is using her CLI interface to delete files in a particular directory so The Corridor can position her avatar in the corresponding virtual reality location. Or rather, the avatar glides into position rather than suddenly popping into existence: Tom is only surprised because the documents blocked his virtual view.
Quick note: While this is plausible, there are technical complications. Command line users often open more than one shell at a time, in different directories. In such a case, what would The Corridor do? Duplicate the wireframe avatar in each location? In the real world we can’t be in more than one place at a time; would doing so contradict the virtual reality metaphor?
There is an asymmetry here in that Tom knows Meredith is “in the system” but not vice versa. Meredith could in theory use CLI commands to find out who else is logged on and whether anyone was running The Corridor, but she would need to actively seek out that information and has no reason to do so. It didn’t occur to Tom either, but he doesn’t need to think about it: the virtual reality environment conveys more information about the system by default.
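For what it’s worth, the information The Corridor would need really is available on a Unix system: the utmp database (or the who command) lists logged-in users, and each shell process has a current working directory (readable on Linux via /proc, for instance). Mapping that directory to a virtual location could then be a longest-prefix lookup. A sketch, with the directory-to-room mapping entirely invented:

```python
import os.path

def avatar_room(cwd, room_map):
    """Map a user's current working directory to a virtual room.

    room_map: {directory prefix: room name}. The longest matching
    prefix wins, so a deeper directory beats its parent.
    """
    cwd = os.path.normpath(cwd)
    best = None
    for prefix, room in room_map.items():
        p = os.path.normpath(prefix)
        if cwd == p or cwd.startswith(p + os.sep):
            if best is None or len(p) > best[0]:
                best = (len(p), room)
    return best[1] if best else None

# Hypothetical mapping of server directories to Corridor locations:
rooms = {
    "/corp/operations": "Operations room",
    "/corp/operations/malaysia": "Malaysia drawer",
}
```

A real implementation would still face the multiple-shells question from the quick note above: one avatar per shell, or one per user at the most recently active location.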
We briefly cut away to Meredith confirming her CLI delete command. Tom sees this as the file drawer lid emitting beams of light which rotate down. These beams first erase the floating sheets, then the folders in the drawer. The drawer itself now has a red “DELETED” label and slides back into the wall.
Tom watches Meredith deleting the files in an open drawer. Disclosure (1994)
Tom steps further into the room. The same red labels appear on the other file drawers even though they are currently closed.
Tom watches Meredith deleting other, unopened, drawers. Disclosure (1994)
Talking to an Angel
Tom now switches to using the system voice interface, saying “Angel I need help” to bring up the virtual reality assistant. Like everything else we’ve seen in this VR system the “angel” rezzes up from a point cloud, although much more quickly than the architecture: people who need help tend to be more impatient and less interested in pausing to admire special effects.
The voice assistant as it appears within VR. Disclosure (1994)
Just in case the user is now looking in the wrong direction the angel also announces “Help is here” in a very natural sounding voice.
The angel is rendered with white robe, halo, harp, and rapidly beating wings. This is horribly clichéd, but a help system needs to be reassuring in appearance as well as function. An angel appearing as a winged flying serpent or wheel of fire would be more original and authentic (yes, really: Biblically Accurate Angels) but users fleeing in terror would seriously impact the customer satisfaction scores.
Now Tom has a short but interesting conversation with the angel, beginning with a question:
Tom
Is there any way to stop these files from being deleted?
Angel
I’m sorry, you are not level five.
Tom
Angel, you’re supposed to protect the files!
Angel
Access control is restricted to level five.
Tom has made the mistake, as described in chapter 9 Anthropomorphism of the book, of ascribing more agency to this software program than it actually has. He thinks he is engaged in conversation (chapter 6 Sonic Interfaces) with a fully autonomous system, which should therefore be interested in and care about the wellbeing of the entire system. Which it doesn’t, because this is just a limited-command voice interface to a guide.
Even though this is obviously scripted rather than a genuine error, I think this raises an interesting question for real world interface designers: do users expect that an interface with higher visual quality/fidelity will be more realistic in other aspects as well? If a voice assistant appears as a simple polyhedron with no attempt at photorealism (say, like Bit in Tron) or with zoomorphism (say, like the search bear in Until the End of the World), will users adjust their expectations for speech recognition downwards? I’m not aware of any research that might answer this question. Readers?
Despite Tom’s frustration, the angel has given an excellent answer – for a guide. A very simple help program would have recited the command(s) that could be used to protect files against deletion. Which would have frustrated Tom even more when he tried to use one and got some kind of permission denied error. This program has checked whether the user can actually use commands before responding.
This does contradict the earlier VR demonstration where we were told that the user had unlimited access. I would explain this as being “unlimited read access, not write”, but the presenter didn’t think it worthwhile to go into such detail for the mostly non-technical audience.
Tom is now aware that he is under even more time pressure as the Meredith avatar is still moving around the room. Realising his mistake, he uses the voice interface as a query language.
“Show me all communications with Malaysia.” “Telephone or video?” “Video.”
This brings up a more conventional looking GUI window because not everything in virtual reality needs to be three-dimensional. It’s always tempting for a 3D programmer to re-implement everything, but it’s also possible to embed 2D GUI applications into a virtual world.
Tom looks at a conventional 2D display of file icons inside VR. Disclosure (1994)
The window shows a thumbnail icon for each recorded video conference call. This isn’t very helpful, so Tom again decides that a voice query will be much faster than looking at each one in turn.
“Show me, uh, the last transmission involving Meredith.”
There’s a short 2D transition effect swapping the thumbnail icon display for the video call itself, which starts playing at just the right point for plot purposes.
Tom watches a previously recorded video call made by Meredith (right). Disclosure (1994)
While Tom is watching and listening, Meredith is still typing commands. The camera orbits around behind the video conference call window so we can see the Meredith avatar approach, which also shows us that this window is slightly three dimensional, the content floating a short distance in front of the frame. The film then cuts away briefly to show Meredith confirming her “kill all” command. The video conference recordings are deleted, including the one Tom is watching.
Tom is informed that Meredith (seen here in the background as a wireframe avatar) is deleting the video call. Disclosure (1994)
This is also the moment when the downstairs VIPs return to the hotel suite, so the scene ends with Tom managing to sneak out without being detected.
Virtual reality has saved the day for Tom. The documents and video conference calls have been deleted by Meredith, but he knows that they once existed and has a colleague retrieve the files he needs from the backup tapes. (Which is good writing: the majority of companies shown in film and TV never seem to have backups for files, no matter how vital.) Meredith doesn’t know that he knows, so he has the upper hand to expose her plot.
Analysis
How believable is the interface?
I won’t spend much time on the hardware, since our focus is on file browsing in three dimensions. From top to bottom, the virtual reality system starts as believable and becomes less so.
Hardware
The headset and glove look like real VR equipment, believable in 1994 and still so today. Having only one glove is unusual, and makes impossible some of the common gesture actions described in chapter 5 of Make It So, which require both hands.
The “laser scanners” that create the 3D geometry and texture maps for the 3D avatar and perform real time body tracking would more likely be cameras, but that would not sound as cool.
And lastly the walking platform apparently requires our user to stand on large marbles or ball bearings and stay balanced while wearing a headset. Uh…maybe…no. Apologetics fails me. To me it looks like it would be uncomfortable to walk on, almost like deterrent paving.
Software
The Corridor, unlike the 3D file browser used in Jurassic Park, is a special effect created for the film. It was a mostly-plausible, near future system in 1994, except for the photorealistic avatar. Usually this site doesn’t discuss historical context (the “new criticism” stance), but I think in this case it helps to explain how this interface would have appeared to audiences almost two decades ago.
I’ll start with the 3D graphics of the virtual building. My initial impression was that The Corridor could have been created as an interactive program in 1994, but that was my memory compressing the decade. During the 1990s 3D computer graphics, both interactive and CGI, improved at a phenomenal rate. The virtual building would not have been interactive in 1994, was possible on the most powerful systems six years later in 2000, and looks rather old-fashioned compared to what the game consoles of the 21st C can achieve.
For the voice interface I made the opposite mistake. Voice interfaces on phones and home computing appliances have become common in the second decade of the 21st C, but in reality are much older. Apple Macintosh computers in 1994 had text-to-speech synthesis with natural sounding voices and limited vocabulary voice command recognition. (And without needing an Internet connection!) So the voice interface in the scene is believable.
The multi-user aspects of The Corridor were possible in 1994. The wireframe avatars for users not in virtual reality are unflattering or perhaps creepy, but not technically difficult. As a first iteration of a prototype system it’s a good attempt to span a range of hardware capabilities.
The virtual reality avatar, though, is not believable for the 1990s and would be difficult today. Photographs of the body, made during the startup scan, could be used as a texture map for the VR avatar. But live video of the face would be much more difficult, especially when the face is partly obscured by a headset.
How well does the interface inform the narrative of the story?
The virtual reality system in itself is useful to the overall narrative because it makes the Digicom company seem high tech. Even in 1994 CD-ROM drives weren’t very interesting.
The Corridor is essential to the tension of the scene where Tom uses it to find the files, because otherwise the scene would be much shorter and really boring. If we ignore the virtual reality these are the interface actions:
Tom reads an email.
Meredith deletes the folder containing those emails.
Tom finds a folder full of recorded video calls.
Tom watches one recorded video call.
Meredith deletes the folder containing the video calls.
Imagine how this would have looked if both were using a conventional 2D GUI, such as the Macintosh Finder or MS Windows Explorer. Double click, press and drag, double click…done.
The Corridor slows down Tom’s actions and makes them far more visible and understandable. Thanks to the virtual reality avatar we don’t have to watch an actor push a mouse around. We see him move and swipe, be surprised and react; and the voice interface adds extra emotion and some useful exposition. It also helps with the plot, giving Tom awareness of what Meredith is doing without having to actively spy on her, or look at some kind of logs or recordings later on.
Meredith, though, can’t use the VR system because then she’d be aware of Tom as well. Using a conventional workstation visually distinguishes and separates Meredith from Tom in the scene.
So overall, though the “action” is pretty mundane, it’s crucial to the plot, and the VR interface helps make this interesting and more engaging.
How well does the interface equip the character to achieve their goals?
As described in the film itself, The Corridor is a prototype for demonstrating virtual reality. As a file browser it’s awful, but since Tom has lost all his normal privileges this is the only system available, and he does manage to eventually find the files he needs.
At the start of the scene, Tom spends quite some time wandering around a vast multi-storey building without a map, room numbers, or even coordinates overlaid on his virtual view. Which seems rather pointless because all the files are in one room anyway. As previously discussed for Johnny Mnemonic, walking or flying everywhere in your file system seems like a good idea at first, but often becomes tedious over time. Many actual and some fictional 3D worlds give users the ability to teleport directly to any desired location.
Then the file drawers in each cabinet have no labels either, so Tom has to look carefully at each one in turn. There is so much more the interface could be doing to help him with his task, and even help the users of the VR demo learn and explore its technology as well.
Contrast this with Meredith, who uses her command line interface and 2D GUI to go through files like a chainsaw.
Tom becomes much more efficient with the voice interface. Which is just as well, because if he hadn’t, Meredith would have deleted the video conference recordings while he was still staring at virtual filing cabinets. However neither the voice interface nor the corresponding file display need three dimensional graphics.
There is hope for version 2.0 of The Corridor, even restricting ourselves to 1994 capabilities. The first and most obvious improvement is to copy 2D GUI file browsers, or the 3D file browser from Jurassic Park, and show the corresponding text name next to each graphical file or folder object. The voice interface is so good that it should be turned on by default without requiring the angel. And finally, add some kind of map overlay with a moving “you are here” dot, like the maps that players of 3D games such as Doom could display with a keystroke.
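That map overlay is the simplest of these to sketch: project the user’s world-space position into the minimap’s pixel space. The world bounds and map size below are invented for illustration:

```python
def world_to_map(pos, world_min, world_max, map_size):
    """Project a world-space (x, z) position onto a 2D minimap.

    world_min / world_max: opposite corners of the floor plan, in
    world units. map_size: (width, height) of the minimap in pixels.
    """
    x, z = pos
    u = (x - world_min[0]) / (world_max[0] - world_min[0])
    v = (z - world_min[1]) / (world_max[1] - world_min[1])
    return (round(u * map_size[0]), round(v * map_size[1]))
```

Redraw the dot each frame at `world_to_map` of the avatar’s position and Tom never nearly walks off an internal air well again. (Positions outside the floor plan would need clamping, omitted here.)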
Film making challenge: VR on screen
Virtual reality (or augmented reality systems such as Hololens) provides a better viewing experience for 3D graphics by creating the illusion of real three dimensional space rather than a 2D monitor. But it is always a first person view, and unlike with conventional 2D monitors, nobody else can usually see what the VR user is seeing without a deliberate mirroring/debugging display. This is an important difference from other advanced or speculative technologies that film makers might choose to include. Showing a character wielding a laser pistol instead of a revolver, or driving a hover car instead of a wheeled car, hardly changes how to stage a scene, but VR does.
So, how can we show virtual reality in film?
There’s the first-person view corresponding to what the virtual reality user is seeing themselves. (Well, half of what they see, since it’s not stereoscopic, but it’s cinema VR, so close enough.) This is like watching a screencast of someone else playing a first person computer game, the original active experience of the user becoming passive viewing by the audience. Most people can imagine themselves in the driving seat of a car and thus make sense of the turns and changes of speed in a first person car chase, but the film audience probably won’t be familiar with the VR system depicted and will therefore have trouble understanding what is happening. There’s also the problem that viewing someone else’s first-person view, shifting and changing in response to their movements rather than your own, can make people disoriented or nauseated.
A third-person view is better for showing the audience the character and the context in which they act. But not the diegetic real-world third-person view, which would be the character wearing a geeky headset and poking at invisible objects. As seen in Disclosure, the third person view should be within the virtual reality.
But in doing that, now there is a new problem: the avatar in virtual reality representing the real character. If the avatar is too simple the audience may not identify it with the real world character and it will be difficult to show body language and emotion. More realistic CGI avatars are increasingly expensive and risk falling into the Uncanny Valley. Since these films are science fiction rather than factual, the easy solution is to declare that virtual reality has achieved the goal of being entirely photorealistic and just film real actors and sets. Adding the occasional ripple or blur to the real world footage to remind the audience that it’s meant to be virtual reality, again as seen in Disclosure, is relatively cheap and quick. So, solving all these problems results in the cinematic trope we can call Extradiegetic Avatars, which are third-person, highly-lifelike “renderings” of characters, with a telltale Hologram Projection Imperfection for audience readability, that may or may not be possible within the world of the film itself.
All of these build on the given that vibranium is a very powerful substance and that Wakanda’s scientists have managed to gain a very, very sophisticated control over it.
In the Talon
This table is about a meter square and raised off the floor to around knee height. As Okoye and T’Challa approach the traffickers in the Sambisa Forest, T’Challa approaches the table and it springs to life, showing him a real-time model of the traffickers’ vehicle train. T’Challa picks up the model of the small transport truck and, with a finger, wipes off its roof, revealing that there are over a dozen people huddled within. One of the figures glows amber. (It’s Nakia.) He places the truck back into the display, and the display collapses back to inert sand.
A quick critique of this interaction. The sand highlights Nakia for T’Challa, but why did it wait for him to find her truck and wipe off the top of it to look inside? It knew his goal (find Nakia), could clearly scan inside the vehicles, and understood the context (she’s in one of those trucks), so it should not have waited for him to pick up each vehicle and scrape off its roof to check which one she was in. The interface should have drawn his attention to the truck it knew she was in. This is a “stoic guru” mistake that I’ve critiqued before. You know: the computer knows all, but only tells you when you ask it. It would be much more sensible for the transport truck to be glowing from the moment the table goes live, as in the comp below.
Designers: Don’t wait for users to ask just the right thing at the right time.
Otherwise, this is a good high-tech take on the more common meaning of “sand table”: a 3-dimensional surface for understanding a theatre of conflict. It doesn’t really help him run through scenarios, testing various tactics, but T’Challa is a warrior king; he can do all that in his head.
The interaction also nicely blurs the line between display and gestural interactive tool, in the same way that the Prometheus astrometrics display did. Like that other example, it would be useful for the display to distinguish when it is representing reality, and when the display is being interrupted or modified. Also, T’Challa is nice enough to put the truck back where it “belongs,” but a design would also need to handle how to respond when T’Challa put the truck back in the wrong place, or, say, crushed the truck model with his hand in fury.
In Prometheus it was an Earth, not a truck, but still focused on Africa.
Shuri’s lab
The largest table we see in the movie is in Shuri’s lab. After Black Panther challenges Killmonger and engages in battle outside the capital city, Shuri, Nakia, and Agent Ross rush down to the lab. As they approach an edge-lit hexagonal table, the vibranium sand lowers to reveal 3D-printed armor and weaponry for Shuri and Nakia to join the fight. (Though it’s not like modern 3D printing: these are powered weapons and kimoyo beads, items with very sophisticated functionality.)
Shuri outfits Ross with kimoyo beads from the print and takes off to join the fight. In the lab, the table creates a seat for Ross to remote-pilot the Royal Talon. Up on the flight deck, Shuri throws a control bead onto the Talon, and an AI in the lab named Griot announces to Agent Ross, “Remote piloting system activated.” (Hey, Trevor Noah, we hear you there!)
A volumetric projection of the Talon appears around the seat, including a 360° display just beyond the windshield that gives him a very immersive remote flying experience. We hear Shuri’s voice explain to Ross, “I made it American style for you. Get in!”
Ross sits down, grabs joystick controls, and begins remote-chasing down the cargo ships that are carrying munitions to Killmonger’s War Dogs around the world. (The piloting controls and HUD for Ross are a separate issue, and will be handled in their own post.)
The moment that Ross pilots the Talon through the last cargo ship, the volumetric projection disappears and the piloting seat returns to sand, ungraciously plopping Ross down to the floor of the lab.
It is in this shot that we realize that the dark tiles of the lab’s floor are all recessed vibranium sand tables. I can count seven in the shot. So the lab is full of them.
Display material
Let’s talk for a bit about the display choices. Vibranium can change to display any color and a shape down to a fine level of detail. See the screen cap below for an example of perfectly lifelike (if scaled) representation.
This is a vibranium-powered volumetric display. It raises the gaze-matching issues we’ve seen before.
So why would it be designed so that in most cases, the display is sparkly and black like black tourmaline? Wouldn’t the truck that T’Challa picks up be most useful if it was photographically rendered? Wouldn’t the remote piloting chair be more comfortable if it had pleather- and silicone-like surfaces?
Extradiegetically, I understand the reason is because art direction. We want Wakandan tech to be visibly different from other tech in the MCU, and having it look like vibranium dust ties it back to that key plot element.
But, per the stance of this blog, I try to look for a diegetic reason. It might be a deliberate reminder of the resource on which their technological fortunes are built. And as the Okoye VP above shows, they aren’t purists about it. When detail is needed, it’s included. So perhaps this is it. That implies a great deal of sophistication on the part of the displays to know when photorealism is needed and when it is not, but the presence of Griot there tells us that they have something approaching general AI.
Missing interactions
So, just like I had to do for the Royal Talon, I have to throw my hands up about reviewing the interactions with the sand tables, because we don’t see the interactions that would give these results.
How were the mission goals communicated to the Royal Talon table? Is it programmed to activate when someone approaches it, or did T’Challa issue a mental command? How did Shuri specify those weapons and that armor? What did she do to make the ship “American style” for Ross? Is that a template? Was it Griot’s interpretation of her intention? Why did the remote piloting seat vanish the moment the mission was complete? Was this something Shuri set up in advance, or Griot’s way of telling Agent Ross to GTFO for his own safety? How does someone in the lab instruct a floor tile to leap up and become a table and do stuff? It’s almost certainly via mental commands through the kimoyo beads, but that’s conjecture. The film really provides little evidence.
On the one hand, this is appropriate for us mere non-Wakandans observing the most technologically advanced society on earth. Much of it would feel like inexplicable magic to us.
On the other, sci-fi routinely introduces us to advanced technologies, and doesn’t always eschew the explanatory interactions, so the absence is notable here. It’s magic.
Black Lives Matter
Each post in the Black Panther review is followed by actions that you can take to support black lives.
In the last post we grieved Chadwick Boseman’s passing. This week we’re grieving the loss of Ruth Bader Ginsburg. May her memory be a blessing. With her loss, the GOP is ratcheting up its outrageous hypocrisy by reversing a precedent that they themselves established when Obama was president. The “Moscow Mitch Rule” (oh, oops, sorry) “McConnell Rule” was that new Justices should not be appointed within a year of a general election, so the people’s voice can be taken into account. Of course, the bastards are just ignoring that now and trying to ram through one of their own before election day. This Justice will certainly be a conservative, and we know with this administration that means reactionary, loyal to tiny-hand Twittler, and racist as a Jim Crow law.
There are a few arrows in citizens’ quivers to stop this. One is to convince at least 4 Republican Senators to reject this outright hypocrisy, put country over party, and adhere to the McConnell rule.
To help put pressure where it might work, you can leave voicemails with Republican Senators who may be mulling whether to put country over party. Those 6 Senators’ names and numbers are below. Here’s a script for your message:
Hello, my name is ______. In 2016, Mitch McConnell created the principle of not confirming a Supreme Court Justice in an election year until after the next inauguration. For the legitimacy of the Court in the eyes of the people, I’m asking Senator ________ to uphold that principle by refusing to confirm a new Justice until after a new President is installed. Thank you.
—You, hopefully
Lisa Murkowski, Alaska; (202) 224-6665
Mitt Romney, Utah: (202) 224-5251
Susan Collins, Maine: (202) 224-2523
Martha McSally, Arizona: (202) 224-2235
Cory Gardner, Colorado: (202) 224-5941
Chuck Grassley, Iowa: (202) 224-3744
I’ve made my calls and left my messages. Can you do the same to stop the hypocritical Trumpian power grab that would tip the Supreme Court for generations?
“Tunnel in the Sky” is the name of a 1955 Robert Heinlein novel that has nothing to do with this post. It is also the title of the following illustration by Muscovite digital artist Vladimir Manyukhin, which also has nothing to do with this post, but is gorgeous and evocative, and included here solely for visual interest.
Instead, this post is about the piloting display of the same name, and written specifically to sci-fi interface designers.
Last week in reviewing the spinners in Blade Runner, I included mention and a passing critique of the tunnel-in-the-sky display that sits in front of the pilot. While publishing, I realized that I’d seen this a handful of other times in sci-fi, and so I decided to do more focused (read: Internet) research about it. Turns out it’s a real thing, and it’s been studied and refined a lot over the past 60 years, and there are some important details to getting one right.
Though I looked at a lot of sources for this article, I must give a shout-out to Max Mulder of TU Delft. (Hallo, TU Delft!) Mulder’s 1999 PhD thesis on the subject is truly a marvel of research and analysis, and it pulls in one of my favorite nerd topics: cybernetics. Throughout this post I rely heavily on his paper, and you could go down many worse rabbit holes than cybernetics. n.b., it is not about cyborgs. Per se. Thank you, Max.
I’m going to breeze through the history, issues, and elements from the perspective of sci-fi interfaces, and then return to the three examples in the survey. If you want to go really in depth on the topic (and encounter awesome words like “psychophysics” and “egomotion” in their natural habitat), Mulder’s paper is available online for free from researchgate.net: “Cybernetics of Tunnel-in-the-Sky Displays.”
What the heck is it?
A tunnel-in-the-sky display assists pilots, helping them know where their aircraft is in relation to an ideal flight path. It consists of a set of similar shapes projected out into 3D space, circumscribing the ideal path. The pilot monitors their aircraft’s trajectory through this tunnel, and makes course corrections as they fly to keep themselves near its center.
This example comes from Michael P. Snow, as part of his “Flight Display Integration” paper, also on researchgate.net.
Please note that throughout this post, I will spell out the lengthy phrase “tunnel-in-the-sky” because the acronym is pointlessly distracting.
Quick History
In 1973, Volkmar Wilckens was a research engineer and experimental test pilot for the German Research and Testing Institute for Aerospace (now called the German Aerospace Center). He was doing a lot of thinking about flight safety in all-weather conditions, and came up with an idea. In his paper “Improvements In Pilot/Aircraft-Integration by Advanced Contact Analog Displays,” he sort of says, “Hey, it’s hard to put all the information from all the instruments together in your head and use that to fly, especially when you’re stressed out and flying conditions are crap. What if we took that data and rolled it up into a single easy-to-use display?” Figure 6 is his comp of just such a system. It was tested thoroughly in simulators and shown to improve pilot performance by making the key information (attitude, flight-path and position) perceivable rather than readable. It also enabled the pilot greater agency, by not having them just follow rules after instrument readings, but empowering them to navigate multiple variables within parameters to stay on target.
In Wilckens’ Fig. 6, above, you can see the basics of what would wind up on sci-fi screens decades later: shapes repeated into 3D space ahead of the aircraft to give the pilot a sense of an ideal path through the air. Stay in the tunnel and keep the plane safe.
Mulder notes that the next landmark developments come from the work of Arthur Grunwald & S. J. Merhav between 1976–1978. Their research illustrates the importance of augmenting the display and of including a preview of the aircraft in the display. They called this preview the Flight Path Predictor, or FPS. I’ve also seen it called the birdie in more modern papers, which is a lot more charming. It’s that plus symbol in the Grunwald illustration, below. Later in 1984, Grunwald also showed that a heads-up-display increased precision adhering to a curved path. So, HUDs good.
n.b. This is Mulder’s representation of Grunwald’s display format.
I have also seen lots of examples of—but cannot find the research provenance for—tools for helping the pilot stay centered, such as a “ghost” reticle at the center of each frame, or alternately brackets around the FPS, called the Flight Director Box, that the pilot can align to the corners of the frames. (I’ll just reference the brackets. Gestalt be damned!) The value of the birdie combined with the brackets seems very great, so though I can’t cite their inventor, and it wasn’t in Mulder’s thesis, I’ll include them as canon.
The takeaway is really that these displays have a rich and well-studied history. It is a high-confidence pattern.
Elements of an archetypical tunnel-in-the-sky display
There are lots of nuances that have been studied for these displays. Take, for example, the effect that angling the frames has on pilot banking, and the perfect time offset to nudge pilot behavior closer to ideal banking. For the purposes of sci-fi interfaces, however, we can reduce the critical components of the real-world pattern down to four.
Square shapes (called frames) extending into the distance that describe an ideal path through space
The frame should be about five times the width of the craft. (The birdie you see below is not proportional and I don’t think it’s standard that they are.)
The distances between frames will change with speed, but be set such that the pilot encounters a new one every three seconds.
The frames should adopt perspective as if they were in the world, being perpendicular to the flight path. They should not face the display.
The frames should tilt, or bank, on curves.
The tunnel only needs to extend so far, about 20 seconds ahead in the flight path. This makes for about 6 frames visible at a time.
An aircraft reference symbol or Flight Path Predictor Symbol (FPS, or “birdie”) that predicts where the plane will be when it meets the position of the nearest frame. It can appear off-facing in relation to the cockpit.
These are often rendered as two L shapes turned base-to-base with some space between them. (See one such symbol in the Snow example above.)
Sometimes (and more intuitively, imho) as a circle with short lines extending out the sides and the top. Like a cartoon butt of a plane. (See below.)
Contour lines connect matching corners across frames
A horizon line
This comp illustrates those critical features.
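The sizing rules above reduce to simple arithmetic. Here is a minimal sketch (my own parameterization for illustration, not taken from Mulder’s thesis or any of the cited papers) that computes frame width and frame positions from airspeed:

```python
# Sketch: tunnel-in-the-sky frame geometry from the rules above.
# Assumptions (mine, for illustration): speed in m/s, craft width in meters.

def tunnel_frames(speed_mps, craft_width_m,
                  seconds_per_frame=3.0, lookahead_s=20.0):
    """Return (frame width, list of frame distances ahead, in meters)."""
    frame_width = 5.0 * craft_width_m              # frames ~5x the craft's width
    spacing = speed_mps * seconds_per_frame        # one new frame every 3 seconds
    count = int(lookahead_s // seconds_per_frame)  # ~20 s of tunnel visible
    distances = [spacing * (i + 1) for i in range(count)]
    return frame_width, distances

# A 30 m wide craft at 120 m/s:
width, dists = tunnel_frames(120.0, 30.0)
# width -> 150.0 m; 6 frames at 360, 720, 1080, 1440, 1800, 2160 m
```

Note how the 20-second lookahead divided by the 3-second spacing is exactly where the “about 6 frames visible at a time” figure comes from.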
There are of course lots of other bits of information that a pilot needs. Altitude and speed, for example. If you’re feeling ambitious, and want more than those four, there are other details directly related to steering that may help a pilot.
Degree-of-vertical-deviation indicator at a side edge
Degree-of-horizontal-deviation indicator at the top edge
Center-of-frame indicator, such as a reticle, appearing in the upcoming frame
A path predictor
Some sense of objects in the environment: If the display is a heads-up display, this can be a live view. If it is a separate screen, some stylized representation of what the pilot would see if the display were superimposed onto their view.
What the risk is when off path: Just fuel? Passenger comfort? This is most important if that risk is imminent (collision with another craft, mountain) but then we’re starting to get agentive and I said we wouldn’t go there, so *crumples up paper, tosses it*.
I haven’t seen a study showing efficacy of color and shading and line scale to provide additional cues, but look closely at that comp and you’ll see…
The background has been level-adjusted to increase contrast with the heads-up display
A dark outline around the white birdie and brackets to help visually distinguish them from the green lines and the clouds
A shadow under the birdie and brackets onto the frames and contours as an additional signal of 3D position
Contour lines diminishing in size as they extend into the distance, adding an additional perspective cue and limiting the amount of contour to the 20 second extents.
Some other interface elements added.
What can you play with when designing one in sci-fi?
Everything, of course. Signaling future-ness means extending known patterns, and sci-fi doesn’t answer to usability. Extend for story, extend for spectacle, extend for overwhelmedness. You know your job better than me. But if you want to keep a foot in believability, you should understand the point of each thing as you modify it and try not to lose that.
Each frame serves as a mini-game, challenging the pilot to meet its center. Once that frame passes, that game is done and the next one is the new goal. Frames describe the near term. Having corners to the frame shape helps convey banking better. Circles would hide banking.
Contour lines, if well designed, help describe the overall path and disambiguate the stack of frames. (As does lighting and shading and careful visual design, see above.) Contour lines convey the shape of the overall path and help guide steering between frames. Kind of like how you’d need to see the whole curve before drifting your car through one, the contour lines help the pilot plan for the near future.
The birdie and brackets are what a pilot uses to know how close to the center they are. The birdie needs a center point. The brackets need to match the corners of the frame. Without these, it’s easier to drift off center.
A horizon line provides feedback for when the plane is banked.
THIS IS BAD: You can kill the sense of the display by altering (or in this case, omitting) too much.
Since I mentioned that each frame acts as a mini-game, a word of caution: just as you should be skeptical when looking to sci-fi, you should be skeptical when looking to games for their interfaces. The simulator most known for accuracy (Microsoft Flight Simulator) doesn’t appear to have a tunnel-in-the-sky display, and other categories of games may be optimizing less for usability than for plain fun, with crashing your virtual craft just being part of the game. That’s not an acceptable outcome in real-world piloting. So be cautious about using game interfaces as models for this, too.
This clip of stall-testing in the forthcoming MSFS2020 still doesn’t appear to show one.
Three examples from sci-fi
So with those ideal components in mind, let’s look back at those three examples in the survey.
Quick aside on the Blade Runner interface: the spikes at the top and bottom of the frame help in straight tunnels, serving as a horizontal degree-of-deviation indicator. They would not help as much in curved tunnels, and there is no matching vertical degree-of-deviation indicator. Unless that’s handled automatically, like a car on a road, its absence is notable.
Starship Troopers (1997) We only get 15 frames of this interface in Starship Troopers, as Ibanez pilots the escape shuttle to the surface of Planet P. It is very jarring to see as a repeating gif, so accept this still image instead.
Some obvious things we see missing from all of them are the birdie, the box, and the contour lines. Why is this? My guess is that the computational power available in 1976 was not enough to manage those extra lines, and Ridley Scott just went with the frames. Then, once the trope had been established in a blockbuster, designers just kept repeating it rather than looking to see how it worked in the real world, or taking the time to work through the interaction logic. So let me say:
Without the birdie and box, the pilot has far too much leeway to make mistakes. And in sci-fi contexts, where the tunnel-in-the-sky display is shown mostly during critical ship maneuvers, their absence is glaring.
Also, the lack of contour lines might not seem as important, since the screens typically aren’t shown for very long, but when paths twist in crazy ways, contour lines would quickly signal the difficulty of the task ahead of the pilot.
Note that sci-fi will almost certainly encounter problems that real-world researchers will not have needed to consider, and so there’s plenty of room for imagination and additional design. Imagine helping a pilot…
Navigating the weird spacetime around a singularity
Bouncing close to a supernova while in hyperspace
Dodging chunks of spaceship, the bodies of your fallen comrades, and rising plasma bombs as you pilot shuttlecraft to safety on the planet below
Working with ship AI that can predict complex flight paths, modify them in real time, or even assist with it all
Needing to have the tunnel be occluded by objects visible in a heads up display, such as when a pilot is maneuvering amongst an impossibly-dense asteroid field.
…to name a few off my head. These things don’t happen in the real world, so would be novel design challenges for the sci-fi interface designer.
So, now we have a deeper basis for discussing, critiquing, and designing sci-fi tunnel-in-the-sky displays. If you are an aeronautic engineer, and have some more detail, let me hear it! I’d love for this to be a good general reference for sci-fi interface designers.
If you are a fan, and can provide other examples in the comments, it would be great to see other ones to compare.
Happy flying, and see you back in Blade Runner in the next post.
Perhaps the most unusual interface in the film is a game seen when Theo visits his cousin Nigel for a meal and to ask for a favor. Nigel’s son Alex sits at the table silent and distant, his attention on a strange game that its designer, Mark Coleran, tells me is called “Kubris,” a 3D hybrid of Tetris and Rubik’s Cube.
Alex operates the game by twitching and sliding his fingers in the air. With each twitch a small twang is heard. He suspends his hand a bit above the table to have room. His finger movements are tracked by thin black wires that extend from small plastic discs at his fingertips back to a device worn on his wrist. This device looks like a streamlined digital watch, but where the face of a clock would be are a set of multicolored LEDs arranged in rows. These LEDs flicker on and off in inscrutable patterns, but clearly showing some state of the game. There is an inset LED block that also displays an increasing score.
The game also features a small, transparent, flat screen that rests on the table in front of him. It displays a computer-generated cube, similar to a 5×5 Rubik’s Cube, made up of smaller transparent cubes that share colors with the LEDs on his wrist. As Alex plays, he changes the orientation of the cube, and positions smaller cubes along the surface of the larger.
Alex plays this game continually during the course of the scene. He is so engrossed in it that when Nigel asks him twice to take his pills, he doesn’t even register the instruction. Nigel must yell at him to get Alex to comply.
Though the exact workings of the game are a mystery, it serves to illustrate in a technological way how some of the younger people in 2027 disengage from the horror of the world through games that have been designed for addiction and obsession.
Once Johnny has installed his motion detector on the door, the brain upload can begin.
3. Building it
Johnny starts by opening his briefcase and removing various components, which he connects together into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless looking pieces.
It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one piece computing devices with integrated functionality, and keyboards, mice, cameras, printers, and headphones that connect wirelessly.
Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s, most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick, and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces.
The opening shot of Johnny Mnemonic is a brightly coloured 3D graphical environment. It looks like an abstract cityscape, with buildings arranged in a rectangular grid and various 3D icons or avatars flying around. Text identifies this as the Internet of 2021, now cyberspace.
Strictly speaking this shot is not an interface. It is a visualization from the point of view of a calendar wake up reminder, which flies through cyberspace, then down a cable, to appear on a wall mounted screen in Johnny’s hotel suite. However, we will see later on that this is exactly the same graphical representation used by humans. As the very first scene of the film, it is important in establishing what the Internet looks like in this future world. It’s therefore worth discussing the “look” employed here, even though there isn’t any interaction.
Cyberspace is usually equated with 3D graphics and virtual reality in particular. Yet when you look into what is necessary to implement cyberspace, the graphics really aren’t that important.
MUDs and MOOs: ASCII Cyberspace
People have been building cyberspaces since the 1980s in the form of MUDs and MOOs. At first sight these look like old style games such as Adventure or Zork. To explore a MUD/MOO, you log on remotely using a terminal program. Every command and response is pure text, so typing “go north” might result in “You are in a church.” The difference between MUD/MOOs and Zork is that these are dynamic multiuser virtual worlds, not solitary-player games. Other people share the world with you and move through it, adventuring, building, or just chatting. Everyone has an avatar and every place has an appearance, but expressed in text as if you were reading a book.
guest>> @go #1914
Castle entrance
A cold and dark gatehouse, with moss-covered crumbling walls. A passage gives entry to the forbidding depths of Castle Aargh. You hear a strange bubbling sound and an occasional chuckle.
Obvious exits: path to Castle Aargh (#1871), enter to Bridge (#1916)
Most impressive of all, these are virtual worlds with built-in editing capabilities. All the “graphics” are plain text, and all the interactions, rules, and behaviours are programmed in a scripting language. The command line interface allows the equivalent of Emacs or VI to run, so the world and everything in it can be modified in real time by the participants. You don’t even have to restart the program. Here a character creates a new location within a MOO, to the “south” of the existing Town Square:
laranzu>> @dig MyNewHome
laranzu>> @describe here as "A large and spacious cave full of computers"
laranzu>> @dig north to Town Square
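To show how little machinery the text interface actually requires, here is a toy sketch of the room-and-exit model behind commands like these. (The rooms, names, and command syntax here are invented for illustration; real MOO scripting languages are far richer.)

```python
# Toy sketch of a MUD-style world: rooms, exits, and a command handler.
# Room names and descriptions are invented for illustration.

rooms = {
    "town square": {"desc": "You are in the Town Square.",
                    "exits": {"north": "church"}},
    "church":      {"desc": "You are in a church.",
                    "exits": {"south": "town square"}},
}

def handle(command, location):
    """Process one typed command; return (response, new_location)."""
    words = command.split()
    if words[0] == "go" and words[1] in rooms[location]["exits"]:
        dest = rooms[location]["exits"][words[1]]
        return rooms[dest]["desc"], dest
    if words[0] == "dig":  # build a new room at runtime, MOO-style
        direction, name = words[1], " ".join(words[2:])
        rooms[name] = {"desc": f"You are in {name}.", "exits": {}}
        rooms[location]["exits"][direction] = name
        return f"Dug {name} to the {direction}.", location
    return "Huh?", location

reply, here = handle("go north", "town square")
# reply -> "You are in a church."
```

The point is that the world is just data plus a tiny parser; everything interesting lives in the shared, editable state, not in the rendering.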
The simplicity of the text interfaces leads people to think these are simple systems. They’re not. These cyberspaces have many of the legal complexities found in the real world. Can individuals be excluded from particular places? What can be done about abusive speech? How offensive can your public appearance be? Who is allowed to create new buildings, or modify existing ones? Is attacking an avatar a crime? Many 3D virtual reality system builders never progress that far, stopping when the graphics look good and the program rarely crashes. If you’re interested in cyberspace interface design, a long running textual cyberspace such as LambdaMOO or DragonMUD holds a wealth of experience about how to deal with all these messy human issues.
So why all the graphics?
So it turns out MUDs and MOOs are a rich, sprawling, complex cyberspace in text. Why then, in 1995, did we expect cyberspace to require 3D graphics anyway?
The 1980s saw two dimensional graphical user interfaces become well known with the Macintosh, and by the 1990s they were everywhere. The 1990s also saw high end 3D graphics systems becoming more common, the most prominent being from Silicon Graphics. It was clear that as prices came down personal computers would soon have similar capabilities.
At the time of Johnny Mnemonic, the world wide web had brought the Internet into everyday life. If web browsers with 2D GUIs were superior to the command line interfaces of telnet, FTP, and Gopher, surely a 3D cyberspace would be even better? Predictions of a 3D Internet were common in books such as Virtual Reality by Howard Rheingold and magazines such as Wired at the time. VRML, the Virtual Reality Markup/Modeling Language, was created in 1995 with the expectation that it would become the foundation for cyberspace, just as HTML had been the foundation of the world wide web.
Twenty years later, we know this didn’t happen. The solution to the unthinkable complexity of cyberspace was a return to the command line interface in the form of a Google search box.
Abstract or symbolic interfaces such as text command lines may look more intimidating or complicated than graphical systems. But if the graphical interface isn’t powerful enough to meet their needs, users will take the time to learn how the more complicated system works. And we’ll see later on that the cyberspace of Johnny Mnemonic is not purely graphical and does allow symbolic interaction.
Dradis is the primary system that the Galactica uses to detect friendly and enemy units beyond visual range. The console appears to have a range of at least one light-second (less than the distance from Earth to the Moon), but less than one light-minute (one-eighth the distance from Earth to the Sun).
How can we tell? We know that it’s less than one light minute because Galactica is shown orbiting a habitable planet around a sun-like star. Given our own solar system, we would have at least some indication of ships on the Dradis at that range and the combat happening there (which we hear over the radios). We don’t see those on the Dradis.
We know that it’s at least one light second because Galactica jumps into orbit (possibly geosynchronous) above a planet and is able to ‘clear’ the local space of that planet’s orbit with the Dradis.
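The light-second and light-minute bounds above are easy to sanity-check against real-world figures. A quick sketch (the speed of light and the Earth–Moon and Earth–Sun distances are real-world approximations, not anything from the show):

```python
# Rough arithmetic behind the "at least one light second, less than one
# light minute" range estimate. All figures are real-world approximations.
C_KM_PER_S = 299_792.458          # speed of light, km/s

light_second_km = C_KM_PER_S      # ~300,000 km
light_minute_km = C_KM_PER_S * 60 # ~18,000,000 km

EARTH_MOON_KM = 384_400           # mean Earth-Moon distance
EARTH_SUN_KM = 149_597_870        # 1 astronomical unit

# A light second is a bit less than the Earth-Moon distance (~78% of it),
# and a light minute is roughly one-eighth of the Earth-Sun distance.
print(f"light second / Earth-Moon: {light_second_km / EARTH_MOON_KM:.2f}")
print(f"Earth-Sun / light minute:  {EARTH_SUN_KM / light_minute_km:.2f}")
```

So the window between the two bounds spans nearly two orders of magnitude, which is why the on-screen evidence can pin the range only loosely.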
The sensor readings are automatically interpreted into Friendly contacts, Enemy contacts, and missiles, then displayed on a 2D screen emulating a hemisphere. A second version of the display shows a flat 2D view of the same information.
Friendly contacts are displayed in green, while enemy units (Cylons) are displayed in red. The color of the surrounding interface changes from orange to red when the Galactica moves to Alert Stations.
The Dradis is displayed on four identical screens above the Command Table, and is viewable from any point in the CIC. ‘Viewable’ here does not mean ‘readable’: the type and icons shown on the screen are barely large enough to be read by senior crew at the main table, let alone by officers in the second or third tier of seating (whose perspective we see here).
It is possible that these are simply overview screens to support more specific screens at individual officer stations, but we never see any evidence of this.
Whatever the situation, the Dradis displays need to be larger in order to be readable throughout the CIC, and officer stations need more specific screens focused on interpreting the Dradis.
As soon as a contact appears on the Dradis screen, someone (who appears to be the Intelligence Officer) in the CIC calls out the contact to reiterate the information and alert the rest of the CIC to the new contact. Vipers and Raptors are seen using a similar but less powerful version of the Galactica’s sensor suite and display. Civilian ships like Colonial One have an even less powerful or distinct radar system.
2D display of 3D information
The largest failing of the Dradis system is its representation of the hemisphere: we never appear to see the other half of the sphere, and missing half the data is serious. The Galactica should be at the center of a full bubble of information, instead of picking an arbitrary ‘ground plane’ and showing everything in a half-sphere above it (cutting out a large amount of available information).
The Dradis also suffers from a lack of context: contacts are displayed in three dimensions inside the view, but have only two dimensions of reference on the flat screen in the CIC. For a reference on an effective 3D display on a 2D screen, see the Sensor Manager from Homeworld (PC game, THQ and Relic):
The Sensor Manager can be rotated to allow different viewing angles depending on the user’s wishes, and it can display reference lines down to a ‘reference plane’ to show height above, and distance from, a known point. In Homeworld, this reference point is often the center of the selected group of units, but on the Dradis it would make sense for this reference point to be the Galactica herself.
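The reference-line technique is simple enough to sketch. Given a contact’s position relative to the ship, the display needs the foot of a drop line on the reference plane, the contact’s height above that plane, and its distance from the reference point. The names, units, and coordinate convention below are illustrative assumptions, not details from Homeworld or the show:

```python
import math
from dataclasses import dataclass

@dataclass
class Contact:
    x: float  # km along the reference plane ("forward", assumed convention)
    y: float  # km along the reference plane ("starboard", assumed convention)
    z: float  # km above (+) or below (-) the plane

def reference_line(c: Contact) -> tuple[tuple[float, float, float], float, float]:
    """Compute what a Homeworld-style reference line conveys for one contact.

    Returns the foot of the drop line on the reference plane (z = 0,
    passing through the ship), the contact's signed height above the
    plane, and its distance from the ship measured along the plane.
    """
    foot = (c.x, c.y, 0.0)
    height = c.z
    ground_range = math.hypot(c.x, c.y)
    return foot, height, ground_range

# A hypothetical raider 1,200 km ahead, 300 km to port, 450 km "above":
foot, height, ground_range = reference_line(Contact(1200.0, -300.0, 450.0))
```

Drawing a line from the contact down to `foot` gives the viewer both height and range at a glance, and anchoring the plane to the Galactica herself makes every reading relative to the ship.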
Dradis Contact
Overall, the crew of the Galactica never seems inhibited by this limitation. The most likely reasons they are able to work around it include:
Extensive training
Effective communication between crew members
Experience operating with limited information
This relies heavily on the crew operating at peak efficiency during an entire combat encounter, which is a lot to ask of anyone. It would be better to improve the interface and lift the burden off a possibly sleep-deprived crewmember.
The Dradis itself displays information about individual contacts effectively. That information isn’t visible at the distances involved in most CIC activities, but would easily be visible on personal screens. Additionally, the entire CIC doesn’t need to know every piece of information about each contact.
In each of these cases, crew efficiency would be improved (and misunderstandings would be limited) by improving how the Dradis displays its contacts on screen.
The FTL Jump process on the Galactica has several safeguards, all appropriate for a ship of that size and an action of that danger (late in the series, we see that an inappropriate jump can cause major damage to nearby objects). Only senior officers can start the process, multiple teams all sign off on the calculations, and dedicated computers are used for potentially damaging computations.
Even the actual ‘jump’ requires a two-stage process with an extremely secure key-and-button combination. It is doubtful that Lt. Gaeta’s key could be used on any ship other than the Galactica.
The process is so effective, and the crew is so well trained at it, that even after two decades of never actually using the FTL system, the Galactica is able to make a pinpoint jump under extreme duress (the beginning of human extinction).
Difficult Confirmation
The one apparent failure in this system is the confirmation process after the FTL jump. Lt. Gaeta has to run all the way across the CIC and personally check a small screen displaying less-than-obvious information.
Of the many problems with the nav’s confirmation screen, three stand out:
It is a 2D representation of 3D space, without any clear indication of how the information has been compacted
There is no ‘local zero’ showing the system’s plane or the relative inclination of orbits
There are no labels on the data
Even the most basic orbital navigation system shows a bit more information: apogee, perigee, relative orbit, and a gimbal reading. Compare this chart from Kerbal Space Program:
The Galactica would need at least this much information to confirm its location effectively. For Lt. Gaeta, this isn’t a problem because of his extensive training and knowledge of the Galactica.
But the Galactica is a warship and would be expected to experience casualties during combat. Other navigation officers and crew may not be as experienced or have the same training as Lt. Gaeta. In a situation where he is incapacitated and it falls to a less experienced member of the crew, an effective visual display of location and vector is vital.
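Most of the quantities such a confirmation screen would show fall straight out of Newtonian orbital mechanics: one position-and-velocity fix determines the orbit’s periapsis and apoapsis. A minimal sketch, assuming an Earth-like gravitational parameter and an invented sample orbit (none of these numbers or names come from the show):

```python
import math

MU = 398_600.4418  # gravitational parameter of an Earth-like body, km^3/s^2

def cross(a, b):
    """Cross product of two 3-vectors given as tuples."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def norm(v):
    return math.sqrt(sum(c * c for c in v))

def apsides(r_vec, v_vec, mu=MU):
    """Periapsis and apoapsis radii (km) of a bound orbit, from one
    position (km) and velocity (km/s) fix."""
    r, v = norm(r_vec), norm(v_vec)
    energy = v * v / 2 - mu / r                     # specific orbital energy
    a = -mu / (2 * energy)                          # semi-major axis (bound orbit: energy < 0)
    h = norm(cross(r_vec, v_vec))                   # specific angular momentum
    e = math.sqrt(max(0.0, 1 - h * h / (mu * a)))   # eccentricity
    return a * (1 - e), a * (1 + e)

# A ship 7,000 km from the planet's center, moving tangentially at 7.8 km/s:
peri, apo = apsides((7000.0, 0.0, 0.0), (0.0, 7.8, 0.0))
```

A display built on this could label periapsis and apoapsis directly and flag a dangerous orbit automatically, rather than leaving the interpretation to whoever happens to be at the nav station.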
Simplicity isn’t always perfect
This is an example of where a bit more information in the right places can make an interface more legible and understandable. Some information here looks useless, but may be necessary for the Galactica’s navigation crew. With the extra information, this display could become useful for crew other than Lt. Gaeta.
Since Tony disconnected the power transmission lines, Pepper has been monitoring Stark Tower in its new, off-the-power-grid state. To do this she studies a volumetric dashboard display that floats above glowing shelves on a desktop.
Volumetric elements
The display features some volumetric elements, all rendered as wireframes in the familiar Pepper’s Ghost (I know, I know) visual style: translucent, edge-lit planes. A large component to her right shows Stark Tower, with red lines highlighting the power traveling from the large arc reactor in the basement through the core of the building.
The center of the screen has a similarly-rendered close-up of the arc reactor. A cutaway shows a pulsing ring of red-tinged energy flowing through its main torus.
This component makes a good deal of sense, showing her the physical thing she’s meant to be monitoring, not photographically but in a way that helps her quickly locate any problems in space. The torus cutaway is a little strange, though: if she’s meant to be monitoring it, she should monitor the whole thing, not a version with a quarter cut away.
Flat elements
The remaining elements in the display appear on a flat plane.