Perhaps the most unusual interface in the film is a game seen when Theo visits his cousin Nigel for a meal and to ask for a favor. Nigel’s son Alex sits at the table silent and distant, his attention on a strange game that its designer, Mark Coleran, tells me is called “Kubris,” a 3D hybrid of Tetris and Rubik’s Cube.
Alex operates the game by twitching and sliding his fingers in the air. With each twitch a small twang is heard. He suspends his hand a bit above the table to have room. His finger movements are tracked by thin black wires that extend from small plastic discs at his fingertips back to a device worn on his wrist. This device looks like a streamlined digital watch, but where the face of a clock would be is a set of multicolored LEDs arranged in rows. These LEDs flicker on and off in inscrutable patterns, while clearly showing some state of the game. There is an inset LED block that also displays an increasing score.
Once Johnny has installed his motion detector on the door, the brain upload can begin.
3. Building it
Johnny starts by opening his briefcase and removing various components, which he connects into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless-looking pieces.
It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one-piece computing devices with integrated functionality, and to keyboards, mice, cameras, printers, and headphones that connect wirelessly.
Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick – and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces.
The opening shot of Johnny Mnemonic is a brightly coloured 3D graphical environment. It looks like an abstract cityscape, with buildings arranged in a rectangular grid and various 3D icons or avatars flying around. Text identifies this as the Internet of 2021, now cyberspace.
Strictly speaking this shot is not an interface. It is a visualization from the point of view of a calendar wake up reminder, which flies through cyberspace, then down a cable, to appear on a wall-mounted screen in Johnny’s hotel suite. However, we will see later on that this is exactly the same graphical representation used by humans. As the very first scene of the film, it is important in establishing what the Internet looks like in this future world. It’s therefore worth discussing the “look” employed here, even though there isn’t any interaction.
Cyberspace is usually equated with 3D graphics and virtual reality in particular. Yet when you look into what is necessary to implement cyberspace, the graphics really aren’t that important.
MUDs and MOOs: ASCII Cyberspace
People have been building cyberspaces since the 1980s in the form of MUDs and MOOs. At first sight these look like old-style games such as Adventure or Zork. To explore a MUD/MOO, you log on remotely using a terminal program. Every command and response is pure text, so typing “go north” might result in “You are in a church.” The difference between MUD/MOOs and Zork is that these are dynamic multiuser virtual worlds, not single-player games. Other people share the world with you and move through it, adventuring, building, or just chatting. Everyone has an avatar and every place has an appearance, but expressed in text as if you were reading a book.
guest>>@go #1914
Castle entrance
A cold and dark gatehouse, with moss-covered crumbling walls. A passage gives entry to the forbidding depths of Castle Aargh. You hear a strange bubbling sound and an occasional chuckle.
Obvious exits:
  path to Castle Aargh (#1871)
  enter to Bridge (#1916)
Most impressive of all, these are virtual worlds with built-in editing capabilities. All the “graphics” are plain text, and all the interactions, rules, and behaviours are programmed in a scripting language. The command line interface allows the equivalent of Emacs or VI to run, so the world and everything in it can be modified in real time by the participants. You don’t even have to restart the program. Here a character creates a new location within a MOO, to the “south” of the existing Town Square:
laranzu>> @dig MyNewHome
laranzu>> @describe here as “A large and spacious cave full of computers”
laranzu>> @dig north to Town Square
The simplicity of the text interfaces leads people to think these are simple systems. They’re not. These cyberspaces have many of the legal complexities found in the real world. Can individuals be excluded from particular places? What can be done about abusive speech? How offensive can your public appearance be? Who is allowed to create new buildings, or modify existing ones? Is attacking an avatar a crime? Many 3D virtual reality system builders never progress that far, stopping when the graphics look good and the program rarely crashes. If you’re interested in cyberspace interface design, a long running textual cyberspace such as LambdaMOO or DragonMUD holds a wealth of experience about how to deal with all these messy human issues.
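To make concrete how little machinery a textual cyberspace needs, here is a toy sketch in Python of a MOO-like world: rooms are plain data, commands are plain text, and the world can be edited live without restarting anything. The verbs and room names here are illustrative, not real LambdaMOO syntax.

```python
# A minimal sketch of a MOO-style world, assuming a toy two-room map.
# Verbs and room names are made up for illustration.

rooms = {
    "town_square": {"desc": "A busy town square.", "exits": {"south": "cave"}},
    "cave": {"desc": "A large and spacious cave full of computers.",
             "exits": {"north": "town_square"}},
}

def execute(location, command):
    """Interpret one player command and return (new_location, response)."""
    verb, _, arg = command.partition(" ")
    if verb == "look":
        return location, rooms[location]["desc"]
    if verb == "go":
        dest = rooms[location]["exits"].get(arg)
        if dest:
            return dest, rooms[dest]["desc"]
        return location, "You can't go that way."
    if verb == "@dig":
        # Live world-editing: add a new room without restarting the server.
        rooms[arg] = {"desc": "An unfinished room.", "exits": {}}
        return location, f"Dug new room: {arg}"
    return location, "I don't understand that."

loc, msg = execute("town_square", "go south")
print(msg)  # A large and spacious cave full of computers.
```

Everything interesting in a real MUD/MOO — multiple users, persistence, scripted object behaviours — is layered on top of exactly this kind of text-in, text-out loop.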
So why all the graphics?
So it turns out MUDs and MOOs form a rich, sprawling, complex cyberspace in text. Why then, in 1995, did we expect cyberspace to require 3D graphics?
The 1980s saw two dimensional graphical user interfaces become well known with the Macintosh, and by the 1990s they were everywhere. The 1990s also saw high end 3D graphics systems becoming more common, the most prominent being from Silicon Graphics. It was clear that as prices came down personal computers would soon have similar capabilities.
At the time of Johnny Mnemonic, the world wide web had brought the Internet into everyday life. If web browsers with 2D GUIs were superior to the command line interfaces of telnet, FTP, and Gopher, surely a 3D cyberspace would be even better? Predictions of a 3D Internet were common in books such as Virtual Reality by Howard Rheingold and magazines such as Wired at the time. VRML, the Virtual Reality Markup/Modeling Language, was created in 1995 with the expectation that it would become the foundation for cyberspace, just as HTML had been the foundation of the world wide web.
Twenty years later, we know this didn’t happen. The solution to the unthinkable complexity of cyberspace was a return to the command line interface in the form of a Google search box.
Abstract or symbolic interfaces such as text command lines may look more intimidating or complicated than graphical systems. But if the graphical interface isn’t powerful enough to meet their needs, users will take the time to learn how the more complicated system works. And we’ll see later on that the cyberspace of Johnny Mnemonic is not purely graphical and does allow symbolic interaction.
Dradis is the primary system that the Galactica uses to detect friendly and enemy units beyond visual range. The console appears to have a range of at least one light second (less than the distance from Earth to the Moon), but less than one light minute (one-eighth the distance from Earth to the Sun).
How can we tell? We know that it’s less than one light minute because Galactica is shown orbiting a habitable planet around a sun-like star. Given our own solar system, we would have at least some indication of ships on the Dradis at that range and the combat happening there (which we hear over the radios). We don’t see those on the Dradis.
We know that it’s at least one light second because Galactica jumps into orbit (possibly geosynchronous) above a planet and is able to ‘clear’ the local space of that planet’s orbit with the Dradis.
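The distances behind that estimate are easy to sanity-check. A quick sketch, using real astronomical figures (rounded):

```python
# Back-of-the-envelope check of the light-travel distances above.

C = 299_792_458            # speed of light, m/s
light_second = C * 1.0     # ~3.0e8 m
light_minute = C * 60.0    # ~1.8e10 m

earth_moon = 3.844e8       # mean Earth-Moon distance, m
earth_sun = 1.496e11       # 1 AU, m

print(light_second < earth_moon)   # True: one light second is inside lunar orbit
print(earth_sun / light_minute)    # ~8.3: one light minute is roughly 1/8 AU
```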
The sensor readings are automatically interpreted into Friendly contacts, Enemy contacts, and missiles, then displayed on a 2d screen emulating a hemisphere. A second version of the display shows a flat 2d view of the same information.
Friendly contacts are displayed in green, while enemy units (Cylons) are displayed in red. The color of the surrounding interface changes from orange to red when the Galactica moves to Alert Stations.
The Dradis is displayed on four identical displays above the Command Table, and is viewable from any point in the CIC. ‘Viewable’ here does not mean ‘readable’. The small size of the type and icons shown on the screen makes them barely large enough to be read by senior crew at the main table, let alone officers in the second or third tier of seating (the perspective of which we see here).
It is possible that these are simply overview screens to support more specific screens at individual officer stations, but we never see any evidence of this.
Whatever the situation, the Dradis needs to be larger in order to be readable throughout the CIC and have more specific screens at officer stations focused on interpreting the Dradis.
As soon as a contact appears on the Dradis screen, someone (who appears to be the Intelligence Officer) in the CIC calls out the contact to reiterate the information and alert the rest of the CIC to the new contact. Vipers and Raptors are seen using a similar but less powerful version of the Galactica’s sensor suite and display. Civilian ships like Colonial One have an even less powerful or distinct radar system.
2d display of 3d information
The largest failing of the Dradis system is in its representation of the hemisphere. We never appear to see the other half of the sphere, and missing half the data is pretty serious. The Galactica should sit at the center of a full bubble of information, instead of picking an arbitrary ‘ground plane’ and showing everything in a half-sphere above it (cutting out a large amount of available information).
The Dradis also suffers from a lack of context: contacts are displayed in 3 dimensions inside the view, but only have 2 dimensions of reference on the flat screen in the CIC. For a reference on an effective 3d display on a 2d screen, see Homeworld’s (PC Game, THQ and Relic) Sensor Manager:
In addition to rotation of the Sensor Manager (allowing different angles of view depending on the user’s wishes), the Sensor Manager can display reference lines down to a ‘reference plane’ to show height above, and distance from, a known point. In Homeworld, this reference point is often the center of the selected group of units, but on the Dradis it would make sense for this reference point to be the Galactica herself.
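The geometry behind those reference lines is simple. A sketch, assuming the Galactica itself as the reference point and its local XY plane as the ‘reference plane’ (coordinates here are illustrative):

```python
# Homeworld-style reference lines: split a contact's 3D position into
# height above the reference plane and range along that plane.
import math

def reference_lines(contact, ship=(0.0, 0.0, 0.0)):
    """Return (planar_range, height) of a contact relative to the ship,
    using the ship's local XY plane as the reference plane."""
    dx = contact[0] - ship[0]
    dy = contact[1] - ship[1]
    dz = contact[2] - ship[2]          # height above/below the plane
    planar_range = math.hypot(dx, dy)  # distance along the plane
    return planar_range, dz

rng, height = reference_lines((300.0, 400.0, 120.0))
print(rng, height)  # 500.0 120.0
```

Drawing a vertical line of length `height` down to a tick at `planar_range` is all it takes to give a 2d screen an unambiguous reading of a 3d position.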
Overall, the crew of the Galactica never seems to be inhibited by this limitation. The most likely reasons they are able to work around this limitation include:
Effective communication between crew members
Experience operating with limited information
This relies heavily on the crew operating at peak efficiency during an entire combat encounter. That is a lot to ask from anyone. It would be better to improve the interface and lift the burden off of a possibly sleep-deprived crewmember.
The Dradis itself displays information effectively about the individual contacts it sees. This information isn’t legible at the distances involved in most CIC activities, but would easily be visible on personal screens. Additionally, the entire CIC doesn’t need to know every piece of information about each contact.
In any of these cases, crew efficiency would be improved (and misunderstandings would be limited) by improving how the Dradis displays its contacts on its screen.
The FTL Jump process on the Galactica has several safeguards, all appropriate for a ship of that size and an action of that danger (late in the series, we see that an inappropriate jump can cause major damage to nearby objects). Only senior officers can start the process, multiple teams all sign off on the calculations, and dedicated computers are used for potentially damaging computations.
Even the actual ‘jump’ requires a two stage process with an extremely secure key and button combination. It is doubtful that Lt. Gaeta’s key could be used on any other ship aside from the Galactica.
The process is so effective, and the crew is so well trained at it, that even after two decades of never actually using the FTL system, the Galactica is able to make a pinpoint jump under extreme duress (the beginning of human extinction).
The one apparent failure in this system is the confirmation process after the FTL jump. Lt. Gaeta has to run all the way across the CIC and personally check a small screen with less than obvious information.
Of the many problems with the nav’s confirmation screen, three stand out:
It is a 2d representation of 3d space, without any clear references to how information has been compacted
There is no ‘local zero’ showing the system’s plane or the relative inclination of orbits
There are no labels on the data
Even the most basic orbital navigation system has a bit more information about Apogee, Perigee, relative orbit, and a gimbal reading. Compare it to this chart from Kerbal Space Program:
The Galactica would need at least this much information to effectively confirm their location. For Lt. Gaeta, this isn’t a problem because of his extensive training and knowledge of the Galactica.
But the Galactica is a warship and would be expected to experience casualties during combat. Other navigation officers and crew may not be as experienced or have the same training as Lt. Gaeta. In a situation where he is incapacitated and it falls to a less experienced member of the crew, an effective visual display of location and vector is vital.
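Those basic readouts don’t require much computation. A hedged sketch of how apoapsis and periapsis fall out of a ship’s position and velocity, using the standard two-body formulas (the gravitational parameter and state vector below are made-up example values, not anything from the show):

```python
# Apoapsis/periapsis from a state vector via specific orbital energy
# and the eccentricity vector (standard two-body mechanics).
import math

def apsides(r_vec, v_vec, mu):
    """Return (periapsis, apoapsis) radii for a body with gravitational
    parameter mu, given position r_vec and velocity v_vec."""
    r = math.hypot(*r_vec)
    v = math.hypot(*v_vec)
    energy = v * v / 2 - mu / r                  # specific orbital energy
    a = -mu / (2 * energy)                       # semi-major axis
    rv = sum(ri * vi for ri, vi in zip(r_vec, v_vec))
    # Eccentricity vector: e = ((v^2 - mu/r) r - (r.v) v) / mu
    e_vec = [((v * v - mu / r) * ri - rv * vi) / mu
             for ri, vi in zip(r_vec, v_vec)]
    e = math.hypot(*e_vec)
    return a * (1 - e), a * (1 + e)

# Sanity check: a circular orbit has periapsis == apoapsis == radius.
mu_earth = 3.986e14                    # m^3/s^2, Earth-like body
r0 = 7.0e6                             # 7000 km orbital radius
v_circ = math.sqrt(mu_earth / r0)      # circular orbital speed
peri, apo = apsides((r0, 0, 0), (0, v_circ, 0), mu_earth)
print(round(peri), round(apo))
```

A display that already has the ship’s state vector can derive all of this; the hard part is presenting it legibly, which is exactly where the nav screen falls down.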
Simplicity isn’t always perfect
This is an example of where a bit more information in the right places can make an interface more legible and understandable. Some information here looks useless, but may be necessary for the Galactica’s navigation crew. With the extra information, this display could become useful for crew other than Lt. Gaeta.
Since Tony disconnected the power transmission lines, Pepper has been monitoring Stark Tower in its new, off-the-power-grid state. To do this she studies a volumetric dashboard display that floats above glowing shelves on a desktop.
The display features some volumetric elements, all rendered as wireframes in the familiar Pepper’s Ghost (I know, I know) visual style: translucent, edge-lit planes. A large component to her right shows Stark Tower, with red lines highlighting the power traveling from the large arc reactor in the basement through the core of the building.
The center of the screen has a similarly-rendered close up of the arc reactor. A cutaway shows a pulsing ring of red-tinged energy flowing through its main torus.
This component makes a good deal of sense, showing her the physical thing she’s meant to be monitoring, not in a photographic way but in a way that helps her quickly locate any problems in space. The torus cutaway is a little strange, since if she’s meant to be monitoring it, she should monitor the whole thing, not a version with a quarter cut away.
The Ford Explorer is an automated vehicle driven on an electrified track through a set route in the park. It has protective covers over its steering wheel, and a set of cameras throughout the car:
Twin cameras at the steering wheel looking out the windshield to give a remote chauffeur or computer system stereoscopic vision
A small camera on the front bumper looking down at the track right in front of the vehicle
Several cameras facing into the cab, giving park operators an opportunity to observe and interact with visitors. (See the subsequent SUV Surveillance post.)
Presumably, there are protective covers over the gas/brake pedals as well, but we never see that area of the interior. Evidence comes from the moment Dr. Grant and Dr. Sattler want to stop and look at the triceratops: they don’t even bother to reach for the brake pedal, but merely hop out of the SUV.
Jurassic Park’s weather prediction software sits on a dedicated computer. It pulls updates from some large government weather forecast (likely NOAA). The screen is split into three sections (clockwise from top left):
A 3D representation of the island and surrounding ocean with cloud layers shown
A plan view of the island showing cloud cover
Standard climate metrics along the bottom with data like wind direction (labeled Horizontal Direction), barometric pressure, etc.
We also see a section labeled “Sectors”, with “Island 1” currently selected (other options include “USA” and “Island 2”…which is suitably mysterious).
Using the software, they are able to pan the views to the area of ocean with an incoming tropical storm. The map does not show rainfall, wind direction, wind speed, or distance; but the control room seems to have another source of information for that. They discuss the projected path of the storm while looking at the map.
The park staff relies on data from the weather services of America and Costa Rica, but doesn’t trust their conclusions (Muldoon asks if this storm will swing out of the way at the last second despite projections, “like the last one”). Yet the team at Jurassic Park doesn’t have any information of its own on what’s actually happening with the storm.
Unlike local weather stations here in the U.S., or sites like NOAA weather maps, this interface lacks basic forecasting information like, say, precipitation amount, precipitation type, individual wind speeds inside the storm, direction, etc. Given the deadly, deadly risks inherent in the park, this seems like a significant oversight.
The software has spent a great deal of time rendering a realistic-ish cloud (which, we should note, looks foreshadowingly like a human skull), but neglects to give information that is taken for granted in common weather information systems.
When the park meteorologist isn’t on duty, or isn’t awake, or has his attention on the Utahraptor trying to smash its way into the control room, the software should provide some basic information to everyone on staff:
What does the weather forecast look like over the next few hours and days?
When the weather is likely to be severe, there’s more to convey, and it needs to get the attention of the park staff urgently:
What’s the prediction?
Which parts of the park will be hit hardest?
Which tours and staff are in the most dangerous areas?
How long will the storm be over the island?
If this information tied into mobile apps or Jurassic Park’s wider systems, it could provide alerts to individual staff, tourists, and tours about where they could take shelter.
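The routing logic for such alerts is straightforward. A minimal sketch, in which the severity scale, thresholds, and the forecast/tour structures are all hypothetical:

```python
# A toy sketch of storm-alert routing for park tours. All names,
# thresholds, and data shapes are made up for illustration.

def storm_alerts(forecast, tours):
    """Return (recipient, message) pairs for tours inside the storm's path."""
    alerts = []
    if forecast["severity"] < 2:      # hypothetical 0-5 severity scale
        return alerts                 # nothing worth interrupting anyone for
    for tour in tours:
        if tour["sector"] in forecast["sectors_hit"]:
            alerts.append((tour["name"],
                           f"Severe weather in {tour['sector']}: "
                           f"take shelter, storm expected for "
                           f"{forecast['duration_hours']}h."))
    return alerts

forecast = {"severity": 4, "sectors_hit": {"Triceratops Paddock"},
            "duration_hours": 3}
tours = [{"name": "Tour 1", "sector": "Triceratops Paddock"},
         {"name": "Tour 2", "sector": "Visitor Center"}]
for who, msg in storm_alerts(forecast, tours):
    print(who, msg)
```

The point is less the code than the design: the system, not a distracted meteorologist, decides who needs to know and tells them directly.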
Make the Information Usable
Reorienting information that is stuck on the bottom bar and shifting it into the 3d visual would lower the cognitive load required to understand everything that’s going on. Adding in visuals for other weather data (taken for granted in weather systems now) would bring it at least up to standard.
Finally, putting it up on the big monitor, either on demand or when it is urgent, would make it available to everyone in the control room instead of just whoever happened to be at the weather monitor. Modern systems would push the information out to staff and visitors on their mobile devices as well.
With those changes, everyone could see weather in real time to adjust their behavior appropriately (like, say, delaying the tour when there’s a tropical storm an hour south), the programmer could check the systems and paddocks that are going to get hit, and the inactive consoles could do whatever they needed to do.
The first computer interface we see in the film occurs at 3:55. It’s an interface for housing and monitoring the tesseract, a cube that is described in the film as “an energy source” that S.H.I.E.L.D. plans to use to “harness energy from space.” We join the cube after it has unexpectedly and erratically begun to throw off low levels of gamma radiation.
The harnessing interface consists of a housing, a dais at the end of a runway, and a monitoring screen.
Fury walks past the dais they erected just because.
The housing & dais
The harness consists of a large circular housing that holds the cube and exposes one face of it towards a long runway that ends in a dais. Diegetically this is meant to be read more as engineering than interface, but it does raise questions. For instance, if they didn’t already know it was going to teleport someone here, why was there a dais there at all, at that exact distance, with stairs leading up to it? How’s that harnessing energy? Wouldn’t you expect a battery at the far end? If they did expect a person as it seems they did, then the whole destroying swaths of New York City thing might have been avoided if the runway had ended instead in the Hulk-holding cage that we see later in the film. So…you know…a considerable flaw in their unknown-passenger teleportation landing strip design. Anyhoo, the housing is also notable for keeping part of the cube visible to users near it, and holding it at a particular orientation, which plays into the other component of the harness—the monitor.
After the security ‘bot brings Eve across the ship (with Wall-e in tow), he arrives at the gatekeeper to the bridge. The Gatekeeper has the job of entering information about ‘bots, or activating and deactivating systems (labeled with “1”s and “0”s) into a pedestal keyboard with two small manipulator arms. It’s mounted on a large, suspended shaft, and once it sees the security ‘bot and confirms his clearance, it lets the ‘bot and the pallet through by clicking another, specific button on the keyboard.
The Gatekeeper is large. Larger than most of the other robots we see on the Axiom. Its casing is a white shell around its internal hardware. This casing looks like it’s meant to protect or shield the internal components from light impacts or basic problems like dust. From the looks of the inner housing, the Gatekeeper should be able to move its ‘head’ up and down to point its eye in different directions, but while Wall-e and the security ‘bot are in the room, we only ever see it rotating around its suspension pole and using the glowing pinpoint in its red eye to track the objects it’s paying attention to.
When it lets the sled through, it sees Wall-e on the back of the sled, who waves to the Gatekeeper. In response, the Gatekeeper waves back with its jointed manipulator arm. After waving, the Gatekeeper looks at its arm. It looks surprised at the arm movement, as if it hadn’t considered the ability to use those actuators before. There is a pause that gives the distinct impression that the Gatekeeper is thinking hard about this new ability, then we see it waving the arm a couple more times to itself to confirm its new abilities.
The Gatekeeper seems to exist solely to enter information into that pedestal. From what we can see, it doesn’t move and likely (considering the rest of the ship) has been there since the Axiom’s construction. We don’t see any other actions from the pedestal keys, but considering that one of them opens a door temporarily, it’s possible that the other buttons have some other, more permanent functions like deactivating the door security completely, or allowing a non-authorized ‘bot (or even a human) into the space.
An unutilized sentience
The robot is a sentient being, with a tedious and repetitive job, who doesn’t even know he can wave his arm until Wall-e introduces the Gatekeeper to the concept. This fits with the other technology on board the Axiom, whose intelligence bears no correlation to its function. Thankfully for the robot, he (she?) doesn’t realize the lack of a larger world until that moment.
So what’s the pedestal for?
It still leaves open the question of what the pedestal controls actually do. If they’re all connected to security doors throughout the ship, then the Gatekeeper would have to be tied into the ship’s systems somehow to see who was entering or leaving each secure area.
The pedestal itself acts as a two-stage authentication system. The Gatekeeper has a powerful sentience, and must decide if the people or robots in front of it are allowed to enter the room or rooms it guards. Then, after that decision, it must make a physical action to unlock the door to enter the secure area. This implies a high level of security, which feels appropriate given that the elevator accesses the bridge of the Axiom.
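The two-stage pattern described here — a judgment call followed by a separate physical action — can be sketched in a few lines. The class and names below are hypothetical, purely to illustrate why the pedestal adds security rather than redundancy:

```python
# A sketch of two-stage access control: an authorization decision
# (stage 1) and a distinct physical action (stage 2), both required.

class Gatekeeper:
    def __init__(self, authorized):
        self.authorized = set(authorized)
        self._cleared = None

    def evaluate(self, visitor):
        """Stage 1: decide whether this visitor may pass."""
        self._cleared = visitor if visitor in self.authorized else None
        return self._cleared is not None

    def press_unlock(self, visitor):
        """Stage 2: the key press only works for the visitor just cleared."""
        if self._cleared == visitor:
            self._cleared = None          # one unlock per clearance
            return "door open"
        return "door locked"

gate = Gatekeeper({"security-bot"})
gate.evaluate("security-bot")
print(gate.press_unlock("security-bot"))  # door open
print(gate.press_unlock("wall-e"))        # door locked
```

Neither stage alone opens the door, which is the property that makes the arrangement feel appropriate for the bridge.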
Since we’ve seen the robots have different vision modes, and improvements based on their function, it’s likely that the Gatekeeper can see more into the pedestal interface than the audience can, possibly including which doors each key links to. If not, then as a computer it would have perfect recall on what each button was for. This does not afford a human presence stepping in to take control in case the Gatekeeper has issues (like the robots seen soon after this in the ‘medbay’). But, considering Buy-N-Large’s desire to leave humans out of the loop at each possible point, this seems like a reasonable design direction for the company to take if they wanted to continue that trend.
It’s possible that the pedestal was intended for a human security guard that was replaced after the first generation of spacefarers retired. Another possibility is that Buy-N-Large wanted an obvious sign of security to comfort passengers.
We learn after this scene that the security ‘bot is Auto’s ‘muscle’ and affords some protection. Given that the security ‘bot and others might be needed at random times, it feels like he would want a way to gain access to the bridge in an emergency. Something like an integrated biometric scanner on the door that could be manually activated (eye scanner, palm scanner, RFID tags, etc.), or even a physical key device on the door that only someone like the Captain or trusted security officers would be given. Though that assumes there is more than one entrance to the bridge.
This is a great showcase system for tours and commercials of an all-access luxury hotel and lifeboat. It looks impressive, and the Gatekeeper would be an effective way to make sure only people who are really supposed to get into the bridge are allowed past the barriers. But, Buy-N-Large seems to have gone too far in their quest for intelligent robots and has created something that could be easily replaced by a simpler, hard-wired security system.