Once Johnny has installed his motion detector on the door, the brain upload can begin.
3. Building it
Johnny starts by opening his briefcase and removing various components, which he connects together into the complete upload system. Some of the parts are disguised, and the whole sequence is similar to an assassin in a thriller film assembling a gun out of harmless-looking pieces.
It looks strange today to see a computer system with so many external devices connected by cables. We’ve become accustomed to one-piece computing devices with integrated functionality, and keyboards, mice, cameras, printers, and headphones that connect wirelessly.
Cables and other connections are not always considered as interfaces, but “all parts of a thing which enable its use” is the definition according to Chris. In the early to mid 1990s most computer users were well aware of the potential for confusion and frustration in such interfaces. A personal computer could have connections to a monitor, keyboard, mouse, modem, CD drive, and joystick – and every single device would use a different type of cable. USB, while not perfect, is one of the greatest ever improvements in user interfaces.
The opening shot of Johnny Mnemonic is a brightly coloured 3D graphical environment. It looks like an abstract cityscape, with buildings arranged in a rectangular grid and various 3D icons or avatars flying around. Text identifies this as the Internet of 2021, now cyberspace.
Strictly speaking this shot is not an interface. It is a visualization from the point of view of a calendar wake-up reminder, which flies through cyberspace, then down a cable, to appear on a wall-mounted screen in Johnny’s hotel suite. However, we will see later on that this is exactly the same graphical representation used by humans. As the very first scene of the film, it is important in establishing what the Internet looks like in this future world. It’s therefore worth discussing the “look” employed here, even though there isn’t any interaction.
Cyberspace is usually equated with 3D graphics and virtual reality in particular. Yet when you look into what is necessary to implement cyberspace, the graphics really aren’t that important.
MUDs and MOOs: ASCII Cyberspace
People have been building cyberspaces since the 1980s in the form of MUDs and MOOs. At first sight these look like old-style games such as Adventure or Zork. To explore a MUD/MOO, you log on remotely using a terminal program. Every command and response is pure text, so typing “go north” might result in “You are in a church.” The difference between MUD/MOOs and Zork is that these are dynamic multiuser virtual worlds, not single-player games. Other people share the world with you and move through it, adventuring, building, or just chatting. Everyone has an avatar and every place has an appearance, but both are expressed in text, as if you were reading a book.
guest>>@go #1914
Castle entrance
A cold and dark gatehouse, with moss-covered crumbling walls. A passage gives entry to the forbidding depths of Castle Aargh. You hear a strange bubbling sound and an occasional chuckle.
Obvious exits: path to Castle Aargh (#1871), enter to Bridge (#1916)
Most impressive of all, these are virtual worlds with built-in editing capabilities. All the “graphics” are plain text, and all the interactions, rules, and behaviours are programmed in a scripting language. The command line interface allows the equivalent of Emacs or vi to run, so the world and everything in it can be modified in real time by the participants. You don’t even have to restart the program. Here a character creates a new location within a MOO, to the “south” of the existing Town Square:
laranzu>>@dig MyNewHome
laranzu>> @describe here as “A large and spacious cave full of computers”
laranzu>> @dig north to Town Square
The simplicity of the text interfaces leads people to think these are simple systems. They’re not. These cyberspaces have many of the legal complexities found in the real world. Can individuals be excluded from particular places? What can be done about abusive speech? How offensive can your public appearance be? Who is allowed to create new buildings, or modify existing ones? Is attacking an avatar a crime? Many 3D virtual reality system builders never progress that far, stopping when the graphics look good and the program rarely crashes. If you’re interested in cyberspace interface design, a long-running textual cyberspace such as LambdaMOO or DragonMUD holds a wealth of experience about how to deal with all these messy human issues.
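To make the “world editable while running” point concrete, here is a toy sketch in Python. This is not real MOO code — the Room and World classes and the @dig-style method are invented for illustration — but it shows the core idea: rooms are just live objects that participants can create, describe, and link without restarting anything.

```python
# Toy sketch of a MOO-like editable world. Rooms are linked objects that
# can be created and rewired at runtime -- no restart, as in a real MOO.
class Room:
    def __init__(self, name, description=""):
        self.name = name
        self.description = description
        self.exits = {}  # direction -> Room

class World:
    def __init__(self):
        self.rooms = {"Town Square": Room("Town Square", "A busy plaza.")}

    def dig(self, here, direction, name):
        """Loosely like MOO's @dig: create a room and link it on the fly."""
        new_room = self.rooms.setdefault(name, Room(name))
        here.exits[direction] = new_room
        return new_room

# Re-creating the laranzu transcript above, step by step:
world = World()
square = world.rooms["Town Square"]
cave = world.dig(square, "south", "MyNewHome")            # @dig MyNewHome
cave.description = "A large and spacious cave full of computers"  # @describe
cave.exits["north"] = square                              # @dig north to Town Square
```

The interesting property is that the “map” is nothing but mutable state plus a command vocabulary — which is why a pure text interface was enough to build whole cities.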
So why all the graphics?
So it turns out MUDs and MOOs are rich, sprawling, complex cyberspaces, all in text. Why then, in 1995, did we expect cyberspace to require 3D graphics anyway?
The 1980s saw two-dimensional graphical user interfaces become well known with the Macintosh, and by the 1990s they were everywhere. The 1990s also saw high-end 3D graphics systems becoming more common, the most prominent being those from Silicon Graphics. It was clear that as prices came down, personal computers would soon have similar capabilities.
At the time of Johnny Mnemonic, the world wide web had brought the Internet into everyday life. If web browsers with 2D GUIs were superior to the command line interfaces of telnet, FTP, and Gopher, surely a 3D cyberspace would be even better? Predictions of a 3D Internet were common in books such as Virtual Reality by Howard Rheingold and magazines such as Wired at the time. VRML, the Virtual Reality Markup/Modeling Language, was created in 1995 with the expectation that it would become the foundation for cyberspace, just as HTML had been the foundation of the world wide web.
Twenty years later, we know this didn’t happen. The solution to the unthinkable complexity of cyberspace was a return to the command line interface in the form of a Google search box.
Abstract or symbolic interfaces such as text command lines may look more intimidating or complicated than graphical systems. But if the graphical interface isn’t powerful enough to meet their needs, users will take the time to learn how the more complicated system works. And we’ll see later on that the cyberspace of Johnny Mnemonic is not purely graphical and does allow symbolic interaction.
There is one display on the bike to discuss, some audio features, and a whole lot of things missing.
The bike display is a small screen near the front of the handlebars that displays a limited set of information to Jack as he’s riding. It is seen being used as a radar system. The display is circular, with main content in the middle, a turquoise sweep, and a turquoise ring just inside the bezel. We never see Jack touch the screen, but we do see him work a small, unlabeled knob at the bottom left of the bike’s plates. It is not obvious what this knob does, but Jack does fiddle with it.
When Ibanez and Barcalow enter the atmosphere in the escape pod, we see a brief, shaky glimpse of the COURSE OPTION ANALYSIS interface. In the screen grab below, you can see it has a large, yellow, all-caps label at the top. The middle shows the TERRAIN PROFILE. This consists of a real-time topographic map, rendered as a grid of screen-green dots that produce a shaded relief map.
On the right is a column of text that includes:
The title, i.e., TERRAIN PROFILE
The location data: Planet P, Scylla Charybdis (which I don’t think is mentioned in the film, but a fun detail. Is this the star system?)
Coordinates in 3D: XCOORD, YCOORD, and ELEVATION. (Sadly these don’t appear to change, despite the implied precision of 5 decimal places)
Three unknown variables: NOMINAL, R DIST, HAZARD Q (these also don’t change)
The lowest part of the block reads SITE ASSESSMENT, at 74.28% (which, does it need to be said at this point, also does not change).
Two inscrutable green blobs extend out past the left and bottom white line that borders this box. (Seriously what the glob are these meant to be?)
At the bottom are SCAN M and PLACE wrapped in the same purple “NV” wrappers seen throughout the Federation spaceship interfaces. Below that is an array of inscrutable numbers in white.
Since that animated gif is a little crazy to stare at, have this serene, still screen cap to reference for the remainder of the article.
Three things to note in the analysis.
1. Yes, fuigetry
I’ll declare everything on the bottom to be filler unless someone out there can pull some apologetics to make sense of it. But even if an array of numbers was ever meant to be helpful, an emergency landing sequence does not appear to be the time. If it needs to be said, emergency interfaces should include only the information needed to manage the crisis.
2. The visual style of the topography
I have before blasted the floating pollen displays of Prometheus for not describing the topography well, but the escape pod display works while using similar pointillist tactics. Why does this work when the floating pollen does not? First, note that the points here are in a grid. This makes the relationship of adjacent points easy to understand. The randomness of the Promethean displays confounds this. Second, note the angle of the “light” in the scene, which appears to come from the horizon directly ahead of the ship. This creates a strong shaded relief effect, a tried and true method of conveying the shape of a terrain.
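The grid-plus-directional-light effect is simple enough to sketch. The following Python is a hedged illustration (the heightmap, the light direction, and the shading formula are all invented, not anything from the film): each grid point’s brightness comes from how its local slope faces the light, which is exactly the shaded relief trick.

```python
# Illustrative shaded relief on a regular grid heightmap.
# Points whose slope faces the light get brighter; slopes facing away, darker.
def shaded_relief(height, light=(0.0, -1.0)):
    """height: 2D list of elevations. light: (x, y) direction of the light.
    Returns a 2D list of brightness values in [0, 1]."""
    rows, cols = len(height), len(height[0])
    lx, ly = light
    out = []
    for y in range(rows):
        row = []
        for x in range(cols):
            # finite-difference slope at this grid point
            dx = height[y][min(x + 1, cols - 1)] - height[y][max(x - 1, 0)]
            dy = height[min(y + 1, rows - 1)][x] - height[max(y - 1, 0)][x]
            # dot the slope with the light direction; clamp to [0, 1]
            row.append(max(0.0, min(1.0, 0.5 + 0.5 * (dx * lx + dy * ly))))
        out.append(row)
    return out
```

Because the samples sit on a regular grid, neighboring brightness values relate to neighboring slopes, and the eye reads the result as terrain — the coherence the random Promethean pollen throws away.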
3. How does this interface even help?
Let’s get this out of the way: What’s Ibanez’ goal here? To land the pod safely. Agreed? Agreed.
Certainly the terrain view is helpful to understand the terrain in the flight path, especially in low visibility. But similar to the prior interface in this pod, there is no signal to indicate how the ship’s position and path relate to it. Are these hills kilometers below (not a problem) or meters (take some real care there, Ibanez)? This interface should have some indication of the pod. (Show me me.)
Additionally, if any of the peaks pose threats, she can avoid them tactically, but adjusting long before they’re a problem will probably help more than veering once she’s right upon them. Best is to show the optimal path, and highlight any threats that would explain the path. Doing so in color (presuming pilots who can see it) would make the information instantly recognizable.
Finally the big label quantifies a “site assessment,” which seems to relay some important information about the landing location. Presumably pilots know what this number represents (process indicator? structural integrity? deviation from an ideal landing strip? danger from bugs?) but putting it here does not help her. So what? If this is a warning, why doesn’t it look like one? Or is there another landing site that she can get to with a better assessment? Why isn’t it helping her find that by default? If this is the best site, why bother her with the number at all? Or the label at all? She can’t do anything with this information, and it takes up a majority of the screen. Better is just to get that noise off the screen along with all the fuigetry. Replace it with a marker for where the ideal landing site is, its distance, and update it live if her path makes that original site no longer viable.
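That suggested replacement is easy to sketch. In the hypothetical Python below (the hazard grid, scoring, and tie-breaking are all invented for illustration), the system scores each terrain cell, marks the safest one, and reports its distance from the pod — which the display could re-run live as her path changes.

```python
import math

# Hypothetical "ideal landing site" picker: lower hazard wins,
# with distance from the pod as the tie-breaker.
def best_landing_site(hazard, pod_xy):
    """hazard: 2D list of per-cell hazard scores (lower = safer).
    pod_xy: (x, y) position of the pod on the same grid.
    Returns ((x, y) of the safest cell, distance to it)."""
    best = None
    for y, row in enumerate(hazard):
        for x, h in enumerate(row):
            d = math.dist(pod_xy, (x, y))
            key = (h, d)  # prefer lower hazard, then shorter distance
            if best is None or key < best[0]:
                best = (key, (x, y), d)
    return best[1], best[2]
```

A marker at the returned cell plus its distance is the whole UI; if the assessment of that cell degrades mid-flight, the next call simply returns a new site.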
Of course it must be said that this would work better as a HUD which would avoid splitting her attention from the viewport, but HUDs or augmented reality aren’t really a thing in the diegesis.
The next scene shows them crashing through the side of a mountain, so despite this more helpful design, better for the scene might be to design a warning mode that reads SAFE SITE: NOT FOUND. SEARCHING… and let that blink manically while real-time, failing site assessments blink all over the terrain map. Then the next scene makes much more sense as they skip off a hill and into a mountain.
To travel to Jupiter, navigator Zander must engage the Star Drive, a faster-than-light travel mechanism. Sadly, we only see the output screens and not his input mechanism.
Captain Deladier tells Ibanez, "Steady as she goes, Number 2. Prepare for warp."
She dutifully replies, "Yes, ma’am."
Deladier turns to Barcalow and tells him, "Number 1, design for Jupiter orbit."
In response, he turns to his interface. We hear some soft bleeping as he does something off screen, and then we see his display. It’s a plan view of the Solar system with orbits of the planets described with blue circles. A slow-blink yellow legend at the top reads DESIGNATING INTRASYSTEM ORBITAL, with a purple highlight ring around Earth. As he accesses "STARNAV" (below) the display zooms slowly in to frame just Jupiter and Earth.
As the zoom starts, a small box in the lower right-hand corner displays a still image of Mars with a label LOCAL PRESET. In the lower left-hand corner text reads STARNAV-0031 / ATLAS, MARS. After a moment these disappear, replaced with STARNAV-3490 / ATLAS, NEPTUNE, STARNAV-149.58 / ATLAS URANUS, STARNAV-498.48 / ATLAS, SATURN, and finally STARNAV-4910.43 / ATLAS JUPITER. The Jupiter information blinks furiously for a bit confirming a selection just as the zoom completes, and DESIGNATING INTRASYSTEM ORBIT is replaced with the simpler legend COURSE. A yellow/orange ring focuses in on Jupiter as part of the confirmation.
Some things that may be obvious, but ought to be said:
How about "Destination" instead of "Local preset"? The latter is an implementation model. The former matches the navigator’s goals.
Serial options are a waste here. Why force him to move through each one, read it to see if that’s the right one, and then move on? Wouldn’t an eight-part selection menu be much, much faster?
The serial presentation is made worse in that the list is in some arbitrary order. It’s not alphabetical: MNUSJ? It’s not distance-order either. He starts at 4, then jumps to 8, 7, and 6 before reaching 5, which is Jupiter. Better for most default navigation purposes would be distance order. Sure, that would have meant only one stop between Earth and Jupiter. If you really needed more stops to fill the time, start at Mercury.
What are those numbers after "STARNAV-"? It’s not planet size, since Uranus and Neptune should be similar, as should Saturn and Jupiter. And it’s not distance, since Jupiter has the largest number but is not the farthest out. Of course it could be some arbitrary file number, but it’s really unclear why the navigator would need to know this when using the screen. If a number had to be there, perhaps a ranking like Sol-V. Best would be to get rid of any information that didn’t help him with the microinteraction.
How about showing the course when the system has determined the course?
NUI would be better. When he looks at that first screen, he should be able to touch Jupiter or its orbit ring.
Agentive would be best. For instance, if the system monitors the conversation on the bridge, when it heard "design for Jupiter," it could prepare that course, and let the navigator confirm it.
Regular readers of my writing know that agentive tech is a favorite of mine, but in this case there is some clue that this is actually what happened. Note that the zoom to frame Earth and Jupiter happens at the same time as he’s selecting Jupiter. How did it know ahead of time that he wanted Jupiter? He hadn’t selected it yet. How did it know to go and frame these two planets? Should he select first and this zoom happen afterward? Did it actually listen to Deladier and start heading there anyway?
It would be prescient if this throwaway interface was some secret agentive thing, but sadly, given that the rest of the interfaces in the film are ofttimes goofy powered controls, it’s quite likely that the cause and effect were mashed together to save time.
Though I can’t quite make sense of them (and they don’t change in the sequence), for the sake of completeness, I should list the tabs that fill the top and bottom of the screen, in case their meaning becomes clear later. Along the top they have green tab strokes, and read from left to right POS, ROLL, LINE, NOR, PIVOT, LAY. Tabs at the bottom have orange and purple strokes and read SCAN M, PLACE, ANALYZE, PREF, DIAG-1 on the first row. The second row reads SERIAL [fitting -Ed.], CHART, DECODE, OVER-M, and DIAG-2.
In biology class, the (unnamed) professor points her walking stick (she’s blind) at a volumetric projector. The tip flashes for a second, and a volumetric display comes to life. It illustrates for the class what one of the bugs looks like. The projection device is a cylinder with a large lens atop a rolling base. A large black plug connects it to the wall.
The display of the arachnid appears floating in midair, a highly saturated screen-green wireframe that spins. It has very slight projection rays at the cylinder and a "waver" of a scan line that slowly rises up the display. When it initially illuminates, the channels are offset and only unify after a second.
The top and bottom of the projection are ringed with tick lines, and several tick lines run vertically along the height of the bug for scale. A large, lavender label at the bottom identifies this as an ARACHNID WARRIOR CLASS. There is another lavender key too small for us to read. The arachnid in the display is still, though the display slowly rotates around its y-axis, clockwise from above. The instructor uses this as a backdrop for discussing arachnid evolution and "virtues."
After the display continues for 14 seconds, it shuts down automatically.
It’s nice that it can be activated with her walking stick, an item we can presume isn’t common, since she’s the only apparently blind character in the movie. It’s essentially gestural, though what a blind user needs with a flash for feedback is questionable. Maybe that signal is somehow for the students? What happens for sighted teachers? Do they need a walking stick? Or would a hand do? What’s the point of the flash then?
That it ends automatically seems pointlessly limited. Why wouldn’t it continue to spin until it’s dismissed? Maybe the way she activated it indicated it should only play for a short while, but it didn’t seem like that precise a gesture.
Of course it’s only one example of interaction, but there are so many other questions to answer. Are there different models that can be displayed? How would she select a different one? How would she zoom in and out? Can it display animations? How would she control playback? There are quite a lot of unaddressed details for an imaginative designer to ponder.
The display itself is more questionable.
Scale is tough to tell on it. How big is that thing? Students would have seen video of it for years, so maybe it’s not such an issue. But a human for scale in the display would have been more immediately recognizable. Or better yet, no scale: Show the thing at 1:1 in the space so its scale is immediately apparent to all the students. And more appropriately, terrifying.
And why the green wireframe? The bugs don’t look like that. If it was showing some important detail, like carapace density, maybe, but this looks pretty even. How about some realistic color instead? Do they think it would scare kids? (More than the “gee-whiz!” girl already is?)
And lastly there’s the title. Yes, having it rotate accommodates viewers in 360 degrees, but it only reads right for half the time. Copy it, flip it 180° on the y-axis, and stack it, and you’ve got the most important textual information readable at most any time from the display.
Better of course is more personal interaction, individual displays or augmented reality where a student can turn it to examine the arachnid themselves, control the zoom, or follow up on more information. (Want to know more?) But the school budget in the world of Starship Troopers was undoubtedly stripped to increase military budget (what a crappy world that would be amirite?), and this single mass display might be more cost effective.
Later in the scene General Staedert orders a “thermonucleatic imaging.” The planet swallows it up. Then Staedert orders an “upfront loading of a 120-ZR missile” and in response to the order, the planet takes a preparatory defensive stance, armoring up like a pillbug. The scanner screens reflect this with a monitoring display.
In contrast to the prior screen for the Gravity (?) Scan, these screens make some sense. They show:
A moving pattern on the surface of a sphere slowing down
Clear Big Label indications when those variables hit an important threshold, which in this case is 0
A summary assessment, “ZERO SURFACE ACTIVITY”
A key on the left identifying what the colors and patterns mean
Some sciency scatter plots on the right
The majority of these would directly help someone monitoring the planet for its key variables.
Though these are useful, it would be even more useful if the system would help track these variables not just when they hit a threshold, but how they are trending. Waveforms like the type used in medical monitoring of the “MOVEMENT LOCK,” “DYNAMIC FLOW,” and “DATA S C A T” might help the operator see a bit into the future rather than respond after the fact.
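One way to “see a bit into the future” with those waveforms can be sketched in Python. Everything here is invented for illustration — the sample data, the two-point trend, and the threshold — but it shows the idea: fit the recent trend of a monitored variable and estimate when it will cross its threshold, instead of waiting for the Big Label to announce that it already has.

```python
# Illustrative trend-based monitoring: extrapolate the last two samples
# of a variable to estimate when it will cross a threshold.
def predict_crossing(samples, threshold=0.0):
    """samples: list of (time, value) pairs, in time order.
    Returns the estimated crossing time, or None if the trend
    is flat or moving away from the threshold."""
    (t0, v0), (t1, v1) = samples[-2], samples[-1]  # simple two-point trend
    slope = (v1 - v0) / (t1 - t0)
    if slope == 0 or (threshold - v1) / slope < 0:
        return None  # flat, or heading away from the threshold
    return t1 + (threshold - v1) / slope
```

With something like this behind the display, “ZERO SURFACE ACTIVITY” could be preceded by a countdown to zero surface activity, which is far more useful to an operator deciding whether to fire.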