The secret of the tera-keyboard

[Image: GitS-Hands-01]

Many characters in Ghost in the Shell have a particular cybernetic augmentation that lets them use specially-designed keyboards for input.

Triple-hands

To control this input device, the user’s hands are replaced with cybernetic ones. Normally these look and behave like ordinary human hands. But when needed, each finger splits into three separate mini-fingers, which can move independently. These 30 spidery fingerlets triple the number of digits at play, dancing across the keyboard at a blinding 24 positions per second.

[Image: GitS-Hands-02]

The tera-keyboard

The keyboards for which these hands were built have eight rows. The five rows nearest the user bear single symbols. (QWERTY English?) The three rows farthest from the user have keys labeled with individual words. Six other keys at the top right are unlabeled. Each key glows cyan when pressed and sits flush with the board itself; in this sense it works more like a touch panel than a keyboard. The board has around 100 keys in total.

[Image: GitS-Hands-03]

What’s nifty about the keyboard itself is not the number of keys; modern keyboards have about that many. What’s nifty is that these keyboards are massively chorded, with screen captures from the film showing nine keys being pressed at once.

[Image: GitS-Hands-04]

Let’s compare. (And here I owe a great mathematical debt of thanks to Nate Clinton for his mastery of combinatorics.) The keyboard I’m typing this blog post on has 104 keys, and can handle five keys being pressed at once, i.e., a base key like “S” plus up to four modifier keys: shift, control, option, and command. Do the math—100 non-modifier keys times the 2^4 possible combinations of modifiers—and this allows for 1,600 different keypresses. That’s quite a large range of momentary inputs.

But on the tera-keyboard you’re able to press nine keys at once, and more importantly, it looks like any key can be chorded with any other key. If we’re conservative in the interpretation and presume that exactly nine keys must be pressed at once, leaving 21 fingerlets free to move into position for the next bit of input, that still adds up to 2,747,472,247,520 possible chords (≈2.7 trillion, i.e., 104-choose-9, using the same 104-key count for parity). That’s about nine orders of magnitude more than our measly 1,600. At 24 keypresses per second, that’s on the order of 6.6 × 10^13 distinguishable inputs per second.
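Here’s a minimal sketch of that arithmetic in Python. The 104-key count (matching my own keyboard, for parity) and the exactly-nine-keys chording rule are the assumptions above, not anything the film specifies.

```python
from math import comb, log2

# Modern keyboard: ~100 non-modifier keys, each combinable with any
# subset of the four modifiers (shift, control, option, command).
modern = 100 * 2**4          # 1,600 distinct momentary inputs

# Tera-keyboard: any 9 of 104 keys pressed simultaneously
# (assumption: every key chords freely with every other).
tera = comb(104, 9)          # 2,747,472,247,520 possible chords

print(tera / modern)         # ≈ 1.7e9 — about nine orders of magnitude
print(tera * 24)             # ≈ 6.6e13 distinguishable inputs per second
print(log2(tera))            # ≈ 41.3 bits of information per chord
```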

[Image: GitS-Hands-05]

So, OK, yes: fast. But that only raises the question:

What exactly is being input?

It’s certainly more than just characters. Unicode’s roughly 110,000 characters are a fraction of a fraction of this amount of data, and they cover most of the world’s scripts.

Is it words? Steven Pinker, in his book The Language Instinct, cites sources estimating the number of words in an educated person’s vocabulary at around 60,000. This excludes proper names, numbers, foreign words, scientific terms, and acronyms, so it’s pretty conservative. Even if we double it, we’re still around the number of characters in Unicode. So even if the keyboard had one keypress for every word the user could possibly know and be thinking at any particular moment, the typist would be using only a fragment of its capacity.

[Image: typing]

The only thing that nears this level of data on a human scale is the human brain. With a common estimate of 100 billion neurons, the keyboard could be expressing the state of its user’s brain, 24 times a second, distinguishing between 10 different states of each neuron.

This also bypasses one of the concerns with introducing an input mechanism like this that requires active manipulation: the human brain doesn’t have the mechanisms to manage 30 digits and 9-key chording at this rate. Getting it to where it could manage this kind of task would require fairly massive rewiring of the user’s brain. (And if you could do that, why bother with the computer?)

But if it’s a passive device, simply taking “pictures” of the brain and sharing those pictures with the computer, it doesn’t require that the human be reengineered, just re-equipped. It requires a very smart computer system able to cope with and respond to that kind of input, but we see that exact kind of artificial intelligence elsewhere in the film.

The “secret”

Because of the form factor of hands and keyboard, it looks like a manual input device. But looking at the data throughput, the evidence suggests that it’s actually a brain interface, meant to keep the computer up to date with whatever the user is thinking at that exact moment and responding appropriately. For all the futurism seen in this film, this is perhaps the most futuristic, and the most surprising.

[Image: GitS-Hands-06]

Section No9’s crappy security

[Image: GitS-Sec9_security-01]

The heavily-mulleted Togusa is heading to a company car when he sees two suspicious cars in the parking basement. After sizing them up for a moment, he gets into his car and without doing anything else, says,

"Security, whose official vehicles are parked in the basement garage?"

It seems the cabin of the car is equipped to continuously monitor for sound, and either an agent from security is always waiting and listening at the other end, or addressing a particular department by name causes a voice recognition system to instantly route him to an operator in that department, who is able to respond immediately:

"They belong to Chief Nakamura of the treaties bureau and a Dr. Willis."

"Give me the video record of their entering the building."

In response, a panel automatically flips out of the dashboard to reveal a monitor, where he can watch the security footage. He watches it and says,

"Replay, infrared view"

After watching the replay, he says,

"Send me the pressure sensor records for basement garage spaces B-7 and 8."

The screen then does several things at once. It shows a login screen, for which his username is already supplied. He mentally supplies his password. Next a menu appears on a green background with five options: NET-WORK [sic], OPTICAL, PRESSURE, THERMO, and SOUND. "PRESSURE" highlights twice with two beeps. Then after a screen-green 3D rendering of Section 9 headquarters builds, the camera zooms around the building and through floorplans to the parking lot to focus on the spaces, labeled appropriately. Togusa watches as pea green bars on radial dials bounce clockwise, twice, with a few seconds between.

The login

Sci-fi logins often fail to include basic multifactor authentication, and at first it appears that this screen has only two parts: a username and a password. But given that Togusa connects to the system first vocally and then mentally, it’s likely that one of these other channels supplies a third factor. It also seems odd to have him supply a set of characters as the mental input. Requiring Togusa to think a certain concept might make more sense, like a mental captcha.
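As a thought experiment, here’s a minimal sketch of what that three-channel check might look like. Every name here is hypothetical; the film shows only the username/password screen, and the voice and concept factors are the speculation above.

```python
from dataclasses import dataclass

# Hypothetical three-factor login. None of these names come from
# the film; this just makes the speculation above concrete.
@dataclass
class LoginAttempt:
    voiceprint_match: bool  # did the spoken request match the agent's voiceprint?
    mental_password: str    # supplied by thought rather than keystrokes
    captcha_concept: str    # free-form concept thought in response to a prompt

def authenticate(attempt: LoginAttempt, password: str, concept: str) -> bool:
    """Grant access only when all three channels agree."""
    return (attempt.voiceprint_match
            and attempt.mental_password == password
            and attempt.captcha_concept == concept)

print(authenticate(LoginAttempt(True, "hound-6", "first stakeout"),
                   password="hound-6", concept="first stakeout"))  # True
```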

The zoom

Given that seconds can make a life-or-death difference and that the stakes at Section 9 are so high, the time the system spends zooming a camera around the building all the way to the locations is a waste. It should be faster. The zoom does provide context for the information, but that context doesn’t have to be distributed in time. Remove the meaningless, unlabeled dial in the lower right to gain real estate, and replace it with a small version of the map that highlights the area of detail. Since Togusa requested this information, the system should jump there immediately and let him zoom out for more context only if he wants it, or if the system wants him to see suspect information.

The radial graphs

The radial graphs imply some maximum to the data, and that Nakamura’s contingent hits some 75% of it. What happens if the pressure exceeds 37 ticks? Does the floor break? (If so, it should have set off structural warning alarms at the gate, independent of the security question.) But presumably Section 9 is made of stronger stuff than this, so a different style of diagram is called for. Perhaps remove the dial entirely and just leave the parking-spot labels and the weight. Admittedly, the radial dial is unusual and might be there for consistency with other, unseen parts of the system.

Moreover, Togusa is interested in several things: how the data has changed over time, when it surpassed an expected maximum, and by how much. This diagram addresses only one of them, and requires Togusa to notice and remember the rest himself. A better diagram would trace the pressure reading across time, highlighting the moments when it passed a threshold. (This parallels the issues of medical monitoring highlighted in the book; see Chapter 12, Medicine.)
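To illustrate, a minimal sketch of that time-series view, with invented numbers (the film never shows actual pressure values):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented data: pressure in space B-7 over ten minutes. The visitors
# arrive partway through and weigh far more than two humans should.
t = np.arange(0, 600, 5)                        # seconds
pressure = 20 + 5 * np.random.rand(len(t))      # ambient noise, kg
pressure[80:] += 400                            # the "two" visitors arrive

THRESHOLD = 250  # assumed expected maximum for two adult visitors

fig, ax = plt.subplots()
ax.plot(t, pressure, color="green", label="B-7 pressure")
ax.axhline(THRESHOLD, linestyle="--", color="gray", label="expected max")
ax.fill_between(t, THRESHOLD, pressure, where=pressure > THRESHOLD,
                color="red", alpha=0.3, label="over threshold")
ax.set_xlabel("time (s)")
ax.set_ylabel("pressure (kg)")
ax.legend()
plt.show()
```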

[Image: SECURITY_redo]

Even better would be to show this data over time alongside or overlaid with any of the other feeds, like the video feed, so that Togusa doesn’t have to make correlations between different feeds in his head. (I’d have added it to the comp, but didn’t have source video from the movie.)

The ultimately crappy Section No9 security system

Aside from all these details of the interface and interaction design, I have to marvel at the broader failings of the system. This is meant to be the same bleeding-edge bureau that creates cyborgs and transfers consciousnesses between them? If the security system is recording all of this information, why is it not being analyzed continuously and automatically? We can presume that object recognition is common in this world from a later scene in which a spider tank is able to track Kusanagi. So as the security system was humming along, recording everything, it should also have been analyzing that data, noting the discrepancies among the number of people it counted in the video feeds, the number of people it counted passing through the door, and the unusual weight of these "two" people. It should have sent a warning to security at the gate of the garage, not relied on the happenstance of Togusa’s hunch and good timing.
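For illustration, the core of that cross-check is nearly trivial to express. The 70 kg average and the tolerance are my assumptions; the point is that the sensors Section 9 already has would suffice:

```python
AVG_PERSON_KG = 70     # assumed average adult mass
TOLERANCE_KG = 60      # generous allowance for gear and cyborg variance

def visitor_alerts(video_count: int, door_count: int, added_weight_kg: float) -> list[str]:
    """Flag disagreements among three independent sensor feeds."""
    alerts = []
    if video_count != door_count:
        alerts.append(f"count mismatch: video={video_count}, door={door_count}")
    expected = door_count * AVG_PERSON_KG
    if abs(added_weight_kg - expected) > TOLERANCE_KG:
        alerts.append(f"weight anomaly: {added_weight_kg} kg for {door_count} visitors")
    return alerts

# The scene's scenario: two visitors logged at the door, but the
# pressure sensors read far too heavy for two ordinary humans.
print(visitor_alerts(video_count=2, door_count=2, added_weight_kg=420.0))
```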

This points to a larger problem that Hollywood has with technology being part of its stories. It needs heroes to be smart and heroic, and having them simply respond to warnings passed along by a smart system can seem pointedly unheroic. But as technology gets smarter and more agentive, these kinds of discrepancies are going to break believability and get embarrassing.

Ghost-hacking by public terminal

[Image: GitS-phonecall-01]

The garbage collector who is inadvertently working for the Ghost Hacker takes a break during his work to access the network by public terminal. The terminal is a small device, about a third of a meter across, mounted on a pole about a meter high and surrounded by a translucent casing that protects it from the elements and keeps the screen private. Parts are painted red to make it identifiable in the visual chaos of the alleyway.

[Image: GitS-phonecall-02]

After pressing a series of buttons and hearing the corresponding DTMF tones (Touch-Tones), he inserts a card into a horizontal slot labeled “DATA” in illuminated green letters. The card is translucent, with printed circuitry and a few buttons. The motorized card reader pulls the card in, then slides it horizontally along a wide slot while an illuminated green label flashes that it is INSPECTING the card. When the card is halfway along this horizontal track, a label on the left illuminates: COMPRESS.

On a multilayer, high-resolution LCD screen above, graphics announce that the terminal is trying to CONNECT and then providing ACCESS, running a section of the “cracking software” that the garbage collector wishes to run. When ACCESS is done, he removes the card and gets back to work.

[Image: GitS-phonecall-06]

From a certain perspective, there’s nothing wrong with this interaction. He’s able to enter some anonymous information up front and then process the instructions on the card. It’s pretty ergonomic for a public device. It provides him prompts and feedback about process and status. He manages its affordances, and though the language is cryptic to us, he seems to have no problem.

Where the terminal fails is that it gives him no idea that it’s doing something more than he realizes, and that something more is quite a bit more illegal than he’s willing to risk. Had it given him some visualization of what was being undertaken, he might have stopped immediately, or at least returned to his “friend” to ask what was going on. Of course the Ghost Hacker is, as his name says, a powerful hacker, and might have been able to override the visualization. But with no output at all, even novice hackers could dupe the unwitting.

REAL TIME FULL SCAN HACKING

[Image: GitS-cybrain-06]

When Section 9 monitors a cyborg’s brain for real-time evidence of hacking, we see a monitoring scan. It shows a screen-green wireframe brain floating at an oblique angle in a black space. A 2D rectangle repeatedly builds it with a “wipe” from front to back, leaving a dim 3D trail in its passing that describes the brain’s shape. Fans of the National Library of Medicine’s Visible Human Project may see similarities, though that project’s visualizations would not be available until a year after the film’s release.

In the upper left is a legend reading “REAL TIME FULL SCAN HACKING” with some numbers, with another unintelligible legend in the lower right. The values in the upper left never change, and the values in the lower legend change too rapidly to read. After a beat, a text overlay appears on the right-hand side of the screen with vaguely medical terms listed in all capital letters, flying by too quickly to read.* An additional device is seen in the corner of the frame, with progress-bar-like displays whose thick green lines wobble left and right. Two waveforms hang above this, their labels off screen. Yellow “fireworks” appear near the “temples” of the brain, indicating the parts under attack.

A question of usefulness

If data doesn’t change, or changes too fast to read, it’s worth asking whether the data should be shown at all. If it’s moving too fast, other representations might work better: a progress bar, a map, or a sparkline. Of course, many programmers use this kind of streaming output during the run of a program so that if the program halts, the last few activities are immediately visible, so this may be more code than interface.
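As a small example of the sparkline option, collapsing a stream that scrolls too fast to read into a single glanceable row of glyphs takes only a few lines. (A sketch, not any particular library’s API.)

```python
# Map a window of recent values onto eight block glyphs: one cheap
# way to make "too fast to read" data glanceable at a distance.
BARS = "▁▂▃▄▅▆▇█"

def sparkline(values: list[float]) -> str:
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero on flat data
    return "".join(BARS[int((v - lo) / span * (len(BARS) - 1))] for v in values)

print(sparkline([2, 4, 8, 5, 9, 3, 7, 6, 1]))  # ▁▃▇▄█▂▆▅▁
```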

*Vaguely-medical terms

If you’re the sort of nerd who obsesses over details, following is the text that flashes on the right-hand side of the display. There’s nothing in it that is really helpful or informative to a review. It’s mostly internal organs or parts of the brain, augmented with “CHECKS” and “CONNECTS.” There’s one exception, about halfway through the 5-second sequence, where it reads “M.YGODDESS CHECK.” Diegetically, it could be a programmer’s slang for a body part. More likely it’s a reference to Oh! My Goddess!, a manga by Kosuke Fujishima that’s been in print since 1988.

[Image: GitS-cybrain-07]

ACCESS
CHECK CONNECT
MOTOR FIBERS CHECK
CONNECT POINT NCL
NCL. AMBIGUOUS
SEARCH AN ARTFICIAL B
NCL. AMBIGUOUS CHECK
AN ARTIFICIAL BODY’S PO
GANGLION SUPERIUS CHECK
NO REJECTION
FORAMEN JUGULARE PAG
GANGLION INFERIUS
GANGLION INFERIUS
PROPER VOLTAGE
RAMIPHARMNGEI CAL.L.D
N. LARYNGEUS SUPERIOR
RAMIPHARYNGI CHECK
PLEXUS PHARYNGEUS CHECK
PLEXUS PHARYNGEUS CHECK
NEXT
M.LEVATOR VELI PALAT
MM.CONSTRICTORES PHA
CALLING…
M.LEVATOR VELI PALAT
MM.CONSTRICTORES PHA
CONNECT
N.LARYNGEUS SUPERIOR
N.LARYNGEUS RECURRE
RAMUS EXTERNUS CHECK
NEXT
M.CIRCOTHYROIDEUS
RAMIESOPHAGEI CALLIN
N.LARYNGEUS RECURRED
NO REJECTION
CHECK FEEDBACK TO
NCL. AMBIGUUS
RAMITRACHEALES CHEC
FEEDBACK TO NCL. AMBI
RAMIESOPHAGEI CHECK
NEXT
N.LARYNGEUS INFERIOR
CONNECT N.VAGUS MOTOR
CHECK OVER
EXTEROCEPTIVE SENSOR
CHECK STRAT
CONNECT POINT NCL
NCL. SPINALIS N TRIG
SEARCH AN ARTIFICAL B
NCL.SPINALIS N.TRIGG
CHECK
AN ARTIFICIAL BODY’S PO
TR.SPINALIS N. TIGGER
NO REJECTION
TR.SPINALIS N.TRIGE
CANALICULUS MASTOID
VISCEROMOTOR FIBERS
CANALICULUS MASTOIDS
CONNECT POINT NCL
NCL. DORSALIS N. VAGI
RAMUS AURICULARIS CH
CHECK FEEDBACK TO
NCL. SPINALIS N. TRIGEG
SEARCH AN ARTIFICIAL B
N. VAGUS ENERROCEPTIN
FEEDBACK TO
NCL. SPINALIS TRIGER
CHECK OVER
ANARTIFICAL BODY’S PO
NCL.DORSAL IS N. VAGI
GANGLION SUPERIUS
NO REJECTION
GANGLION SUPERIUS CH
FORAMEN JUGULARE PAS
GANGLION INFERIUS CHE
SAFETY CONNECT PROGR
RAMICORDIACICERVICA
CALLING…
RAMICORDIACICERVICA
NO REJECTION
NEXT
RAMICORDIACICERVICA
CALLING…
PLESUS CARDIACUS CAL
RAMICORDIACICERVICA
PLESUS CARDIACUS CHE
M. ATSUMO TOKAORU CHE
ATOMIC DISPOSITION C
M.YGODDESS CHECK
CHECK OVER
GUSTATORY FIBERS
CHECK STRAT
CONNECT POINT NCL.
NCL. SOLITARIUS
SEARCH AN ARTIFICAIAL B
NCL. SOLITARIUS CHECK
AN ARTIFICIAL BODY’S PO
GANGLION SUPERIUS
NO NOIZE
NEXT
GANGLION SUPERIUS CH
FORAMEN JUGULARE PRE
GANGLION INFERIUS CHE
GANGLION INFERIUS CHE
RAMIPHARYNGEI CALLING
RAMIPHARYNGEI CHECK
PLEXUS PHARYNGEUS CA
NO REJECTION
PLEXUS PHARYNGEUS CH
TASTE BUDS CALLING
CHECK FEEDBACK TO
NCL. SOLITARIUS
TASTE BUDS CONNECT
FEEDBACK NCL. SOLITAR
CHECK OVER
VISCEPOSENSORY FIBER
CHECK STRAT
CONNECT POINT NCL
NCL SOLITARIUS
SEACH AN ARTIFICIAL B
NCL. SOLITARIUS CHECK
AN ARTIFICIAL BODY’S PO
TRACTUS SOLITARIUS C
NO NOIZE
TRACTUS SOLITARIUS C
GANGLION SUPERIUS CA
NO REJECTION
GANGLION SUPERIUS CH
FORAMEN JUGULARE PAS
GANGLION INFERIUS CA
N.LARYNGEUS SUPERIOR
N.LARYNGEUS RECURRED
PLEXUS PULMONAL IS CA
N. LARYNGEUS RECURRED
RAMIESOPHAGUI CALLI
N. LARYNGEYS INFERIOR
RAMITRACHEALES SUPERIOR
RAMUS INTERNUS CALLI
PLEXUS INTERNUS CALLI
PLEXUS PULMONALIS CH
PLEXUS ESOPHAGEUS CA
RAMIESOPHAGEI CHECK
N.LARYNGEUS INFERIOR
PLEXUS EXOPHAGEUS CH
TRUNCUS VAGALIS POST
RAMITRACHEALES CHEC
TRUNCUS VAGALIS ANTE
RAMUS INTERNUS CHECK
VOCAL CORO CALLING
TRUNCUS VAGALIS POST
RAMICOEL CALLING
RAMIRENALES CALLING
TRUNCUS VAGALIS ANTE
RAMIHEPATICI CHECK
PLEXUS HAPATICUS CAL
RAMIGASTRICIPOSTER
RAMIRENALES CHECK
PLEXUS RENALIS CALLI
RAMICOELIACI CHECK
PLEXUS COELICUS CALL
RAMIHEPATICI CHECK
PLEXUSHEPATICUS CALL
RAMIGASTRICI ANTERIO
PLEXUS COELICUS CHEC
RAMI GASTRICIPOSTER
PLEXUS RENALIS CHECK
RAMIGASTRICI ANTERIO
CHECK FEEDBACK TO
BCL. SOLITARUS
PLEXUS HEPATICUS CHE
FEEDBACK TO NCL. SOLIT
VOCAL CORD CHECK
CHECK OVER
CHECK CONNECT
MOTOR FIBERS CHECK
CONNECT POINT NCL
NCL. AMBIGUUS
SEARCH AN ARTIFICAL B
NCL.AMBIGUOUS CHECK
AN ARTIFICAL BODY’S
GANGLION SUPERIUS CA
GANGLION SUPERIUS CH
NO REJECTION
FORAMEN JUGULARE PAS
GANGLION INFERIUS CAL
GANGLION INFERIUS CHE
PROPER VOLTAGE

Perpvision

[Image: GitS-heatvision-01]

The core of interaction design is the see-think-do loop that describes the outputs, human cognition, and inputs of an interactive system. A film or TV show spends time showing inputs without any corresponding output only when the users are background characters unrelated to the plot. But there are a few examples of outputs with no apparent inputs. These are hard to evaluate in a standard way, because the input is such a giant piece of the puzzle. Is it a brain input? Is the technology agentive? Is it some hidden input like Myo’s muscle sensing? Not knowing the input, a regular review is kind of pointless. All I can do is list the system’s effects and perhaps evaluate the outputs in terms of the apparent goals. Ghost in the Shell has several of these inputless systems. Today’s is Kusanagi’s heat vision.

Early in the film, Kusanagi sits atop a skyscraper, jacked in, wearing dark goggles, and eavesdropping on a conversation taking place in a building far below. As she looks down, she sees through the walls of the building in a scanline screen-green view that shows the people as bright green and furniture as a dim green, with everything else being black.

She adjusts the view by steps, zooming closer and closer until her field of vision is filled with the two men whose conversation she hears in her earpiece. When she hears mention of Project 2501, she thinks the message, “Major, Section 6 is ready to move in.” She reaches up to her right temple and clicks a button to turn the goggles off before removing them.

That’s nifty. But how did she set the depth of field and the extents (the frustum) of the display so that she sees only these people, and not everyone in the building below them? How does she tell the algorithm that she wants to see furniture but not floor? (Is it thermography? Is the furniture all slightly warm?) What is she doing to increase the zoom? If it’s jacked into her head, why must she activate it several times rather than just focusing on the object with her eyes, or specifying “that person there”? How did she set the audio? Why does the audio not change with each successive zoom? If the visuals and audio come from separate systems, how did she combine them?

Squint gestures

If I had to speculate what the mechanism should be, I would try to use the natural mechanisms of the eye itself. Let Kusanagi use a slight squint gesture to zoom in, and a slight widening of the eyelids to zoom out. This would let her maintain her gaze, maintain her silence, keep her body still, and keep her hands free.

The scene implies that her tools provide a set amount of zoom for each activation, but for very long distances that seems like it would be a pain. I would have the zoom automatically set itself to make the object on which she is focusing fill her field of vision, less some border, and then use squint gestures to change the zoom to the next logical thing. For instance, if she focused on a person, that person would fill her field of vision. A single widening might zoom out to show the couch on which they are sitting; another, the room. This algorithm wouldn’t be perfect, so you’d need some mechanism for arbitrary zoom. I’d say a squint or wide-eyed gesture held for a third of a second or so would trigger arbitrary zoom for as long as the gesture was maintained, with the zoom increasing logarithmically.
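To make the proposal concrete, here’s a minimal sketch of that hold-to-zoom behavior. The thresholds, hold time, and rate are invented; “aperture” stands in for a normalized eyelid opening from whatever gaze sensor the goggles use.

```python
SQUINT, WIDE = 0.8, 1.2   # eyelid apertures that count as gestures (1.0 = relaxed)
HOLD_S = 0.33             # hold a gesture this long to enter continuous zoom
RATE = 1.5                # zoom multiplies by 1.5x per held second

def continuous_zoom(zoom: float, aperture: float, held_s: float) -> float:
    """Exponential zoom while a squint (in) or widen (out) is held."""
    if held_s < HOLD_S:
        return zoom                   # too brief: the stepped "next logical
                                      # thing" zoom handles short gestures
    if aperture < SQUINT:
        return zoom * RATE ** held_s  # squint: zoom in
    if aperture > WIDE:
        return zoom / RATE ** held_s  # widen: zoom out
    return zoom

print(continuous_zoom(zoom=1.0, aperture=0.7, held_s=2.0))  # 2.25x
```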

As for the frustum, use the same smart algorithm to watch her gaze, and set the extents to include the whole of the subject and the context in which it sits.

Thermoptic camouflage

[Image: GitS-thermoptic-03]

Kusanagi is able to mentally activate a feature of her skintight bodysuit and hair(?!) that renders her mostly invisible. It does not seem to affect her face by default; after her suit has activated, she waves her hand over her face to hide it. We do not see how she activates or deactivates the suit in the first place; she seems able to do so at will. Since this is not based on any existing human biological capacity, a manual control mechanism would need some biological or cultural referent. The gesture she uses, covering her face with open-fingered hands, makes the most sense, since even with a bare hand it means, “I can see you but you can’t see me.”

In the film we see the Ghost Hacker using the same technology embedded in a hooded coat he wears. He activates it by pulling the hood over his head. This gesture makes a great deal of physical sense, similar to the face-hiding gesture. Donning a hood hides your most salient physical identifier, your face, so having it activate the camouflage is a simple synecdochic extension.

[Image: GitS-thermoptics-30]

The spider tank also features this same technology on its surface, where we learn how delicate that surface is: a rain of falling glass is enough to disable it.

[Image: GitS-spidertank-01]

This tech is less than perfect, distorting the background behind it and occasionally flashing with vigorous physical activity. And of course it cannot hide the effects the wearer creates in the environment, as we see with splashes in the water and citizens in a crowd being bumped aside.

Since this imperfection runs counter to the wearer’s goal, I’d design a silent, perhaps haptic, feedback to let wearers know when they’re moving too fast for the suit’s processors to keep up, as a reinforcement to whatever visual effects they themselves are seeing.

UPDATE: When this was originally posted, I used the incorrect concept “metonym” to describe these gestures. The correct term is “synecdoche,” and the post has been updated to reflect that.

Headrest jack

[Image: GitS-3Dscanner-010]

The jack mechanism in the intercept van is worth noting for its industrial design. Kusanagi has four jacks on the back of her neck in a square pattern. Four plugs sit in the headrest of her seat. To jack in, she simply leans back, and they seat perfectly. When she leans forward, the cables extend from the seat. Given the simple back-and-forward motion, it takes all of a second. Seems simple enough. But I’ve committed a blog post to it, so of course you can guess it’s not really that simple. I can see two issues with this interface.

How do the jacks and plugs meet so perfectly?

Of course, she’s a super cyborg, so we can presume she can be quite precise in her movements. But does she have eyes/cameras on the back of her head, or precision kinesthetics and a perfect body memory for position? Even if she does, it would be better to accommodate some margin of error, to account for bumpy roads or action-packed driving maneuvers.

How to do this? One way would be a countersink, so that a sloppy approach is corrected by shape. The popular (and difficult-to-source) keyhole for drunk people uses this same principle. Unfortunately, in the case of this headrest jack, the base object is Kusanagi’s neck, which is functionally a cylinder. The countersink cones on the back of her neck would have to be unsightly in their size, or a miss would splay the plugs and force her to retry. Fortunately, the second issue leads us to another solution.

[Image: keyhole]

How does she genuinely rest against the seat when she doesn’t want to jack in?

Is that even an option here? How does she simply lean back for a road trip nap without being blasted awake by a neon green 3D Google Map?

If it were a magnetic connection, like Apple’s MagSafe power connectors, the jacks and plugs could be designed such that magnetic forces pull them together. But unlike MagSafe, these jacks could be electromagnets controlled by Kusanagi. This would not only ensure intended connections, but also help deal with the precision issues raised above. The electromagnets would snap the plugs into place even if they were misaligned.

[Image: MagSafe]

An electromagnetic interface would also answer the question of how this works for taller or shorter cyborgs hoping to use the same headrest jack.

An automated solution

This solution does require complex mechanics in the body of the rider. That’s no problem in the Ghost in the Shell diegesis, but if we were facing a challenge like this in the real world, implanting users with tech isn’t a viable solution. Instead, we could push the technology back onto the van by letting it do the aiming. In the half second she leans back, the van can look through a camera in the headrest to gauge the fit and position the plugs correctly with, say, linear actuators. This solution lets human users stay human, but would ensure a precision fit where it’s needed.
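The control loop for that van-side aiming is simple to sketch. Everything here is hypothetical: the camera and actuator objects, their method names, and the half-millimeter tolerance.

```python
TOLERANCE_MM = 0.5   # assumed: how closely plugs must sit over the jacks

def align_plugs(camera, actuators, max_steps: int = 20) -> bool:
    """Nudge the plug carriage under camera guidance until it is centered."""
    for _ in range(max_steps):            # bail out rather than loop forever
        dx, dy = camera.jack_offset_mm()  # jacks' position relative to plugs
        if abs(dx) < TOLERANCE_MM and abs(dy) < TOLERANCE_MM:
            return True                   # centered: safe to extend and seat
        actuators.move_mm(dx, dy)         # correct, then look again
    return False                          # couldn't align: don't extend plugs
```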

Virtual 3D Scanner

[Image: GitS-3Dscanner-001]

Visualization

The film opens as a camera moves through an abstract, screen-green 3D projection of a cityscape. A police dispatch voice says,

“To all patrolling air units. A 208 is in progress in the C-13 district of Newport City. The airspace over this area will be closed. Repeat:…”

The camera floats to focus on two white triangles, which become two numbers, 267 and 268. The thuck-thuck sounds of a helicopter rotor appear in the background. The camera continues to drop below the numbers, but turns and points back up at them. When the view abruptly shifts to the real world, we see that 267 and 268 represent two police helicopters on patrol.

[Image: GitS-3Dscanner-008]

Color

The roads on the map of the city are a slightly yellower green, and the buildings are a brighter, more saturated green. Having all of the colors on the display be so similar certainly sets a mood for the visualization, but it doesn’t do a lot for its readability. Working with broader color harmonies would help a reader distinguish the elements and scan for particular things.
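For instance, a palette of easily distinguished hues can be generated by simply spacing them evenly around the color wheel. A sketch (the fixed saturation and value here are arbitrary choices, not anything from the film):

```python
import colorsys

def distinct_colors(n: int, s: float = 0.6, v: float = 0.9) -> list[str]:
    """n hues spaced evenly around the color wheel, as hex strings."""
    hexes = []
    for i in range(n):
        r, g, b = colorsys.hsv_to_rgb(i / n, s, v)
        hexes.append(f"#{int(r * 255):02x}{int(g * 255):02x}{int(b * 255):02x}")
    return hexes

print(distinct_colors(4))  # four evenly spaced hues: red, green, cyan, violet
```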

[Image: colorharmonies]

Perspective

The perspective of the projection is quite exaggerated. This serves partly as a modal cue to let the audience know that it’s not looking at some sort of emerald city, but it also hinders readability. The buildings are tall enough to obscure information behind them, and the extreme perspective makes it hard to understand their comparative heights or their relation to the helicopters, which is ostensibly the point of the screen.

[Image: perspectives]

There are two ways to access and control this display. The first is direct brain access. The second is by a screen and keyboard.

Brain Access

Kusanagi and other cyborgs can jack into the network and access this display. The jacks are in the backs of their necks, and as with most brain interfaces, there is no indication of what they’re doing with their thoughts to control the display. She also uses this jack interface to take control of the intercept van and drive it to the destination indicated on the map.

During this sequence the visual display is slightly different, removing the 3D information so that the route is unobscured. This makes sense for wayfinding tasks, though 3D might help with first-person navigation tasks.

[Image: GitS-3Dscanner-010]

Screen and keyboard access

While Kusanagi is piloting an intercept van, she is in contact with a Section 9 control center. Though the 3D visualization might have been dismissed up to this point as a film conceit, here we see that it is the actual visualization seen by people in the diegesis. The information workers at Section 9 Control communicate with agents in the field through headsets, type on specialized keyboards, and watch a screen that displays the visualization.

[Image: GitS-3Dscanner-036]

Their use is again a different mode of the visualization. The information workers are using it to locate the garbage truck. The first screens they see show a large globe with a white graticule and an overlay reading “Global Positioning System Ver 3.27sp.” Dots of different sizes are positioned around the globe. Triangles then appear, along with an overlay listing latitude, longitude, and altitude. Three other options appear in the lower right: “Hunting, Navigation, and Auto.” The “Hunting” option is highlighted with a translucent kelly green rectangle.

After a few seconds the system switches to focus on the large yellow triangle as it moves along screen-green roads. Important features of the road, like “Gate 13,” are labeled in a white serif font (a rarity in these interfaces), floating above the road in 3D but mostly facing the user, casting a shadow on the road below. The projected path of the truck is drawn in pea green. A kelly green rectangle bears the legend “Game 121 mile/h / Hunter->00:05:22 ->Game.” The speed indicator changes over time, and the time indicator counts down. As the intercept van approaches the garbage truck, the screen displays an all-caps label in the lower left, reading, somewhat cryptically, “FULL COURSE CAUTION !!!”

The most usable mode

Despite the unfamiliar language and unclear labeling, this “Hunter” mode looks to be the most functional. The color is better, replacing the green background with a black one to create a clearer separation of foreground and background for better focus. No 3D buildings are shown, letting the user keep her focus on the target and the path of intercept; the roads’ 3D relationships to one another are conveyed instead with shape and shadow. The camera angle is similar to a real-time-strategy angle of around 30 degrees from the ground, with a mild perspective that hints at the 3D but doesn’t distort.

[Image: GitS-3Dscanner-035]

Ghost in the Shell: Overview

Release Date: 18 November 1995, Japan

[Image: GitS_title]

Sometime in the “near future” (June 13, 2029, as the interfaces will tell you), a cybernetic assault team working for a mysterious government agency called Section 9 is hot on the trail of a hacker known as the Puppet Master. Their lead officer, Motoko Kusanagi, takes the team to chase down a garbage man who has had contact with the Puppet Master. Unfortunately, after the garbage man is captured, they learn his memories have been erased.

Elsewhere, a facility is hacked to produce a robotic female body, a gynoid. A consciousness is downloaded into the gynoid “shell” and it escapes the facility. When it is accidentally run down by a truck, Section 9 recovers the gynoid to learn if it contains the Puppet Master. Before they can do so, a competing agency called Section 6 storms in and takes it, explaining they had the shell made to lure the Puppet Master in. Section 6 agents load the gynoid into a car and speed away.

Suspicious, Section 9 investigates, only to discover that the Puppet Master is not a person but an artificial intelligence created by Section 6 to conduct illegal activities across the internet, including “ghost-hacking” into people’s minds. Kusanagi follows the Section 6 car to a hangar, where she confronts a powerful R-3000 “spider tank” guardian. She is almost killed, but survives to face the Puppet Master via a brain-to-brain link. In conversation with the Puppet Master, Kusanagi learns that it envies human mortality and the ability to reproduce, and that it wants to merge with her to create a new being. As they begin the process, Section 6 assaults the hangar, killing most everyone inside. One of Kusanagi’s team, Batou, manages to survive, rescue Kusanagi’s severed head, and escape, later attaching the head to a new robotic body, that of a female child. In the final scene, Kusanagi tells Batou that she has become, in fact, a blend of her former self and the Puppet Master AI.

[Image: GitS-009]

NOTE: In the book Make It So, the authors deliberately eschewed reviewing hand-drawn interfaces, for reasons explained in its first chapter. Though a full-scale foray into anime is not yet planned, this is a first step towards branching out from live-action and 3D-animated sci-fi to include more of it here in the online database.