How can direct manipulation work on objects that are too large to be directly manipulated?
Sci-fi University critically examines interfaces in sci-fi that illustrate core design concepts. In this 3:30-minute episode, Christopher discusses how the interfaces of Ghost in the Shell introduce synecdoche to our gestural language.
If you know someone who likes anime, and is interested in natural user interfaces—especially gesture—please share this video with them.
San Francisco Bay Area folks may have been wondering what was up with the Ghost in the Shell 20th anniversary movie night. Well, some bad news.
There weren’t enough pre-sales to rent the cinema. We might have just run it as a public showing, but the cinema could not find a way to secure the rights for a public showing despite best efforts and the use of Google Translate on promising Japanese sites. You might think in that case that you could just show it anyway, but the owners cited a story in which independent filmmakers once had to fork over a cool $8000 for an unauthorized showing of a film, even when the normal licensing was only $200. So, without licensing, no public showing. But that doesn’t have to stop us. We have technology.
A synchronized home viewing of Ghost in the Shell
I’m watching Ghost in the Shell on Saturday, 28 March 2015, starting at 20:30 PDT. I may have a few friends over. Want to join? Well, my couch will likely be full, but get a copy of the movie yourself on Blu-ray, Amazon Instant Video, or Netflix DVD (not available streaming through Netflix at this time), and we can live tweet the event. I’ve just launched the Twitter handle @SFImovienight, where we can coordinate.
Let’s celebrate the 20th anniversary of this awesome, hand-drawn anime title that features some amazingly foresightful wearable tech. The show will be at the New Parkway cinema in Oakland, California on Thursday March 26th at 7PM. As usual there will be an awesome preshow with an analysis of one of the interfaces, a mobile-phone trivia contest to win GitS t-shirts, a possible 30-finger race (if we get enough people and I can make the apparatus), and your ticket includes you in a raffle for one of the year-long Creative Cloud subscriptions (a $600 value) provided from my in-kind sponsor Adobe. Join Major Motoko Kusanagi in her mind expanding search for the Puppet Master, and please spread the word to your friends and mid-1990s anime fans!
The movie Ghost in the Shell came out a full 6 years after Masamune Shirow’s cyberpunk manga “Mobile Armored Riot Police” was serialized, and preceded the two anime television series Ghost in the Shell: Stand Alone Complex and Ghost in the Shell: S.A.C. 2nd GIG. The entire franchise is massive and massively influential. It’s fair enough to say that this film might be the worst in the series to evaluate, but I have to start somewhere, and the beginning is the most logical place.
Sci: B+ (3 of 4) How believable are the interfaces?
The main answer to this question has to do with whether you believe that artificial intelligence, and its related concept of machine sentience, are possible. Several concepts rely on that given.
When Section 9 launches an assault on the Puppet Master’s headquarters, Department Chief Aramaki watches via a portable computer. It looks and behaves much like a modern laptop, with a heavy base that connects via a hinge to a thin screen. This shows him a live video feed.
The scan lines on the feed tell us that the cameras are diegetic, something Aramaki is watching, rather than the "camera" of the movie we as the audience are watching. These cameras must be placed in many places around the compound: behind the helicopter, following the Puppet Master, to the far right of the Puppet Master, and even floating far overhead. That seems a bit far-fetched until you remember that there are agents all around the compound, and Section 9 has the resources to outfit all of them with small cameras. Even the overhead camera could be an unnoticed helicopter equipped with a high-powered telephoto lens. So it stretches believability, but not beyond the bounds of possibility. My main question is, given these cameras, who is doing the live editing? Aramaki’s view switches dramatically between these views as he’s watching, with no apparent interaction.
A clue comes from his singular interaction with the system. When a helicopter lands in the lawn of the building, Aramaki says, "Begin recording," and a blinking REC overlay appears in the upper left and a timecode overlay appears in the lower right. If you look at the first shot in the scene, there is a soldier next to him hunched over a different terminal, so we can presume that he’s the hands-on guy, executing orders that Aramaki calls out. That same tech can be doing the live camera switching and editing to show Aramaki the feed that’s most important and relevant.
That idea makes even more sense knowing that Aramaki is a chief, and his station warrants spending money on an ever-present human technician.
Sometimes, as in this case, the human is the best interface.
Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens to the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barrel rifles mounted to posts. In these helmets they have full audio access to a command and control center that gives orders and receives confirmations.
The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.
These helmets also provide the snipers an augmented reality display that grants high-powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of "gyroscopic stabilization" and a red dot that appears in the crosshairs when the target has been held for a full second. The reticles do not provide any "layman" information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the cardiovascular interference of the snipers, though no details are given as to how.
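That red-dot behavior is a classic dwell-to-lock pattern: the indicator appears only after the reticle has held the target continuously for a set interval, and losing the target resets the timer. Here is a minimal sketch of that logic in Python; the class name, threshold constant, and per-frame update API are my own hypothetical constructions, not anything specified in the film.

```python
DWELL_SECONDS = 1.0  # the film suggests a full second on target (assumption)

class LockIndicator:
    """Shows a red 'locked' dot only after the reticle has held the
    target continuously for the dwell interval."""

    def __init__(self, dwell: float = DWELL_SECONDS):
        self.dwell = dwell
        self.held_since = None  # time the target first entered the crosshairs

    def update(self, on_target: bool, now: float) -> bool:
        """Call once per frame. Returns True when the dot should be drawn."""
        if not on_target:
            # Losing the target resets the dwell timer entirely.
            self.held_since = None
            return False
        if self.held_since is None:
            self.held_since = now
        return (now - self.held_since) >= self.dwell
```

The key design choice is the reset on loss of target: a momentary drift off the target means the sniper must re-earn the lock, which is what makes the dot a trustworthy signal.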
These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then, we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with a high (though not full) freedom of movement and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.
This could be used for general surveillance of course, but the collection of technologies that we see here raises the question: If Section 9 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?
Of course you want a human to make the choice to pull a trigger/activate a weapon, because we should not leave such a terrible, ethical, and deadly decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.
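That division of labor, where the machine handles the targeting solution but the human makes the kill decision, amounts to a simple two-key gate: neither a machine-confirmed lock nor a trigger press alone is sufficient to fire. A speculative sketch, with all names my own invention:

```python
from dataclasses import dataclass

@dataclass
class AimSolution:
    """A machine-computed firing solution (hypothetical structure)."""
    azimuth: float
    elevation: float
    on_target: bool  # the machine's judgment that the aim is good

def fire_control(solution: AimSolution, trigger_pressed: bool) -> bool:
    """Fire only when BOTH the machine confirms the solution AND a human
    has deliberately pressed the trigger. Neither alone suffices."""
    return solution.on_target and trigger_pressed
```

The ethical constraint lives in the conjunction: the machine can refuse a bad shot, but it can never originate one.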
This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: How are heroes heroic if the machines can do the hard work? This interface bypasses the conflict by retreating to simple augmentation rather than offering an agentive solution. Real-world designers will have to answer the question more directly.
When trying to understand the Puppet Master, Kusanagi’s team consults with their staff Cyberneticist, who displays for them in his office a volumetric projection of the cyborg’s brain. The brain floats free of any surrounding tissue, underlit in a screen-green translucent monochrome. The edge of the projection is a sphere that extends a few centimeters out from the edge of the brain. A pattern of concentric lines routinely passes along the surface of this sphere. Otherwise, the "content" of the VP, that is, the brain itself, does not appear to move or change.
The Cyberneticist explains, while the team looks at the VP, "It isn’t unlike the virtual ghost-line you get when a real ghost is dubbed off. But it shows none of the data degradation dubbing would produce. Well, until we map the barrier perimeter and dive in there, we won’t know anything for sure."
The VP does not appear to be interactive; it’s just an output. In fact, it’s just an output of the surface features of a brain. There’s no other information called out, no measurements, no augmenting data. Just a brain. Which raises the question: what purpose does this projection serve? Narratively, of course, it tells us that the Cyberneticist is getting deep into the neurobiology of the cyborg. But he doesn’t need that information. Kusanagi’s team doesn’t even need that information. Is this some sort of screen saver?
And what’s up with the little ripples? It’s possible that these little waves are more than just an artifact of the speculative technology’s refresh. Perhaps they’re helping to convey that a process is currently underway, perhaps the "mapping of the barrier perimeter." But if that were the case, the Cyberneticist would want to see some sense of progress against a goal: at the very least, how much time has elapsed, and how much time is estimated before the mapping is complete.
Of course any trained brain specialist would gain more information from looking at the surface features of a brain than we laypersons could. But if he’s really using this to do such an examination, the translucency and peaked, saturated color make that task prohibitively harder than just looking at the real thing an office away, or at a photograph, not to mention the routine rippling occlusion of the material being studied.
Unless there’s something I’m not seeing, this VP seems as useless as an electric paperweight.
When the anonymous Section 6 operatives infiltrate and attack Section 9 to abduct what remains of the cybernetic body housing Project 2501, you’d think the last thing on their mind would be courteous driving. Yet when they are fleeing Togusa’s mighty mullet-fueled pistol rage, we see a surprisingly polite feature of their car.
Speeding along, they come to a cross-alley where they nearly run into a passing garbage truck. They slam on their brakes, and reverse the car to give the truck some room. When they’re reversing, a broad red panel on the back of the vehicle illuminates the English word “BACK.”
The signal disappears when the brake is pressed and the entire panel glows bright red.
We see the rear end of other vehicles throughout the movie, and none even have the display surface to present such a signal. Even Batou’s ride (and he’s a badass) lacks anything like a large display surface.
This is unique in the film to this vehicle. It seems that yes, Section 6 is not only trying to cover the tracks that lead to the artificial intelligence it has created, but is driving the most polite getaway car ever while doing it.
To be clear: This is a bad idea
First, of course, driving around in a unique vehicle goes against the whole plan of trying to get away. So, there’s that.
Secondly, why is it in English? We see a lot of signage in the movie, and it’s all Chinese (tip o’ the hat to commenter Don for helping me identify the characters), so this is another conspicuous signal. We do see broken-English labels on the virtual 3D scanner, so English in software interfaces is not unheard of here.
Lastly, it’s unsafe. In traffic accidents, split seconds of delay can be deadly, and reading is a slower process than just seeing. The more common white (or amber in the antipodes) reversing lamps are a much more arresting, immediate, and safe signal to the drivers behind you, and so would make a much better choice.
Section 6 stations a spider tank, hidden under thermoptic camouflage, to guard Project 2501. When Kusanagi confronts the tank, we see a glimpse of the video feed from its creepy, metal, recessed eye. This view is a screen-green image, overlaid with two reticles. The larger one, with radial ticks, shows where the weapon is pointing, while the smaller one tracks the target.
I have often used the discrepancy between a weapon reticle and a target reticle to point out how far behind Hollywood is on the notion of agentive systems in the real world, but for the spider tank it’s very appropriate. The image processing is likely to be much faster than the actuators controlling the tank’s position and orientation. The two reticles illustrate what the tank’s AI is working on. That said, I cannot work out why there is only one weapon reticle when the tank has two barrels that move independently.
When the spider tank expends all of its ammunition, Kusanagi activates her thermoptic camouflage, and the tank begins to search for her. It switches from its protected white camera to a big-lens blue camera. On its processing screen, the targeting reticle disappears, and a smaller reticle appears with concentric, blinking white arcs. As Kusanagi strains to wrench open plating on the tank, her camouflage is compromised, allowing the tank to focus on her (though curiously, not to do anything like try to shake her off or slam her into a wall). As its confidence grows, more arcs appear, thicken, and circle the center.
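The arc display reads like a straightforward mapping from tracking confidence to the number of arcs drawn. A speculative sketch of that mapping; the 0-to-1 confidence scale and the five-arc maximum are my assumptions, not anything shown on screen:

```python
MAX_ARCS = 5  # hypothetical: the film shows a handful of concentric arcs

def arcs_for_confidence(confidence: float, max_arcs: int = MAX_ARCS) -> int:
    """Map a 0..1 tracking confidence to a count of arcs to draw.
    Out-of-range confidence values are clamped rather than rejected."""
    confidence = min(max(confidence, 0.0), 1.0)
    return round(confidence * max_arcs)
```

A display like this makes the machine's internal certainty legible at a glance, which matters more for a human observer than for the tank itself.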
The amount of information on the augmentation layer is arbitrary, since it’s a machine using it and there are certainly other processes going on beyond what is visualized. If this were for a human user, more or less augmentation might be necessary, depending on the amount of training the user has and the goal awareness of the system. Certainly actual crosshairs in the weapon reticle would help aim it very precisely.