Sci-fi University Episode 2: Synecdoche & The Ghost in the Shell

How can direct manipulation work on objects that are too large to be directly manipulated?

Sci-fi University critically examines interfaces in sci-fi that illustrate core design concepts. In this 3:30 episode, Christopher discusses how the interfaces of Ghost in the Shell introduce synecdoche to our gestural language.

If you know someone who likes anime, and is interested in natural user interfaces—especially gesture—please share this video with them.

Special ありがとう to Tom Parker for his editing.

Ghost in the Shell: Home viewing

San Francisco Bay Area folks may have been wondering what was up with the Ghost in the Shell 20th anniversary movie night. Well, some bad news.

GitS-Aramaki-11

There weren’t enough pre-sales to rent the cinema. We might have just run it as a public showing, but the cinema could not find a way to secure the rights for a public showing despite best efforts and the use of Google Translate on promising Japanese sites. You might think in that case that you could just show it anyway, but the owners cited a story in which independent filmmakers once had to fork over a cool $8,000 for an unauthorized showing of a film, even though the normal licensing fee was only $200. So, without licensing, no public showing. But that doesn’t have to stop us. We have technology.

GitS-Hands-06

A synchronized home viewing of Ghost in the Shell

I’m watching Ghost in the Shell on Saturday, 28 March 2015, starting at 20:30 PDT. I may have a few friends over. Want to join? Well, my couch will likely be full, but get a copy of the movie yourself on Blu-ray, Amazon Instant Video, or Netflix DVD (not available for streaming on Netflix at this time), and we can live tweet the event. I’ve just launched the Twitter handle @SFImovienight, where

  • I will announce upcoming movie nights
  • I will track movie night requests
  • I will live tweet movies as we’re watching
  • Anyone else watching along can join in

The hashtag for this viewing will be #gits20.

Yes, a contest

Since this won’t be a live event, let’s shake the contest up a bit. No trivia. The tweet that:

  • Includes #gits20
  • Tags @SFImovienight
  • Gets the most retweets

…between now and 28 March 2015 23:00 PDT will win its author a one-year Adobe Creative Cloud license (a $600 value), offered by in-kind sponsor Adobe.

Has anyone tried this before? Have suggestions?

Scifiinterfaces.com presents the 20th anniversary of Ghost in the Shell at the New Parkway

GitS-heatvision-01

UPDATE (21 MAR): Owing to some licensing complications, the event cannot be held publicly. But we’re nerds. That doesn’t need to stop us.

Let’s celebrate the 20th anniversary of this awesome, hand-drawn anime title that features some amazingly foresightful wearable tech. The show will be at the New Parkway cinema in Oakland, California, on Thursday, March 26th, at 7 PM. As usual there will be an awesome preshow with an analysis of one of the interfaces, a mobile-phone trivia contest to win GitS t-shirts, and a possible 30-finger race (if we get enough people and I can make the apparatus), and your ticket enters you in a raffle for one of the year-long Creative Cloud subscriptions (a $600 value) provided by my in-kind sponsor Adobe. Join Major Motoko Kusanagi in her mind-expanding search for the Puppet Master, and please spread the word to your friends and mid-1990s anime fans!

Report Card: Ghost in the Shell

GitS-Report-Card

The movie Ghost in the Shell came out a full six years after Masamune Shirow’s cyberpunk manga “Mobile Armored Riot Police” was serialized, and preceded the two anime television series Ghost in the Shell: Stand Alone Complex and Ghost in the Shell: S.A.C. 2nd GIG. The entire franchise is massive and massively influential. It’s fair to say that this film might be the worst entry in the series to evaluate, but I have to start somewhere, and the beginning is the most logical place.

Sci: B+ (3 of 4) How believable are the interfaces?

The main answer to this question has to do with whether you believe that artificial intelligence, and its related concept of machine sentience, is possible. Several of the film’s concepts rely on that given.


Siege Support

GitS-Aramaki-03

When Section 9 launches an assault on the Puppet Master’s headquarters, Department Chief Aramaki watches via a portable computer. It looks and behaves much like a modern laptop, with a heavy base that connects via a hinge to a thin screen. This shows him a live video feed.

The scan lines on the feed tell us that the cameras are diegetic, something Aramaki is watching, rather than the "camera" of the movie we as the audience are watching. These cameras must be placed in many spots around the compound: behind the helicopter, following the Puppet Master, to the far right of the Puppet Master, and even floating far overhead. That seems a bit far-fetched until you remember that there are agents all around the compound, and Section 9 has the resources to outfit all of them with small cameras. Even the overhead camera could be an unnoticed helicopter equipped with a high-powered telephoto lens. So it stretches believability, but not beyond the bounds of possibility. My main question is, given these cameras, who is doing the live editing? Aramaki’s view switches dramatically between these feeds as he’s watching, with no apparent interaction.

A clue comes from his singular interaction with the system. When a helicopter lands on the lawn of the building, Aramaki says, "Begin recording," and a blinking REC overlay appears in the upper left and a timecode overlay appears in the lower right. If you look at the first shot in the scene, there is a soldier next to him hunched over a different terminal, so we can presume that he’s the hands-on guy, executing orders that Aramaki calls out. That same tech could be doing the live camera switching and editing, showing Aramaki whichever feed is most important and relevant.
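As a thought experiment, here is a minimal sketch (in Python, with entirely hypothetical names; the film shows nothing of the system’s internals) of how a spoken "Begin recording" order might toggle the REC and timecode overlays on the feed:

    import time

    class FeedOverlay:
        """Hypothetical recording overlay for Aramaki's portable terminal."""

        def __init__(self):
            self.recording = False
            self.start_time = None

        def handle_order(self, spoken_text):
            # The hands-on technician (or a speech recognizer) maps the
            # chief's spoken order to a system action.
            if spoken_text.strip().lower() == "begin recording":
                self.recording = True
                self.start_time = time.monotonic()

        def render(self):
            # Compose the text drawn over the live feed: a blinking REC in
            # the upper left, a running timecode in the lower right.
            if not self.recording:
                return ""
            elapsed = int(time.monotonic() - self.start_time)
            hours, rem = divmod(elapsed, 3600)
            minutes, seconds = divmod(rem, 60)
            rec = "REC" if elapsed % 2 == 0 else "   "
            return f"{rec}  {hours:02d}:{minutes:02d}:{seconds:02d}"

The point of the sketch is only that the overlay is trivial to automate; the interesting design decision is who, or what, issues the commands.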

GitS-Aramaki-11

That idea makes even more sense knowing that Aramaki is a chief, and his station warrants spending money on an ever-present human technician.

Sometimes, as in this case, the human is the best interface.

Section No6’s crappy sniper tech

GitS-Drone_gunner-01

GitS-Drone_gunner-12

Section 6 sends helicopters to assassinate Kusanagi and her team before they can learn the truth about Project 2501. We get a brief glimpse of the snipers, who wear full-immersion helmets with a large lens on the front of one side, connected by thick cables to ports in the roof of the helicopter. The snipers have their hands on long-barreled rifles mounted to posts. In these helmets they have full audio access to a command and control center that gives orders and receives confirmations.

GitS-profile-06

The helmets feature fully immersive displays that can show abstract data, such as the profiles and portraits of their targets.

GitS-Drone_gunner-06

GitS-Drone_gunner-07

These helmets also provide the snipers an augmented reality display that grants high-powered magnification views overlaid with complex reticles for targeting. The reticles feature a spiraling indicator of "gyroscopic stabilization" and a red dot that appears in the crosshairs once the target has been held there for a full second. The reticles do not provide any "layman" information in text, but rely solely on simple shapes that a well-trained sniper can see rather than read. The whole system has the ability to suppress the snipers’ cardiovascular interference, though no details are given as to how.
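To make the red-dot behavior concrete, here is a minimal sketch (Python, hypothetical names; nothing in the film confirms the actual logic) of a dwell-time check that lights the dot only after the target has stayed in the crosshairs for a full second:

    import time

    class TargetingReticle:
        """Hypothetical dwell-time logic for the snipers' red-dot indicator."""

        DWELL_SECONDS = 1.0  # how long the target must stay in the crosshairs

        def __init__(self):
            self._dwell_started = None

        def update(self, target_in_crosshairs, now=None):
            """Return True when the red dot should be shown."""
            now = time.monotonic() if now is None else now
            if not target_in_crosshairs:
                self._dwell_started = None   # target drifted out; reset the clock
                return False
            if self._dwell_started is None:
                self._dwell_started = now    # target just entered the crosshairs
            return (now - self._dwell_started) >= self.DWELL_SECONDS

A dwell requirement like this is a sensible choice on a shaking platform: it keeps the indicator from flickering every time the crosshairs momentarily pass over the target.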

These features seem provocative, and a pretty sweet setup for a sniper: heightened vision, suppression of interference, aiming guides, and signals indicating a key status. But then we see a camera on the bottom of the helicopter, mounted with actuators that allow it to move with a high (though not full) degree of freedom and precision. What is it there for? It wouldn’t make sense for the snipers to be using it to aim. Their eyes are in the direction of their weapons.

GitS-Drone_gunner-02

This could be used for general surveillance, of course, but the collection of technologies we see here raises the question: if Section 6 has the technology to precisely control a camera, why doesn’t it apply that to the barrel of the weapon? And if it has the technology to know when the weapon is aimed at its target (showing a red dot), why does it let humans do the targeting?

Of course you want a human to make the choice to pull a trigger or activate a weapon, because we should not leave such a terrible, ethically fraught, and deadly decision to an algorithm, but the other activities of targeting could clearly be handled, and handled better, by technology.

This again illustrates a problem that sci-fi has had with tech, one we saw in Section 6’s security details: how are heroes heroic if the machines can do the hard work? This interface bypasses the conflict by retreating to simple augmentation rather than an agentive solution. Real-world designers will have to answer the question more directly.