Kusanagi is able to mentally activate a feature of her skintight bodysuit and hair(?!) that renders her mostly invisible. It does not seem to affect her face by default; after the suit has activated, she waves her hand over her face to hide it. We never see how she activates or deactivates the suit in the first place, but she seems able to do so at will. Since this is not based on any existing human biological capacity, a manual control mechanism would need some biological or cultural referent. The gesture she uses, covering her face with open-fingered hands, makes the most sense, since even performed with just the hands it means, “I can see you but you can’t see me.”
In the film we see the Ghost Hacker using the same technology embedded in a hooded coat he wears. He activates it by pulling the hood over his head. This gesture makes a great deal of physical sense, similar to the face-hiding gesture: donning a hood hides your most salient physical identifier, your face, so having it activate the camouflage is a simple synecdochic extension.
The spider tank also features this same technology on its surface, where we learn it is delicate: a rain of falling glass disables it.
The tech is less than perfect, distorting the background behind it and occasionally flashing during vigorous physical activity. And of course it cannot hide the effects the wearer creates in the environment, as we see with splashes in the water and citizens in a crowd being bumped aside.
Since this imperfection runs counter to the wearer’s goal, I’d design a silent feedback channel, perhaps haptic, to let wearers know when they’re moving too fast for the suit’s processors to keep up, reinforcing whatever visual effects they themselves are seeing.
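That feedback design could be sketched as a simple tiered check: stay silent while the suit keeps up, and escalate the haptic warning as the wearer outpaces it. Everything here is invented for illustration, including the function name, the speed threshold, and the tier labels:

```python
# Hypothetical sketch: silently warn a thermoptic-camouflage wearer, via
# haptic pulses, when their movement outpaces what the suit can re-render.
# The threshold (3.0 m/s) and all names are assumptions, not canon.

def camouflage_feedback(speed_m_s: float, max_tracked_speed: float = 3.0) -> str:
    """Return which feedback channel to fire for a given movement speed."""
    if speed_m_s <= max_tracked_speed:
        return "none"            # suit keeps up; stay silent
    elif speed_m_s <= max_tracked_speed * 1.5:
        return "gentle_pulse"    # mild haptic nudge: slow down a little
    else:
        return "strong_pulse"    # camouflage visibly failing; urgent warning

print(camouflage_feedback(2.0))  # none
print(camouflage_feedback(4.0))  # gentle_pulse
print(camouflage_feedback(6.0))  # strong_pulse
```

The point of the tiers is that the warning itself stays covert: a pulse against the skin tells the wearer what an observer might be seeing, without adding any light or sound for that observer to notice.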
UPDATE: When this was originally posted, I used the incorrect concept “metonym” to describe these gestures. The correct term is “synecdoche,” and the post has been updated to reflect that.
Is there any information coming out of research like mind-operated prosthetic arms, or the brain-wave-to-text systems in labs, about distinctive thought patterns that can be reliably picked out of otherwise typical brain activity?
I’m thinking about something like a taught mnemonic device that would activate certain implants or external devices (like the sweatshirt/cloak). They would be similar to a macro button on a computer or keyboard: six or seven easy-to-remember patterns that you focus on to activate something.
Why unlock your phone, navigate to your mail, click the icon, then wait for it to open, when you could think “Red-Dog-Z” and activate your ‘mail’ macro for your phone?
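The “thought macro” idea above amounts to a small lookup table: a handful of memorized mnemonic patterns, each bound to a device action, the way a macro key maps one press to a longer sequence. A minimal sketch, assuming the brain-computer interface has already decoded the pattern into a string (all pattern names and actions here are invented):

```python
# Hypothetical thought-macro registry: maps decoded mnemonic patterns to
# device actions. "red-dog-z" comes from the comment above; the other
# entries and the action names are made up for illustration.

macros = {
    "red-dog-z": "open_mail",
    "blue-cat-7": "activate_cloak",
    "gold-owl-k": "unlock_door",
}

def run_macro(decoded_pattern: str) -> str:
    """Dispatch a brain-decoded mnemonic pattern to its bound action."""
    action = macros.get(decoded_pattern.lower())
    return action if action else "no_match"  # unrecognized thought: do nothing

print(run_macro("Red-Dog-Z"))  # open_mail
print(run_macro("humming a tune"))  # no_match
```

The deliberately odd three-token mnemonics serve the same purpose here as in the comment: patterns unusual enough that they won’t be triggered by ordinary, everyday thinking.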
There is. I worked with one at Rock Health a few years ago, and DARPA has a brain-controlled prosthetic: http://www.theverge.com/2013/5/31/4382366/darpa-tmr-mind-controlled-prosthetics-sensory-feedback. Wikipedia only lists six brainwave types that might be used for activation (look up alpha waves and see the list at the bottom), but the brain must be more plastic than that.
Years ago (probably when I was in high school, so 10+ years ago?) I read an extremely interesting Wired article on interfacing a computer directly to the human brain via a direct link to the nervous system. It was surreal. They went as far as letting the subject, who was completely paralyzed, move a mouse cursor to type on an on-screen keyboard. It was like The Matrix. Only, in real life.
I expected to see a lot more development in this area, as I thought this kind of *no-UI* interface was the ultimate future, but I’m not seeing much progress… The last thing I heard was someone manipulating a prosthetic arm or hand to grasp a ball with electrical signals sent from the brain down through the arm…
I think the future will be almost like what’s imagined in Ghost in the Shell: people will interact with computers directly by plugging themselves into machines. Hell, why a physical plug? Everyone will be wired, without wires :d