Mordo wears the Vaulting Boots of Valtor throughout the movie and first demonstrates their use to Dr. Strange when they are sparring. The Boots allow the user to walk, run, or jump on air as if it were solid ground.
When activated, the sole of each boot creates a circular field of force in anticipation of a footfall in midair, as if creating free-floating stepping stones.
How might this work as tech?
The main interaction design challenge is how the wearer indicates where he wants a stepping-stone to appear. The best solution is to let Mordo’s footfall location and motion inform the boots when and where he expects there to be a solid surface. (Anyone who has stumbled while misjudging the height or location of a step on a stairway knows how differently you treat a step where you expect there to be solid footing.)
If this were a technological device, sensors within the boots would retain a detailed history of the wearer’s stride for all possible speeds and distances of movement. The boots would detect muscle tension and flexion, combined with the wearer’s direction and velocity, to accurately predict the placement of each step, and then insert an appropriately elevated and angled stepping stone. The boots would know the difference between styles of movement such as walking, running, and sprinting, and behave accordingly.
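To make that prediction concrete, a minimal sketch might look like the following. Everything here is invented for illustration: the gait thresholds, the default stride lengths, and the half-meter cap on rise per step (matching the “stairs two or three at a time” limit above) are all assumptions, not anything specified in the film.

```python
import math
from dataclasses import dataclass

@dataclass
class StrideProfile:
    """Per-wearer stride lengths (meters), learned from stride history."""
    walk: float = 0.7
    run: float = 1.5
    sprint: float = 2.2

def classify_gait(speed_mps: float) -> str:
    """Crude gait classification from horizontal speed (thresholds are guesses)."""
    if speed_mps < 2.0:
        return "walk"
    if speed_mps < 5.0:
        return "run"
    return "sprint"

def predict_step(position, velocity, profile):
    """Predict where the next stepping-stone should appear.

    Extrapolates one stride along the wearer's direction of travel,
    and raises or lowers the stone with vertical velocity, capped at
    roughly half a meter per step (no superhuman climbing).
    """
    x, y, z = position
    vx, vy, vz = velocity
    speed = math.hypot(vx, vy)
    if speed == 0:
        return position  # standing still: stone directly underfoot
    stride = getattr(profile, classify_gait(speed))
    ux, uy = vx / speed, vy / speed  # unit direction of travel
    rise = max(min(vz * (stride / speed), 0.5), -0.5)
    return (x + ux * stride, y + uy * stride, z + rise)
```

A real version would of course weigh muscle tension and flexion sensors rather than raw velocity, but the structure (classify the gait, look up the learned stride, extrapolate one step ahead) would be similar.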
As a result, Mordo could always remain upright and stable regardless of his intended direction or how high he had climbed. And while Mordo may be a sorcerer with exceptional physical training, he isn’t superhuman. With the power of the boots, he can only run and climb as high as he normally could if, for example, he were taking a set of stairs two or three at a time.
As a magical device, the intelligence imbued in the boots is limited to sensing the sorcerer’s intent and knowing where to place each force-field stepping-stone.
This staff appears to be made of wood and is approximately a meter long in its normal form. When activated by Mordo it has several powers. With a strong pull on both ends, the staff expands into a jointed energy nunchaku. It can also extend to an even greater length, like a bullwhip. When it strikes a solid object such as a floor, it releases a loud crack of energy. Too bad we only ever see it in demo mode.
How might this work as technology?
The staff is composed of concentric rings within rings of material similar to a collapsing travel cup. This allows the device to expand and contract in length. The handle would likely contain the artificial intelligence and a power source that activates when Mordo gives it a gestural command, or if we’re thinking far future, a mental one. There might also be an additional control for energy discharge.
In the movie, sadly, Mordo does not use the Staff to its best effect, especially when Kaecilius returns to the New York sanctum. Mordo could easily have disrupted the spell the disciples were casting by using the staff as a whip, but instead he leaps off the balcony to physically attack them. Dude, you’re the franchise’s next Big Bad? But let’s set aside the character’s missteps to look at the interface.
Mode switching and inline meta-signals
Any time you design a thing with modes, you have to design the state changes between those modes. Let’s look at how Mordo moves between staff, nunchaku, and whip in this short demonstration scene.
In homage to the wrap of Children of Men, in this post I’m sharing an interview with Mark Coleran, a sci-fi interface designer who worked on the film. He also coined the term FUI, which is no small feat. He’s had a fascinating trajectory from FUI, to real-world design here in the Bay Area, and very soon, back to FUI again. Or maybe games.
I’d interviewed Mark way back in 2011 for a segment that got edited out of the final Make It So book, so it’s great to be able to talk to him again for a forum where I know it will be published: scifiinterfaces.com.
This interview has been edited for clarity and length.
Tell us a bit about yourself.
So obviously my background is in sci-fi interfaces, the movies. I spent around 10 years doing that from 1997 to 2007. Worked on a variety of projects ranging from the first one, which was Tomb Raider, through to finishing off the last Bourne film, Bourne Ultimatum.
My experience of working in films has been coming at it from the angle of loving the technology, loving the way machines work, and trying to expose that, to make it quite genuine. That’s what I got a name for in the industry: trying to create a more realistic side of interfaces.
Why is it hard to create FUI that would also work in the real world?
It’s because most people have no idea what an interface is, or what it’s supposed to be. From the person watching, to the actor using it, the person designing, the person writing, the person directing, they don’t really know why it is there. This is the fundamental problem with the idea of sci-fi interfaces: they’re not interfaces. What they are are plot visualizations. They’re there to illustrate or demonstrate something happening, or something that has happened. Or to connect two people together in space.
So the work of the FUI designer is, working quickly, to fulfill the script, the plot point. Secondarily you consider the style of set design, context, story segment, things like that. That’s not the way things get made in the real world. Film UX and film UI are very much two separate things.
Consider this. If we made things that worked for actors to use on set, the second that actor starts using something, they stop performing, they stop acting. So we can’t make something they actually use during filming. We have to play man behind the curtain, controlling the interface, matching their performance. That allows us to tell the actors, “Do not think about it, just do it. Just do your acting.” So when you see incoherent mashing on the keys and senseless clicking or mouse movement, it’s because we told them to do that.
Imagine how dull it would be to watch a film of a real person trying to figure out real software. There’s a line of realism you can’t cross. You don’t want a genuine database lookup of a police suspect. It’s a user experience problem wrapped in a user experience problem.
Let’s talk specifically about Children of Men. It’s now 10 years old. What do you think of when you look back on that work?
It was a really brief job, I only spent two weeks on the entire thing. It was a subcontract by a company called the Foreign Office. And the lead director was Frederick Norbeck, I think. So their commission was to design all of the advertisements in the film.
They did a lot of the backgrounding and the signage, and they brought me in for the technology side of it, and also to create a kind of brief world guide. For that I would just draw a timeline: here’s what it’s like now, here’s where this unknown fertility event happens in five, six years’ time, and then the story in the film happens 20 years after that. Then I asked, “Okay, what is it like there? What were the systems like?”
As a result of the fertility event, all major technological advancement stops, so half the job was looking at just roughly where we’re gonna be in a couple of years and predicting how that technology will decay.
That’s why the paper has moving images, but they’ve got black lines and those things. It’s decaying.
In addition to the world book, I did a music player for the Forest House. I did all the office computers at the beginning. The signage for the Tate. And the game Kubris.
The step-through security gate & intuitive design
I liked the signage we did just for the step-through security gate. There’s a level of paranoia in that shot. On the side are four icons, like, “Radiation, weapons, explosives, biohazard.” Tiny, hard even to notice, but they tell of the scope of the problems they’re facing. Or expecting to face.
It gets at a larger issue with a lot of these things. When you and I first spoke [for the book Make It So], I was kind of dismissive about a lot of the background of what we do, and what I do. It’s just like, stuff, I’d said. Make It So made me stop and ask, “What am I doing in my design?” There’s not a lot of time in any of these jobs. You have to work with your intuitive sense of design, with your vision based on your experience. Everything you’ve ever played, everything you’ve ever watched. It all has to go in. You have time to reflect later.
The Kubris Game
There’s a great lack of reflection at the front edge really. With the Kubris game all I got was, “It’s a game in a cube.”
“Okay,” I thought, “It’s space, let’s have him manipulate the space of the cube.” Maybe he’s pulling it, and it’s tumbling. But why is it tumbling? “Okay, let’s have pieces sliding down and if they go too far they’ll slide off the face, so he has to keep all these more and more pieces moving, sliding.” At a certain point you feel, “Oh that could be an interesting little game.” And it would play well in the scene.
It took me two days to go from that idea to having it on screen.
What made that project particularly challenging and unique?
The vast majority of films are just reflections of what we have right now, but Children of Men actually felt like it was trying to step ahead and show how things might really be. The temptation in a lot of technology is to do the shiny thing, and this world is anything but shiny. So how does this technology reflect this real environment? But in this film, the interfaces aren’t the focus of any scene. It’s all there, but it’s just low-key texture.
What’s the worst FUI trope?
I want to say translucent screens, but I see why that’s become a trope. Having them transparent makes them feel like they’re part of the scene, rather than an object on a desk. Plus you get to see the actors’ faces. There’s an interesting connection to your crossover concept here [that is, that sci-fi and the real world mutually influence each other; see the talk about it at the O’Reilly recording here, or the post about transparent screens]. About 2–3 years ago I started to see translucent screens on the market, and I suspect the idea to create them came from sci-fi. The problem is, none of them could do true black, so they never really looked right.
No, a true trope vortex is spinning 3D globes and “flying” to information. I remember the original Ghost in the Shell. When Togusa looks at Section 9 security, he says, “Show me something.” In response, it takes like three seconds for this building to spin just to show him the thing he just asked for. I’m like, “Uh…WHY?” [laughter] And FUI designers just keep going back to it, building on it, making it worse every time. It’s like it’s faster, and faster, and faster, and it just breaks apart.
Going from FUI to real-world design and back again
I was called to do motion graphics and some interface work on…I’m not even gonna say which film it was. But I worked with one of the most brilliant crews you can imagine. And despite all our incredible work, this film just…sucked, really bad. And I recall thinking, “It doesn’t matter who you are and what you do on a movie, you have no control whatsoever over the outcome.”
So I thought I’d shift to work in the real world. Did some stuff in Canada, some really progressive stuff about file management and projects, how we visualize those things and work on them. Then I came to Silicon Valley, doing more work here, only to learn the lie of Silicon Valley: designers believe they’re doing something positive and good. Really, you’re just subsuming whatever vision you have to somebody else’s idea of minimum viable product. Which in itself is fundamentally wrong; they should be minimum valuable products.
There’s also the horrible trade-off between being an in-house designer, and having your ideas ignored by the higher-ups, or being an external consultant, and having very limited quality assurance in the execution of your ideas.
Hilariously, I once worked in-house on a TV project (again, I won’t mention names) and the team had some beautiful ideas. We presented them, and while we were waiting for the response of the higher ups, one of them decided “We need to get some external company to do this.” So they contacted an external firm, and two days later, I get a phone call from that company asking if I’m available to do the work as a subcontractor. It was very surreal. In reflecting on this I realized that I had a lot more influence on technology trends when I was working in the movies.
So now I’m heading back to that world.
What are your favorite Sci-Fi interfaces? Either that you or somebody else has created.
There are a couple of them; one was the commlock from Space: 1999. I loved the simplicity of that idea. It was a small thing, but it had an actual television screen, two inches wide. The characters pick it up off their belts and look into it, so it all looks like they’re doing a kind of video karaoke. The best thing was it was all working display technology. They did some fancy camera work to hide the wires running to the airstream next door with all the equipment that made these little things work. It was Graham Car’s work, and it was phenomenal.
Secondarily, I’d say the lap gun lasers from Aliens. [Seen in the director’s cut, or unedited versions of the movie.] It’s just a laptop with a countdown of remaining ammunition. It was a simple, beautiful way of telling a piece of story. It was so elegantly done, with such attention paid to it. I really, really liked that.
One thing that stood in my mind recently, was Arrival. All the mundane use of technology was really nice. It’s still a background, a way characters are trying to tackle the problem, but it shows how they think. Like on the tablets, you draw or reselect pieces, build a structure from them. Beautifully done.
Then a surprising one is Assassin’s Creed. They changed the interface from the games. Look for the screens in the background, which are beautiful. Really different from what a lot of people have done. Black and white. Very subtle in a lot of ways. There were all those little squares, doing things, very busy. It almost feels like it could’ve suddenly made something. It’s elegantly done.
If you could have any Sci-Fi tech made real, what would it be?
I want The Hitchhiker’s Guide to the Galaxy. I love the idea of having a guide for everything. A snarky guide for everything. It would probably get you into trouble, but at least make life interesting. Google Maps is just too damn good at what it does; you need some variety in life. It’s the idea that an imperfect piece of technology could make your life interesting, or at least fun.
Depending on how you count, there are only 9 interfaces in Children of Men. This makes sense because it’s not one of those Gee-Whiz-Can-You-Believe-the-Future technofests like Forbidden Planet or Minority Report. Children of Men is a social story about the hopelessness of a world without children, so the small number of interfaces—and even the way they are underplayed—is wholly appropriate to the theme. Given such a small number, you would not expect them to be as spectacular as they are. Or maybe you would. I don’t know how you roll.
Sci: A (4 of 4) How believable are the interfaces?
The interfaces are wholly believable, given the diegesis. Technology is focused on security, transportation, and distracting entertainment, which is exactly what you’d expect. Nothing breaks physics or reason.
The only ding is that Quietus could have included some nod to its reimbursement promise, and that’s so minor it only reveals itself as a problem after deep consideration. It doesn’t break the flow of the film.
Fi: A (4 of 4) How well do the interfaces inform the narrative of the story?
All of the interfaces point back in some way to the world that created them and help move the story along. Security is everywhere. Jasper cobbles together technology to help his resistance. Suicide is a government-sanctioned option.
Interfaces: A- (4 of 4) How well do the interfaces equip the characters to achieve their goals?
Luke’s HUD is a little slow, considering that its job is to help avoid collisions.
Jasper’s Home Alarm could do more to help its occupants respond effectively to the alarm.
The Music Player isn’t very readable at a distance.
These are the main three issues that mar an otherwise very well-considered set of interfaces and technologies.
Final Grade A (12 of 12), Blockbuster.
It’s rare that a film’s interfaces get a full blockbuster rating on this site. The only other one at the time of publication is The Fifth Element. And while I take pains to rate the interfaces as distinct from the movie, I’m pleased when such a brilliant (yet, ironically, dark) film includes brilliant interfaces as well.
There are no more interfaces within the film to analyse. But before moving on to the grades, some final (and brief – I promise) discussion about cyberpunk and virtual reality.
As stated at the beginning, Johnny Mnemonic is a cyberpunk film, with the screenplay written by noted cyberpunk author William Gibson, loosely based on one of his short stories of the same name. Why would user interface designers care? Because the cyberpunk authors were the first to write extensively about personal computing technologies, world wide networks, 2D/3D GUIs, and AI. Cyberspace, both the idea and the word itself, comes from cyberpunk fiction. Just as Star Trek inspired NASA engineers and astronauts, the cyberspace depicted by the authors inspired virtual reality programmers and designers. In the first virtual reality wave of the mid to late 1990s, it seemed that everybody working in the field had read Neuromancer.
If you’ve never read any cyberpunk and are now curious, Neuromancer by William Gibson is still the classic work. For a visual interpretation, the most cyberpunk of all films, in style and tone rather than plot, is Blade Runner. Cyberpunk founder Bruce Sterling, who wrote the foreword for “Make It So”, often writes about design; and sometimes cyberpunk author Neal Stephenson has also written interesting and thought provoking non-fiction about computers and user experience. It’s beyond the scope of this post to outline all their ideas for you, but if you are interested start with:
Johnny Mnemonic also includes scenes set in virtual reality, a trend that began with Tron (although that particular film did not use the term). These virtual reality scenes, with their colorful graphics, were most likely included to make computer systems less boring and more comprehensible to a general audience. However, such films, from Tron onwards, have never been commercially successful. (If you work around computer people you’ll hear otherwise from plenty of fans, but computer geeks are not a representative sample.)
In the more recent Iron Man films, Tony Stark in his workshop uses a gestural interface, voice commands, and large volumetric projections. This could easily have been depicted as a VR system, but wasn’t. Could there be a usability problem when virtual reality interfaces are used in film?
The most common reason given for not using VR is that such sequences remind the audience that they’re watching an artificial experience, thus breaking suspension of disbelief. Evidence for this is the one financially successful VR film, The Matrix, which very carefully made its virtual reality identical to the real world. The lesson is that in film, just like most fields, user interfaces should not draw too much attention to themselves.
Sci: B (3 of 4) How believable are the interfaces?
Johnny Mnemonic is a near-future film that takes itself seriously, comparable in intention if not result to Blade Runner. The title also identified it as a cyberpunk film, implying a background setting and technology for the tiny proportion of the audience who’d actually read anything by William Gibson. For those who hadn’t, the film opened with a lengthy crawl, typeset in dense caps/small caps text with red and white color shifting, which probably didn’t help.
The everyday electronics in Johnny Mnemonic include the hotel wall screen and remote, the image grabber, the fax machine. They’re all believable (we’ll pass judgement on those unmarked buttons later!) within the world depicted. The more specialised interfaces such as the motion detector, door bomb, and binoculars likewise fit the design aesthetic and style of technology used.
The airport security scanner without human staff present wasn’t very believable even in the more relaxed era of 1995, but for how it is used rather than what it does. As a scanner and projector it’s fine.
The most important interfaces in the film are the phone system, brain technology, and cyberspace.
Of these the phone system is almost always awesome, with visible cameras and familiar controls. The photorealistic puppet avatar used by Takahashi is a little beyond today’s capabilities, but not greatly so. And it’s nicely foreshadowed by the stylized image filter that Strike uses in the bulletin board conversation. Johnny hacking a phone booth with a swipe card is the one glaring exception to believability, but even here the only effect is that Johnny can talk to someone he otherwise would have trouble reaching. I would have been happier if he’d flipped up a panel to reveal a diagnostic port to hack, but it’s not a major problem.
The cyberspace sequence was awesome in 1995 and holds up well today. The datagloves look dated to someone like me who follows virtual reality technology, but I doubt they bother anyone else. The Johnny Mnemonic cyberspace has a lot of “flashy graphics” but these don’t seem to interfere with getting work done. At the time of writing Swiss Modern minimalism is the preferred style for user interfaces, but more playful and colorful graphics have been used in the past and no doubt will be again in the future.
Lastly we have the brain technology, which starts well. The MemDoubler and Johnny’s uploading kit both look like consumer electronic devices designed for a single function. Spider and the hospital have bigger and clunkier medical gear, but this fits with their need for scavenged and multifunction technology.
Johnny Mnemonic fails when we meet Jones the cyborg dolphin and the neural interface that Johnny uses to “hack his own brain.” Now, I found these believable when I saw the original release, and when I re-watched it on DVD, but that’s because I had read all the books. It’s only when I started writing this report card that I noticed there is absolutely no indication that such interfaces are even possible before this point in the film. Contrast this with Blade Runner, which as well as replicants was careful to show us an artificial owl, a forensic analyst who could identify an artificial snake scale, and a workshop where artificial eyes were designed. If neither the evil megacorporation nor the consumer electronics industry can build a neural interface in the world of Johnny Mnemonic, it’s hard to believe the LoTeks could get their hands on one.
For believability Johnny Mnemonic is mostly awesome, but let down by the neural interfaces. I’m therefore giving it a B.
Fi: D (1 of 4) How well do the interfaces inform the narrative of the story?
The interfaces in Johnny Mnemonic have varied roles within the story.
I’ll start with those that support the story by working as advertised. The video phone system, from the first hotel room call on, has the narrative function of allowing characters to communicate expressively with voice and facial expressions rather than, say, email. The phones work flawlessly without getting in the way.
The early brain technology devices also support the story. The MemDoubler explains what it does and its operation is clear. The data upload kit clearly shows the original data disk and the start, progress, and end of the upload process. The image grabber and fax machine, like the video phones, work without distracting the characters or audience.
The door bomb allows Johnny to escape from heavily armed thugs, using brains and technology rather than brute force. It fits well with his character.
The cyberspace search sequence serves two purposes. It shows Johnny being clever and figuring out where to go next, and it shows the audience that this is really a cyberpunk film with advanced computer technology. The interface performs both functions beautifully. Meanwhile the Pharmakom tracker who is also in cyberspace is performing the equivalent of “tracing the phone call” in a current day action film. His standing interface visually distinguishes him from Johnny.
However, the bulletin board conversation in cyberspace is not so good. Strike doesn’t have any useful information to give Johnny, and then he gets wiped out by a virus attack for no apparent reason as the Yakuza have already located where Johnny and Jane are.
The airport security scanner and the LoTek binoculars have the narrative function of telling us something about the characters being viewed rather than providing information to be acted on. The airport scanner and the first use of the LoTek binoculars remind us that Johnny has an implant which is important to the plot. These help the audience since said implant is otherwise invisible and rarely causes him any difficulty. The second use of the LoTek binoculars is to tell us that Street Preacher is dangerous, which we can already figure out from the trail of bodies he leaves behind him.
The motion detector is the first of the interfaces that support the narrative by not working. If it had given the alarm, the access codes might have been saved and the scientists might have escaped or defended themselves. The scene is structured so that only Johnny gets away because he is in the bathroom, but it could just as easily have played out with the same results if the motion detector had been missing altogether.
The brain scanners at Spider’s place and then the hospital don’t work either. The intent is presumably to emphasise how difficult it is to retrieve “the data” and increase the tension as Johnny’s time runs out. The problem is that both scanners are very obviously cobbled together from ancient junk. Instead of impressing us with how fiendishly difficult it is to crack the encryption, these instead suggest that Johnny would be much better off getting help from someone else.
And lastly, the LoTek bug dropper again functions by being a terrible interface. Nearly killing the lead characters gives Johnny an excuse for an epic rant, and a reason for tension in the subsequent debate between Johnny and the LoTeks over the download. However, again I have to wonder why Johnny didn’t immediately head back into town. These people are meant to be the only hope against the evil corporate overlords? Seriously?
Overall, the interfaces in Johnny Mnemonic are a mixed bag when it comes to the narrative, from awesome to awful. I’m giving it a D.
Interfaces: A (4 of 4) How well do the interfaces equip the characters to achieve their goals?
While the interfaces in Johnny Mnemonic aren’t always good for storytelling, they are mostly good models for real-world design. I’ll go from worst to best for an upbeat ending.
Worst
The undisputed worst interface in the film is the LoTek bug dropper. Don’t do this.
The LoTek brain scanner and decryption hardware is clunky and difficult to use. So difficult in fact that it appears only Jones the cyborg dolphin can operate it successfully. Not at all ideal for a movement devoted to making information free and available to all. But as cryptography is not my field, I’m willing to accept that perhaps there is no better interface. (If the codebreaking division of the NSA is notorious for marine odours leaking into the air conditioning and suspiciously high levels of tuna consumption, please let us know via the comments.)
The motion detector has a simple interface, but the too-quiet audio alarm makes it dangerously ineffective. Easily improved, but only if someone survives and is able to post a review. The watch-triggered bomb is a useful starting point for thinking about controllers for real-world devices.
Numerous electronic gadgets in Johnny Mnemonic have grids of unmarked buttons, which is horrible design for consumer electronics. Fortunately they are only briefly used and not important to the plot. That said, the image grabber used as part of the upload process, with labels added, would be great for writing SciFiInterfaces reviews.
The two different brain scanners used by Spider are difficult to judge, since they’re apparently designed for specialists rather than consumers. But like the MemDoubler and uploader we saw earlier, Spider can quickly perform a diagnostic and interpret the results.
The airport security scanner appears better suited to being used by actual human beings rather than by itself. The scanner is impressive, but suppose it did detect that Johnny was carrying an illegal weapon or device? If Johnny keeps walking, there’s no evidence that it could actually stop him.
The MemDoubler is a neat piece of electronics that does one job easily and efficiently. It’s a bit chatty for something that is possibly illegal and probably meant for covert use, but it was Johnny who decided that a hotel lift was the appropriate place to use it.
The New Darwin hotel room wall screen and remote are not intrusive, simple to use, and don’t require the guest to be fully attentive. Later the Beijing hotel wall screen is equally easy to use, and the bathroom shows off context awareness.
The data uploader, once assembled, has better labels than the consumer electronics. The controls are simple and allow a novice user to carry out the upload and access code generation without problems.
The LoTek binoculars are an excellent design for a group that needs to keep an eye on who is wandering around the neighbourhood.
The various video phones, from wall screen to portable, all Just Work. The various characters use them so effortlessly that it’s easy to overlook that this is in fact awesome.
Best
And finally the cyberspace interface was and remains my favourite, and an excellent model for any real world designers. (OK, the second layer of security that requires reshaping a pyramid could use a little work, but even that is not a bad interface.)
There are enough good designs here to outweigh the few disasters, so my rating is A.
Final Grade: B- (8 of 12)
Related lessons from the book
Zoomrects in the LoTek binoculars and the continuous perspective streaming of imagery during the data upload are both examples of using motion to create meaning (page 64)
Bright colors are used during the data upload and download, even for the presentation of scientific research and data, because Sci-Fi glows (page 40)
The data uploader gives off a regular chirping sound in addition to a numeric counter, conveying ambient system state with ambient sound (page 112)
The LoTek binoculars, and to a lesser extent the Yakuza binoculars too, place a visual signal in the user’s path (page 210) by directly overlaying text onto the image rather than having a separate display
Although the film does not use this excessively, the airport scanner and the brain scanner screens are mostly blue (page 42)
Surprisingly, the phone system mostly relies on numbers rather than names, even though the goal is to contact a person, not use an interface (page 207). Only Takahashi contacts someone by name
Takahashi is an example in the book of an interface that can handle emotional inputs (page 214), and we can add Johnny’s threatening gesture and Strike’s “retreat” during the bulletin board conversation
Takahashi is also an example of letting users alter their appearance (page 221), and we can add Jones the dolphin with a custom avatar in cyberspace
In the cyberspace of Johnny Mnemonic various areas and individual buildings have their own visual style, because the visual design is a fundamental part of the interface (page 31) and creative combinations of even common stylistic choices create a unique appearance (page 73)
Navigation within the three-dimensional cyberspace simulates physically flying to make use of users’ spatial memory (page 62), but allows “teleporting” directly to a desired location because being useful is more important than looking impressive (page 264)
The cyberspace interface for the hotel and copyshop both use gesture for simple, physical manipulations and use language for abstractions (page 104)
New lesson
Not everything in virtual reality needs to be three-dimensional
The cyberspace sequence shows windows, usually in full screen mode, with two dimensional spreadsheet interfaces for tabular data. There’s no need to represent these in 3D. This rule is a combination of build on what users already know (page 19) and don’t get caught up in the new for its own sake (page 25).
The transition from Beijing to the Newark copyshop is more involved. After Johnny travels around a bit, he realizes he needs to be looking back in Newark. He “rewinds” using a pull gesture and sees the copyshop’s pyramid. First there is a predominantly blue window that unfolds as if it were paper.
And then the copyshop initial window expands. Like the Beijing hotel, this is a floor plan view, but unlike the hotel it stays two dimensional. It appears that cyberspace works like the current world wide web, with individual servers for each location that can choose what appearance to present to visitors.
Johnny again selects data records, but not with a voice command. The first transition is a window that not only expands but spins as it does so, and makes a strange jump at the end from the centre to the upper left.
Once again Johnny uses the two-handed expansion gesture to see the table view of the records.
While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.
Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
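The arithmetic is easy to sketch. The distances below are rough figures (closest approach is about 54.6 million km; a farthest unblocked separation of roughly 378 million km gives the 21-minute delay), so treat this as an illustration, not ephemeris-grade math:

```python
# Approximate one-way light delay between Earth and Mars.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def one_way_delay_minutes(distance_km: float) -> float:
    """Minutes for a light-speed signal to travel the given distance."""
    return distance_km / C_KM_PER_S / 60.0

closest_min = one_way_delay_minutes(54.6e6)   # about 3 minutes
farthest_min = one_way_delay_minutes(378e6)   # about 21 minutes
round_trip_min = 2 * farthest_min             # about 42 minutes, worst case
```

That worst-case round trip is the window any stalling tactic has to cover.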
But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.
Let’s first acknowledge that we solved long-distance communications a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, recorded video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it feel like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.
Fooling Tulsa
Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally-replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer, and have conversations with the personality housed there that would closely approximate how that person would (or would have) respond in real life.
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.
Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See MessinaBot, https://bottr.me/, https://sensay.it/, the forthcoming http://bot.me/) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.
Training the bot
So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to look at her public social media and build a herBot he could chat with to train it for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)
Launching the bot
GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.
Buying time
If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime:
Ask her to answer the same question first, probing into details to understand rationale and buy more time
Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
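The four tactics above amount to a small decision policy. Here is a minimal sketch, assuming the bot keeps a per-topic confidence score and knows the worst-case round-trip delay; all of the names and thresholds are invented for illustration, not anything from the film:

```python
# Hypothetical GardnerBot stalling policy: when a topic's trained
# confidence is low, pick a tactic that buys at least one Mars-Earth
# round trip of conversation time.
from dataclasses import dataclass
import random

ROUND_TRIP_S = 2 * 21 * 60  # worst-case round-trip signal delay, seconds

@dataclass
class Topic:
    name: str
    confidence: float  # 0.0 (untrained) .. 1.0 (well trained)

def choose_tactic(topic: Topic, story_length_s: int) -> str:
    """Pick a delay tactic for a question the bot can't yet answer."""
    if topic.confidence >= 0.8:
        return "answer"          # well trained: no stalling needed
    if story_length_s >= ROUND_TRIP_S:
        return "story_delay"     # a story long enough to cover the lag
    # otherwise buy time conversationally
    return random.choice(["you_first", "related_subtopic", "new_topic"])
```

The point of the sketch is that stalling isn’t random evasion; it is scheduled around the physics of the link.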
Example
TULSA
OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?
GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…
(you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
(related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
(new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as. And you have to say why.”
(story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
Lagged-realtime training
Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing the meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, it would provide a chat window for him to tell the GardnerBot what it should do or say.
To the stalling GARDNERBOT…
GARDNER
For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
As he types, the message travels back to Earth where GardnerBot begins to incorporate his answers to the chat…
At a natural break in the conversation…
GARDNERBOT
OK. I think I finally have an answer to your earlier question. How about…India?
TULSA
India?
GARDNERBOT
Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?
Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano-de-Bergerac software—where it makes him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.
That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.
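The watch-and-train loop Gardner runs could be sketched like this, assuming each line of the delayed transcript arrives tagged with the bot’s conversational metadata (the event fields here are invented for illustration):

```python
# Gardner's side of the lagged-realtime training loop: scan the
# space-delayed transcript's metadata track and surface a training
# prompt whenever the bot reports that it deflected a topic.
def training_prompts(events):
    for event in events:
        if event.get("action") == "deflect":
            yield {"topic": event["topic"], "needs_training": True}

transcript = [
    {"action": "reply", "topic": "school"},
    {"action": "deflect", "topic": "where_on_earth_would_you_live"},
]
prompts = list(training_prompts(transcript))
# One prompt, for the deflected topic; Gardner's answer then travels
# back to Earth to be folded into GardnerBot before the next lull.
```

Each prompt is Gardner’s cue to type an answer, which rides the 21-minute signal back to Earth and gets incorporated at the next natural break in the chat.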
Gotta go
If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low confidence or high stakes topics, it could use a believable, pre-rolled excuse to end the conversation.
GARDNERBOT
Oh crap. Will you be online later? I’ve got chores I have to do.
Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.
In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interest, but a bot-managed delivery of these things.
So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?
From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.
How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.
An honest version: bot envoy
So that solves the logic from the movie’s perspective, but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real-world astronauts use it?
Would it be too fake?
I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So it would be with the themBot.
GARDNER
Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
TULSABOT
I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
GARDNER
I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?
Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.
TULSA
GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.
Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.
Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.
As mentioned, Johnny in the last phone conversation in the van is not talking to the person he thinks he is. The film reveals Takahashi at his desk, using his hand as if he were a sock puppeteer—but there is no puppet. His desk is emitting a grid of green light to track the movement of his hand and arm.
The Make It So chapter on gestural interfaces suggests Takahashi is using his hand to control the mouth movements of the avatar. I’d clarify this a bit. Lip synching by human animators is difficult even when not done in real time, and while it might be possible to control the upper lip with four fingers, one thumb is not enough to provide realistic motion of the lower lip.
OMG y’all. We totally got asked on a date and we should totally go.
So I happen to be in NYC for the Interaction17 conference this week, and agreed with the guys from the Decipher SciFi podcast that we should hang out. So it’s late notice, but we have a plan: Join us at 7:25 P.M. to watch The Space Between Us, and then hang out and chat about it afterward? There may even be podcast recording and interface redesigning; it’s hard to say. Providing you’re not into The Big Game.