Earlier this year I made two presentations called Gorgeous+Catastrophic, in which I show six sci-fi interfaces that are both beautiful to behold and that would be disastrous if implemented in the real world, all to illustrate why we should keep interfaces in sci-fi at arm’s length and evaluate them with a critical eye. It’s a fun talk to give. You should totally ask me to come present it at your local conference.
But in one of the talks, when I introduced the first of the six examples—the medpod interface from Prometheus, starting around 06:35 in the video—I misattributed the whole design to Territory Studio. This was oversimplifying the team on a couple of levels, so let me make the formal correction and apology here.
Territory Studio did work on Prometheus, and even did work on the medpod: They did the VFX interfaces shown around 07:50, and were joined by teams from Fuel VFX and Compuhire. They did not do the on-set touchscreen that sits on the side of the medpod, which Noomi Rapace/Elizabeth Shaw touches directly starting around 07:28. That was designed by Shaun Yue, working as an individual contractor. An additional complication is that George Simons, who was graphics supervisor on the film, is now with Territory, but was not then. And then there are the credits, which list only names, not companies, and not full teams.
Mea culpa. I should have known better, since I even have an interview with Shaun Yue on this blog about that movie. It’s a small competitive field, and proper credit is hard to get. It’s functionally advertising, so this mistake isn’t minor. My apologies to Shaun, George Simons, Rheea Aranha, John Hill, Paul Roberts, Daniel Burke, Mark Jordan, Eliot Eveson, and Adam Stevenson. If I give the same talk again I won’t make the same mistake.
The trickiness of attribution
It doesn’t excuse the mistake (and I’m glad I have this forum to right the wrong), but I will note how very difficult it is to get the attribution of sci-fi interfaces right.
Who is the designer?
A sci-fi interface is a pie with a lot of fingers in it. If it’s central to the plot, then the writer will have described what it does in the script. The director will have had (in this case) his own opinions well before shooting, and actors may have input after reading the script and during shooting. If it’s not central to the plot, it may have been handed to someone elsewhere in the hierarchy. Then there are art directors, production designers, and editors, all directly touching the end result of what we see on screen.
With interfaces becoming more and more a part of sci-fi moviemaking, the teams are getting larger and more specialized. There may be one team whose responsibility is the on-set interfaces that the actors see and touch. Another team might be handling the post-production interfaces that are built after principal shooting. One individual might do the graphics as static elements and another the motion design. Final assets produced by designers may be cut up and remixed by editors (without consulting the original designers) to meet the narrative needs of the flow of the story, so what winds up on screen may not be what was originally designed.
Given all of these people, where is the line of who is and isn’t the designer? Or even the design team? Is it everyone? Is it just “the designer?” Who is that in this complicated case? Who gets the credit?
Midway through the reviews of the Prometheus interfaces, I was delighted to receive an email from the lead designer for the on-set graphics on the movie, Shaun Yue. Since I must evaluate television shows and films as an outsider, it was great to have Shaun’s insider perspective on how and why things get done the way they do. What follows is an email interview conducted with Shaun about his work on the film. Shaun was also kind enough to share some larger images of screens in development, which are included throughout.
What was your role with the Prometheus sci-fi interfaces?
Shaun: I led the visual design of the on-set physical interface graphics. Based at Pinewood Studios for principal photography of the Prometheus ship interiors, I developed the design templates for the set graphics and helped oversee the design team of five who were based remotely around London.
The overall on-set graphics supervisor was George Simons, who managed the logistics, determined the deliverables based on the script, specified hardware requirements, and was the key liaison between the production departments. We were both working for set decorator Sonja Klaus, who, along with production designer Arthur Max, is a long-time collaborator with Ridley Scott.
Could you describe the creative process with Ridley Scott?
Sonja and Ridley were quite keen that we incorporate novel colours and shapes into the screen design. Sonja’s reference points weren’t computer interfaces, but rather broader visual references such as luminescent underwater wildlife and astronomical photography. They were keen that the visual language be so futuristic that the technology appeared almost foreign and unrecognisable to contemporary viewers.
It was quite a challenging brief, as basing sci-fi graphics on reality is a powerful method for making designs more believable to the audience. Regardless of how far in the future we speculate, usability and functionality are key, especially when the script requires the audience to read the design immediately for storytelling. As a fan of the original Alien’s robust, utilitarian screen design, I thought it would be a shame to completely disregard it.
However, in meetings with Ridley, he always made reference to visual artists, such as the constructivist works of Rodchenko, rather than objectively predicting the future. I think the key to responding to this challenge was to embrace that Ridley has an intuitive and artistic visual approach to filmmaking. Essentially he saw screen design as an extension of this sensibility.
For our design team, the process was all about trying to loosen up the design rules, not being too rigid with grids, and especially playing around with negative space. We layered shades of transparent gradient windows on top of each other and really just approached the design in an impressionistic way. We saw the screens as the equivalent of moving artworks; I self-rationalised it almost as an AI reconfiguring the design bespokely to its context!
To try and keep things sympathetic to the design of the ship, which was robustly industrial and structured, we overlaid some more defined graphic elements to hold the design together and make it a little more functional: the single-line “holding” bracket, header and tab structure, recognisable data elements, and button iconography.
In the end the design process on a film is largely about facilitating a collaboration between the various production creatives to reach a goal that satisfies the director.
How are decisions made over the course of production? How did you collaborate with other departments?
Working concurrently as sets were being designed and built meant we had to be flexible in responding to the changing iterations leading up to shooting.
A prime example was the bridge. Ridley envisaged a lot of holograms throughout the set, but the CGI proved cost prohibitive. We did camera tests with the DOP, Dariusz Wolski, to project onto perspex panels. The images were a bit soft, but the advantage was the realistic light spill and live images cast onto the actors and set (a bit reminiscent of the opening scene in Alien). In the end it fit with Ridley’s style to shoot as much for real on set.
The art department had to design full-size mock-ups of the pilot consoles to house projectors and a mirror to bounce the projection back onto the perspex. Also, a foam-core mock-up console fitted with functioning displays helped present the animated designs to Ridley in context.
Also on the bridge, Ridley wanted a visual representation of the descent to LV-223 depicted on the screens. He could describe and sketch in great detail the Prometheus’ trajectory to the surface and its surrounding terrain. The visual effects department had explored some options with pre-viz of the two merged locations being used for the planet exteriors (Wadi Rum in Jordan and Iceland). We used their merged geo-data to define the terrain and map out a descent visualization from the live perspective of the Prometheus. It resulted in a 4-minute-long animation from atmosphere to the surface. It was vastly more than required for the final film, but preparing material to be shot on set required a lot of extra redundancy for shooting coverage, and it also gave the actors something to respond to.
A vital part of on-set screen design is collaborating with the playback technicians to produce animations which are technically feasible to playback and control on-set for shooting. Sonja was quite keen on touch screen interactivity so we worked with Mark Jordan’s team at Compuhire to create interactive door and control panels which the actors could press and have reactive animation. This was most prominent in the medi-pod cesarean sequence, which had several interactive stages determined by the script. All the buttons were highlightable and controllable, but the activation was quite simple so that Noomi Rapace did not have to memorise complicated controls or gestures when delivering her performance.
What is your background? Are you a designer or an SFX artist by training?
I studied Multimedia Design at Swinburne University in Melbourne, Australia. It was a mix of digital media, web, animation, film and graphic design. I briefly worked as a web designer before moving to animated and live-action commercials, and then was a lead designer at the Australian Centre for the Moving Image (ACMI).
In 2006 I moved to London and have been lucky enough to work on The Dark Knight, Call of Duty: Modern Warfare 3, Crysis 2 in addition to numerous commercials and music projects.
What were the biggest challenges working on the film?
Other than balancing interface functionality with Ridley’s aesthetic sensibilities described above, the biggest challenge would have been achieving the amount of work within a really tight schedule. We went from a blank slate to shooting in 12 weeks, eventually completing around 250 screen designs.
The other major challenge was responding to requests for design changes or even completely new designs during shooting. Some of the screens were designed and animated the same day they were shot!
What interfaces are you most proud of in the film? (And, of course, why?)
The entire bridge was really satisfying as it was a massive set with so many screens, over a hundred designs working together. It was a great testament to the efforts of the design team so props must go out to our supervisor George Simons and the rest of the design team David Sheldon-Hicks, John Hill, Paul Roberts and Rheea Aranha. Also thanks must go to Sonja Klaus and Karen Wakefield for their guidance and integrating us into the set-decorating department.
I’m personally quite fond of the medi-pod activation screen as it encapsulated all of the design challenges of an on-set graphic: it was detailed enough to be filmed close, it responded very specifically to the script narrative, and it was programmed to be interactive for the actor to perform with.
The last thing which was quite fun was trying to squeeze in references to Alien. From the nondescript numerals measuring chemicals on the spacesuits, little references to Muthur, to the warning cross motif when Prometheus sets itself for collision, it was our way of trying to pay respect.
What’s your favorite sci-fi interface outside of Prometheus?
Kubrick’s 2001 for its consideration and relentless practical execution, and anything Dan O’Bannon designed for its narrative clarity and ingenuity.
What’s next for you?
I wasn’t sure what could compare to working on Ridley Scott’s first sci-fi for almost 30 years, but I was lucky enough to spend most of last year working on Sam Mendes’ Skyfall. To be part of the 50th year of Bond and revisiting Q for the modern age through computer interfaces was pretty amazing.
However, I’m interested in exploring some more speculative design ideas beyond the narrative and practical constraints of feature film production, so we’ll see what the future holds.
Images Copyright 20th Century Fox
Production Credits for the images above:
Directed by Ridley Scott
Production Designer: Arthur Max
Set Decorator: Sonja Klaus
Senior Art Director: Karen Wakefield
Screen Interface Designer: George Simons
Screen Graphics Designers: Shaun Yue, David Sheldon-Hicks, John Hill, Paul Roberts, Rheea Aranha
On-Set Playback: Compuhire
Technicians: Mark Jordan, Adam Stevenson, Eliot Eveson
Even cutting it a bit of slack for these massive challenges, it was quite a letdown for its ofttimes inexplicable plot, wan characters, science-iness, and for getting so caught up in its own grandiose themes that it forgot about being a movie. But here at scifiinterfaces.com, reviews must be of interfaces, and to that end I’ll bypass most of these script objections to focus on the tech.
Sci: D+ (1 of 4)
How believable are the interfaces given the science of the day?
I’ll go out on a prediction limb and say that 50 years in the future is, given Moore’s Law, enough time to account for much of the human technology we see in the film. Artificial intelligence and genetics are hot areas of research and might even get to David levels of cyborg in five decades. There are some physics questions around free-floating volumetric projections, but that’s enough of a sci-fi trope to get grandfathered along.
The alien interfaces are of course meant to be vastly superior to our own, and so get a special pass. But even still, the glowing pollen displays are conceivable and are used consistently. You can imagine the touch walls and energy-arc interfaces. The in-your-face alien flight controls have some ergonomic sense to them.
But these are interrupted by frequent speed bumps of design. Access panels across Prometheus shift position, layout, and security requirements at almost every door. 3D maps can be transmitted through a mountain to the ship but not to the nearby people who can use it most. A science ship has a single button that throws it into ramming mode, replete with an audio countdown. These dissolve credulity.
Fi: B (3 of 4)
How well do the interfaces inform the narrative of the story?
Of our categories, this is where Prometheus’ interfaces shine the most. For example, the choice of materials for the alien interfaces is not only beautiful, but offers a great deal of affordance for users and audiences alike. And of course the visual design of the interfaces is luscious. As a whole they are unique, engaging, and at times a spectacular pageant for the eyes.
The interaction design functions admirably for the narrative as well. The ship keeps its steward uninformed in order to tell the audience what’s happening dramatically. The audio syringe reinforces the body horror of assaultive medicine. The escape pod’s crimes against usability make sense to build tension around Vickers’ escape. The stupid, stupid MedPod fulfills its role of building Snakes on a Plane claustrophobia. (Perhaps this is a clue to the reason the film fails in terms of our other categories: It treats its technology solely as narrative tools.)
If they didn’t shirk believability so badly, the interfaces would get full marks for narrative.
Interfaces: D- (1 of 4)
How well do the interfaces equip the characters to achieve their goals?
I want to call attention to the film’s brilliant interfaces first. The alien astrometrics sit perfectly between passive and active sensemaking modes. The decontamination gesture is simple and memorable. The visual design of the on-ship interfaces is exquisite in its look and feel. The language learning interface combines the best of human- and computer-based teaching techniques. Each of these embodies some forward-looking technological ideas with solid interaction design.
The movie’s transgressions against basic interaction design principles drag its brilliant moments way, way down. Take great care when looking at the film’s interfaces for lessons for your own real world design.
Final Grade C- (5 of 12), MATINEE
Related lessons from the book
The HYPSP>S020 interface might have instead augmented the periphery of vision, as described in Chapter 8, Augmented Reality.
The volumetric maps conform to the wireframe Pepper’s Ghost style, as described in Chapter 4.
The Flight Controls remind us of the importance of grouping controls, as described in Chapter 2.
The MedPod forgets a number of the lessons (show waveforms, be useful) in Chapter 12, which is all about medical technology.
David reminds us why Anthropomorphism (Chapter 9) is comfortable. When asked why he needs to wear a helmet, he replies, “I was designed like this because you people are more comfortable interacting with your own kind. If I didn’t wear the suit, it would defeat the purpose.”
When Vickers realizes Janek is really mutinying and going to ram the alien ship with the Prometheus, she has only 40 seconds to flee to an escape pod. She races to the escape room and slams her hand on a waist-high box mounted on the wall next to one of the pods. The clear, protective door over the pod lifts. She hurriedly dons her environment suit and throws herself into one of the coffin-shaped alcoves. She reaches to her right and on a pad we can’t see, she presses five buttons in sequence, shouting, “Come on!” The pod is sealed and shot away from the ship to land on the planet below.
The transparent cover gives a clear view into the pod. That’s great, since if there were multiple people trying to escape, it would be easier to target an empty one. The shape inside is unmistakable. That’s great because at a glance even an untrained passenger could figure out what this is. The bright orange stripes are appropriately intense and attention-getting as well.
Viewers might have questions about the placement of the back-lit button panels inside the pod, seeing as how they’re in a very awkward place for Vickers to see and operate. I presume she has some other interface facing her, and the panels we see in the scene are for operating the pod when it is resting on the planet’s surface with its lid open. From that position, these buttons make more sense.
I think that’s where the greatness ends. The main consideration for an escape pod is that it is used in dire emergencies. Fractions of a second might mean the difference between safety and disintegration, and so though the cinematic tension in the scene is built up by these designed-in delays, an ideal system shouldn’t work the same way. How could it be improved? There are three delays, and each of them could be improved or removed.
Delay 1: Opening a pod
Why should she have to open the pod with a button or handprint reader or whatever that thing is? The pods should be open at all times.
If pods had to be sealed for some biological or mechanical reason, then a pod should open up for her immediately when she enters the room. Simple motion detectors are all that is needed.
If she has to authorize for some dystopian, only-certain-people-can-be-saved corporate reason, then a voice print could work, allowing her to shout in the hallway as she runs for the pod.
Some passive recognition would be even better since it wouldn’t cost her even the time of shouting: Face-recognition or fast retinal scan through cameras mounted in the room or the pod. Run a quick laser line across her face and she’s authorized.
Delay 2: Suiting Up
Can she put on the environment suit in the pod? Yes, the pod is cramped, but that’s the biggest delay she experiences. Increase the size of the pod slightly to allow for that kind of maneuvering, and then she can just grab the suit as she’s running by and put it on in the pod’s relative safety.
Even cooler would be if the pod was the environment suit: All she would have to do is throw herself in the pod and activate it, and the minute she landed on LV-223, the capsule would transform, Autobot-style, into an exosuit, giving her more protection and more enhancement for survival on an alien planet. Plus, you can imagine the awesomeness of letting Vickers fight the zombified Weyland Ripley-style.
Delay 3: Activation
This is one delay that I’m pretty sure can’t be either automatic or passive. The cost of making a mistake is too dire. Accidentally pressed a button? Sorry, you’re now being shot away from the mother ship faster than it can travel. Still, why does she have to hit a series of buttons here? Is it a (shudder) password, as a corporate cost-control measure? Yes, that spells dystopia, but it should be faster and more intuitive, and we’ve already authorized her, above.
Fortunately this doesn’t need much rethinking. Since we’ve already seen a great interaction used for an emergency procedure, i.e. the 5-finger touch-twist used for emergency decontamination on the MedPod, I’d suggest using that. Crew would only have to be trained once.
The total interaction, then, should be that Vickers:
Runs to the escape room
Is identified passively (and notified of it by voice)
Grabs a suit (this is optional if you go with the awesome exosuit idea)
Throws herself into an open pod
Performs a simple gesture on a touch pad
Is shot away from the soon-to-explode ship
Bam! You have saved massive amounts of time, removed a crewmember from immediate danger, and set up an awesome ending where Vickers in her exosuit can just punch the falling alien spaceship out of the way rather than running from it like a moron.
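To make the comparison concrete, here’s a minimal sketch in Python of the redesigned flow. The step names and tags are entirely my own invention for illustration, not anything from the film or a real spec; the point is simply that only one step in the whole flow is a trained interface input.

```python
# Hypothetical sketch: the proposed escape flow as ordered steps, each
# tagged with who performs it. Step names are mine, for illustration only.
REDESIGNED_FLOW = [
    ("run to escape room",                          "crewmember"),
    ("identify passively, confirm by voice",        "system"),
    ("grab environment suit (optional w/ exosuit)", "crewmember"),
    ("dive into already-open pod",                  "crewmember"),
    ("perform emergency gesture on touch pad",      "crewmember"),
    ("seal pod and launch",                         "system"),
]

# The only trained interface input left in the flow is the single gesture;
# everything else is either automatic or a gross physical action.
interface_inputs = [step for step, actor in REDESIGNED_FLOW
                    if actor == "crewmember" and "gesture" in step]
print(interface_inputs)  # ['perform emergency gesture on touch pad']
```

Compare that to the film’s version, where opening, suiting up, and a five-button activation sequence all demand Vickers’ deliberate attention under a 40-second deadline.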
On the side of the valley in which the first complex is found, there is a giant skull carved into the overlooking crag. It’s easy—given the other transgressions in the film—to dismiss this as a spookhouse attempt at being scary. But what if (stay with me here) it’s a warning sign, an alien Mr. Yuk, put there for other sentient humanoids to understand that this place is deadly with a capital D? This explains why the outpost hasn’t been disturbed by rescuers of their own race. They were smart enough to see the warning and turn right back around. (Why they didn’t nuke it from orbit is another question.)
Seeing this as a warning label raises other questions. Why wouldn’t a warning be technological or linguistic, like most of the interfaces inside the complex? The black infection material is still deadly after 2,000 years, and who knows how much longer it will be viable? So where the interfaces inside are for immediate use, the warning outside needs to be effective for millennia, outlasting both the power reserves that would drive technology and the persistent semantics that would cement linguistic understanding. Rock, in contrast, lasts a very, very long time. Even as it erodes, the shape and its clear meaning will simply lose clarity, not wink out altogether.
Similarly, this shape is a clear symbol of death that is tied to biology, which changes on evolutionary timeframes, guaranteeing its readability for—hopefully—longer than the xenomorph liquid would be a danger.
For these reasons, this labeling is more than a Castle Grayskull set-dressing attempt at scaaaarrrry; it’s a reasonable choice for providing an effective warning that will last as long as the danger. You know, provided visiting scientists actually pay attention to such things.
The reawakened alien places his hand in the green display and holds it there for a few seconds. This summons a massive pilot seat. If the small green sphere is meant to be a map to the large cyan astrometric sphere, the mapping is questionable. Better perhaps would be to touch where the seat would appear and lift upwards through the sphere.
He climbs into the seat and presses some of the “egg buttons” arrayed on the armrests and on an oval panel above his head. The buttons illuminate in response, blinking individually from within. The blink pattern for each is regular, so it’s difficult to understand what information this visual noise conveys. A few more egg presses re-illuminate the cyan astrometric display.
A few more presses on the overhead panel revs up the spaceship’s engines and seals him in an organic spacesuit. The overhead panel slowly advances towards his face. The purpose for this seems inexplicable. If it was meant to hold the alien in place, why would it do so with controls? Even if they’re just navigation controls that no longer matter since he is on autopilot, he wouldn’t be able to take back sudden navigation control in a crisis. If the armrest panels also let him navigate, why are the controls split between the two parts?
On automatic at this point, the VP traces a thin green arc from the chair to the VP Earth and adds highlight graphics around it. Then the ceiling opens and the spaceship lifts up into the air.
The alien stasis chambers have recessed, backlit touch controls. The shape of each looks like a letterform. (Perhaps in the Proto-Indo-European language that David was studying at the start of the film?) David is able to run his fingers along and tap these character shapes in particular sequences to awaken the alien sleeping within.
The writing/controls take up quite a bit of room, on both the left and right sides of the chamber near the occupant’s head. It might seem a strange decision to have controls placed this way, since a single user might have to walk around the chamber to perform tasks. But a comparison of the left and right side shows that the controls are identical, and so are actually purposefully redundant. This way it doesn’t matter which side of the chamber a caretaker is on; he can still operate the controls. Two caretakers might have challenges “walking over” each other’s commands, especially with the missing feedback (see below).
Having the writing/controls spread over such a large area does seem error prone. In fact in the image above, you can see that David’s left hand is resting with two fingers “accidentally” in the controls. (His other hand was doing the button pressing.) Of course this could be written off as “the technology is not made for us, it’s made for an alien race,” but the movie insists these aliens and humans share matching DNA, so apart from being larger in stature, they’re not all that different.
Two things seem missing in the interface. The first is simple feedback. When David touches the buttons, they do not provide any signal that his touch has been received. If he didn’t apply enough pressure to register his touch, he wouldn’t have any feedback to know that until an error occurred. The touch walls had this feedback, so it seems oddly missing here.
The second thing missing is some status indicator for the occupant. Unless that information is available on wearable displays, having it hidden forces a caretaker to go elsewhere for the information or rely solely on observation, which seems far beneath the technological capabilities seen so far in the complex. See the Monitoring section in Chapter 12 of Make it So for other examples of medical monitoring.
When David is exploring the ancient alien navigation interfaces, he surveys a panel, and presses three buttons whose bulbous tops have the appearance of soft-boiled eggs. As he presses them in order, electronic clucks echo in the cavern. After a beat, one of the eggs flickers, and glows from an internal light. He presses this one, and a seat glides out for a user to sit in. He does so, and a glowing pollen volumetric projection of several aliens appears. The one before David takes a seat in the chair, which repositions itself in the semicircular indentation of the large circular table.
The material selection of the egg buttons could not be a better example of affordance. The part that’s meant to be touched looks soft and pliable, smooth and cool to the touch. The part that’s not meant to be touched looks rough, like immovable stone. At a glance, it’s clear what is interactive and what isn’t. Among the egg buttons there are some variations in orientation, size, and even surface texture. It is the bumpy-surfaced one that draws David’s attention to touch first that ultimately activates the seat.
The VP alien picks up and blows a few notes on a simple flute, which brings that seat’s interface fully to life. The eggs glow green and emit green glowing plasma arcs between certain of them. David is able to place his hand in the path of one of the arcs and change its shape as the plasma steers around him, but it does not appear to affect the display. The arcs themselves appear to be a status display, but not a control.
After the alien manipulates these controls for a bit, a massive, cyan volumetric projection appears and fills the chamber. It depicts a fluid node network mapped to the outside of a sphere. Other node network clouds appear floating everywhere in the room along with objects that look like old Bohr models of atoms, but with galaxies at their center. Within the sphere three-dimensional astronomical charts appear. Additionally huge rings appear and surround the main sphere, rotating slowly. After a few inputs from the VP alien at the interface, the whole display reconfigures, putting one of the small orbiting Bohr models at the center, illuminating emerald green lines that point to it and a faint sphere of emerald green lines that surround it. The total effect of this display is beautiful and spectacular, even for David, who is an unfeeling replicant cyborg.
At the center of the display, David observes that the green-highlighted sphere is the planet Earth. He reaches out towards it, and it falls to his hand. When it is within reach, he plucks it from its orbit, at which point the green highlights disappear with an electronic glitch sound. He marvels at it for a bit, turning it in his hands, looking at Africa. Then after he opens his hands, the VP Earth gently returns to its rightful position in the display, where it is once again highlighted with emerald, volumetric graphics.
Finally, in a blinding flash, the display suddenly quits, leaving David back in the darkness of the abandoned room, with the exception of the small Earth display, which is floating over a small pyramid-shaped protrusion before flickering away.
After the Earth fades, David notices the stasis chambers around the outside of the room. He realizes that what he has just seen (and interacted with) is a memory from one of the aliens still present.
Hilarious and insightful YouTube poster CinemaSins asks in the video “Everything Wrong with Prometheus in 4 Minutes or Less,” “How the f*ck is he holding the memory of a hologram?” Fair question, but not unanswerable. The critique only stands if you presume that the display must be passive and must play uninterrupted like a television show or movie. But it certainly doesn’t have to be that way.
Imagine if this is less like a YouTube video and more like playback through a game engine: a holodeck StarCraft. Then it’s entirely possible to pause the action in the middle of playback and investigate parts of the display, before pressing play again and letting it resume its course. But that playback is a live system: it would be possible to run it afresh from the paused point with changed parameters as well. This sort of interrupt-and-play model would be a fantastic learning tool for sensemaking of 4D information. Want to pause playback of the signing of the Magna Carta and pick up the document to read it? That’s a learning moment, and one that a system should take advantage of. I’d be surprised if, once such a display were possible, it wasn’t the norm.
The only thing I see that’s missing in the scene is a clear signal about the different state of the playback:
As it happened
Paused for investigation
Playing with new parameters (if it was actually available)
David moves from 1 to 2, but the only change of state is the appearance and disappearance of the green highlight VP graphics around the Earth. This is a signal that could easily be missed, and it wasn’t present at the start of the display. Better would be some global change, like a shift in the display’s overall color, to indicate the different state. A separate signal might compare As it Happened with the results of Playing with new parameters, but that’s a speculative requirement of a speculative technology. Best to put that down for now and return to what this interface is: one of the richest, loveliest, and most promising examples of sensemaking interactions seen on screen. (See what I did there?)
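As a quick sketch of the fix, the three playback states could drive one unmissable global property of the display, say an overall tint. This is my own illustration in Python; the state names and colors are assumptions, not anything from the film.

```python
from enum import Enum, auto

class PlaybackState(Enum):
    AS_IT_HAPPENED = auto()   # 1. the recorded memory, playing untouched
    PAUSED = auto()           # 2. frozen so a viewer can investigate
    NEW_PARAMETERS = auto()   # 3. resumed with viewer-altered inputs

# Hypothetical: map every state to a global tint so a state change is a
# whole-room signal, not a small highlight that is easy to miss.
STATE_TINT = {
    PlaybackState.AS_IT_HAPPENED: "cyan",
    PlaybackState.PAUSED:         "amber",
    PlaybackState.NEW_PARAMETERS: "emerald",
}

def tint_for(state: PlaybackState) -> str:
    """Return the room-wide tint signaling the current playback state."""
    return STATE_TINT[state]

print(tint_for(PlaybackState.PAUSED))  # amber
```

When David plucks the Earth from its orbit, the whole room would shift tint, and no viewer (or android) could miss which mode the memory is in.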
For more about how VP might be more than a passive playback, see the lesson in Chapter 4 of Make It So, page 84, VP Systems Should Interpret, Not Just Report.
When exploring the complex, David espies a few cuneiform-like characters high up on a stone wall. He is able to climb a ladder, decipher the language quickly, ascertain that it is an interface rather than an inscription, and figure out how to surreptitiously operate it. To do so, he puts his finger at the top of one of the grooves and drags downward. The groove illuminates briefly in response, and then fades. He does this to another groove, then presses a dot, and presses another dot not near the first one at all. Finally he presses a horizontal triangle firmly, which after a beat plays a 1:1 scale glowing-pollen volumetric projection.
The material and feedback of this interaction are lovely. The grooves provide a nice, tactile, physical affordance for the gesture. A groove is for dragging. A dot or a shape is for pressing. But I cannot imagine what kind of affordances are available to this language such that David can suss out the order of operation on two undifferentiated grooves. Of course presuming that the meaning of the dot and triangle are somehow self-evident to speakers of Architect, David has a 50% chance of getting the order of the grooves right. So we might be able to cut this scene some slack.
But a few scenes later, this is stretched beyond credulity. When David encounters a similarly high-up interface, he is able to ascertain on sight that chording—pressing two controls at once—is possible and necessary for operation. For this interface, he presses and drags 14 different chords flawlessly to open the ancient alien door. This is a much longer sequence involving an interaction that has no affordance.
Looking at the design of the command, an evaluation depends on whether it’s just a command or a password. If it’s just a control that means “open the door,” why would it take 14 characters’ worth of a command? Is there that much that this door can do? Otherwise a simple press-to-open seems like a more usable design.
If it’s a door security system, then the 14-part input is a security password. This is the more likely interpretation, since the chamber beyond contains the deadly, deadly xenomorph liquid. With this in mind it’s a good design to have a 14-part password that includes a required interaction with no affordance. I’m no statistician, but I put the likelihood of guessing the correct password at 1 in 14 factorial, or around 87,178,291,200 to 1. I have no idea what the odds are for guessing the correct operation of an interaction with zero affordance. We’d have to show some aliens MS-DOS to get some hard numbers, but that seems pretty damned secure. Unfortunately, it also stretches the believability of the scene way past the breaking point to presume that David can just observe the alien login screen and guess the giant password.
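For what it’s worth, the arithmetic checks out. If the password is one specific ordering of 14 distinct inputs, there are 14! possible orderings, which a couple of lines of Python confirm. (This ignores the chording combinations themselves, which would only make the guess space larger.)

```python
import math

# Number of distinct orderings of 14 different chord inputs.
orderings = math.factorial(14)
print(orderings)  # 87178291200

# So blind guessing succeeds about 1 time in 87 billion.
print(f"1 in {orderings:,}")  # 1 in 87,178,291,200
```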