If you’d like to read the reviews for Logan’s Run in chronological order, use this WordPress-hack link: http://scifiinterfaces.com/category/logans-run-1976/?order=asc
For our purposes, Dome City is a service, provided by the citizens’ ancestors to ensure a “good life” for their cloned descendants in a sustainable way, i.e., a way that does not risk the problems of overpopulation. The “good life” in this case is a particular hedonistic vision full of fashion, time at the spa, and easy casual sex.
There’s an ethical, philosophical, and anthropological question about whether this is the “right” sort of life to structure a service around. I suspect it’s a good conversation that will last at least a few beers. Fascinating as that question may be, looking into the interaction design requires us to accept this framework as a given and see how well the touchpoints help these personas address their goals within it.
Sci: F (0 of 4)
How believable are the interfaces?
The Fade Out drug is the only, only interface that’s perfectly believable. And while I can make up some reasons the Clean Up Rig is cool, that’s clearly what I’m bringing to it, and the rest of the bunch, to an interface, has massive problems with fundamental believability and usability. Seriously, the movie is a study in bad design.
Fi: A (4 of 4)
How well do the interfaces inform the narrative of the story?
The interfaces help tell the story of this bizarre dystopia, help paint the “vast, silly spectacle” that Roger Ebert criticized when he wrote his original review in 1976.
Interfaces: D (1 of 4)
How well do the interfaces equip the characters to achieve their goals?
Sure, if you ignore all the usability problems and handwaving the movie does, the characters are getting what they want on a surface level. But ultimately, the service design of Dome City fails for every reason it could fail.
- The system was poorly implemented.
- Its touchpoints are unusable.
- Its touchpoints don’t let its users achieve the system goals.
But the main reason it fails is that it fails to take into account some fundamental aspects of human nature, such as
- The (entirely questionable) tendency towards punctuated serial monogamy in pair bonds
- A desire for self-determination
- Basic self-preservation.
If you don’t understand the goals of your users, you really have no hope of designing for them. And if you’re designing an entire, all-consuming world for those same users, misjudging the human universals puts your entire project—and their world—at risk.
Final Grade C- (5 of 12), MATINEE
Related lessons from the book
- The Übercomputer’s all caps and fixed-width evoke “that look” of early computer interfaces (page 33), as does its OCR sans-serif typeface (page 37) and blue color (page 42).
- The SandPhone would have been much more useful as Augmented Reality (chapter 8, page 157)
- The Aesculaptor could use a complete revamp from the Medical Chapter (chapter 9, page 258), most notably using waveforms (page 263) and making it feel humane (page 281).
- The Evidence Tray reminds us of multifactor authentication (page 118).
- Of course The Circuit appears in the Sex chapter (chapter 13, page 293) and, as my redesign showed, needed to modernize its matchmaking (page 295) and use more subtle cues (page 301). Certainly Jessica-6 could have used a safeword (page 303).
- The Lifeclock reminds us to keep meaningful colors distinguishable.
- The Circuit shows why a serial presentation democratizes options.
- The Circuit also shows us that matchmaking must account for compatibility, availability, and interest.
- The Aesculaptor shows us why a system should never fail into a worse state.
- Carrousel implies that we don’t hide the worst of a system, but instead cover it in a dazzle pattern.
- The improvements I suggested for the SandPhone imply that solving problems higher up the goal chain is much harder but more disruptive.
- The Evidence Tray gives us the opposite of the “small interfaces” lesson (page 296): too large an interface can overpromise for small interactions.
I grew up in Texas, and had the chance to visit the Fort Worth Water Gardens and Market Center where some of the scenes were shot. So I have a weirdly personal connection to this movie. Despite that, on review, the interfaces just suck, bless their little interactive hearts. Use them as fodder for apologetics and perhaps as a cautionary tale, but little, little else.
Such a cool collection of interactive voice response systems, with high fives out to everyone who thought up great (and ofttimes obscure) “talkie computers” from decades of sci-fi from the 1950s to the 2000-teens. By name…
- kedamono x7
- Joe Bloch x10
- Burning x4
- Kelley Strang
- Pixel I/O
- pavellishin x2
- Steve Silvas x2
- Matt Sheehe
- Matt Sheehe
- Joe Bloch
- Matt Sheehe x2
- Lela x2
- Clayton x2
The list of talkie computers we collected is “Robby the Robot, Adam Link, Jupiter 2, Landru, M-5, Nomad probe, The Oracle, Beta-V, HAL, Colossus, BOXX, Thermostellar Triggering Device, IRAC, the Übercomputer, C-3PO, Alex 7000, Proteus IV, Zen, Orac, Slave, V-Ger, Artificial persons, Dr. Theopolis and TWKE-4, MU-TH-UR 6000, KITT, Replicants, Image Machine, MCP, SAL, Max, Holly, Kryten!, L7, 790, Sphere, Ship [sic], AMEE, Ship, Andromeda Ascendant, Zero, S.A.R.A.H., Andy the Deputy AI, Icarus, KITT, Otto, Gerty, and Jarvis.” Think you could name the movies and TV shows these are from just from these names?
The next step is to build a collection of the scripts of these interactions, since we’ll be analyzing any peculiar, non-standard English that we find. I’m down to provide these scripts myself, but it would be easier if we crowdsource it. If you’re up to it, head to the following form to add the metadata and line-by-line script of the interaction. You can often find the scripts with a simple Google search, by popping in the VHS/DVD/Blu-Ray you own, or by finding a video of the scene on some online video service and transcribing it from there. We are interested in word-perfect transcriptions. Don’t sweat it if you don’t have the time yourself. As of Thanksgiving weekend, I’ll manually complete any unfinished ones that I find.
The form to add scripts: https://docs.google.com/forms/d/
Logan’s life is changed when he surrenders an ankh found on a particular runner. Instead of being asked to identify, the central computer merely stays quiet a long while as it scans the object. Then its lights shut off, and Logan has a conversation with the computer unlike any he has had before.
The computer asks him to “approach and identify.” The computer gives him, by name, explicit instructions to sit facing the screen. Lights below the seat illuminate. He identifies in this chair by positioning his lifeclock in a recess in the chair’s arm, and a light above him illuminates. Then a conversation ensues between Logan and the computer.
The computer communicates through a combination of voice and screen, on which it shows blue text and occasional illustrative shapes. The computer’s voice is emotionless and soothing. For the most part it speaks in complete sentences. In contrast, Logan’s responses are stilted and constrained, saying “negative” instead of “no,” and prefacing all questions with the word, “Question,” as in, “Question: What is it?”
On the one hand, it’s linguistically sophisticated
Speech recognition and generation would not appear in a commercially released product until four years after the release of Logan’s Run, but there is an odd inconsistency here even for those unfamiliar with the actual constraints of the technology. The computer is sophisticated enough to generate speech with demonstrative pronouns, referring to the picture of the ankh as “this object” and the label as “that is the name of the object.” It can even communicate with pragmatic meaning. When Logan says,
“Question: Nobody reached renewal,”
…and receives nothing but silence, the computer doesn’t object to the fact that his question is not a question. It infers the most reasonable interpretation, as we see when Logan’s following objection is cut off by the computer saying,
“The question has been answered.”
Despite these linguistic sophistications, it cannot parse anything but the most awkwardly structured inputs? Sadly, this is just an introduction to the silliness that is this interface.
Logan undergoes procedure “033-03,” in which his lifeclock is artificially set to blinking. He is then instructed to become a runner himself and discover where “sanctuary” is. After his adventure outside, performing the assignment he was forced to accept, he is brought in as a prisoner. The computer traps him in a ring of bars, demanding to know the location of Sanctuary. Logan reports (correctly) that Sanctuary doesn’t exist.
On the other hand, it explodes
This freaks the computer out. Seriously. Now, the crazy thing is that the computer actually understands Logan’s answer, because it comments on it. It says, “Unacceptable. The answer does not program [sic].” That means that it’s not a data-type error, as if it got the wrong kind of input. No, the thing heard what Logan was saying. It’s just unsatisfied, and the programmer decided that the best response to dissatisfaction was to engage the heretofore unused red and green pixels in the display, randomly delete letters from the text—and explode. That’s right. He decided that in addition to the Dissatisfaction() subroutine calling the FreakOut(Seriously) subroutine, the FreakOut(Seriously) subroutine in its turn calls Explode(Yourself), Release(The Prisoner), and the WhileYoureAtItRuinAllStructuralIntegrityOfTheSurroundingArchitecture() subroutines.
Frankly, if this is the kind of coding that this entire society was built upon, this whole social collapse thing was less deep social commentary and really just a matter of technical debt.
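To belabor the joke, the failure cascade as the movie presents it might be sketched in Python. None of these names come from the film, of course; they just extend the gag above:

```python
class Ubercomputer:
    """A playful sketch of Dome City's central computer as the movie shows it."""

    def evaluate(self, answer: str) -> str:
        # The computer understands the answer; it is merely unsatisfied.
        if answer == "There is no Sanctuary.":
            # "Unacceptable. The answer does not program."
            return self.freak_out(seriously=True)
        return "The question has been answered."

    def freak_out(self, seriously: bool) -> str:
        # A sane system would log the error and re-prompt. Dome City's
        # instead runs the full cascade:
        cascade = [
            "corrupt_display()",   # red/green pixels, deleted letters
            "explode(yourself)",
            "release(the_prisoner)",
            "ruin_all_structural_integrity_of_the_surrounding_architecture()",
        ]
        return " -> ".join(cascade)
```

Run it with `Ubercomputer().evaluate("There is no Sanctuary.")` and you get the whole doomsday chain as a string, which is roughly the level of error handling the Übercomputer demonstrates on screen.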
Hey, small slice of the internet. I’m working with an awesome linguist, Anthony Stone of operativewords.com, on a project, and since I don’t know everything but you do, I’m wondering if you can help. We’re collecting examples of scenes from more serious movies and TV shows where a human is interacting with an artificial intelligence primarily through speech.
Example 1: In the ST:TOS episode "Mirror, Mirror" Captain Kirk speaks with his computer to learn if the ship could be used to get him back in the "good universe." (This dialogue was featured in the Learning chapter of the book.)
Example 2: In the movie "Logan’s Run" Logan speaks with the Übercomputer twice: once for questioning about the ankh, and once to report his findings about Sanctuary.
There are others, but we’d like to collect as many examples as we can to get a good "corpus" to work from on this sooper secret thingy. But of course it’s in the service of a blog post, so contribute away, and we’ll thank you in the post once it finally comes out. What do you think: Can you name any?
Sandmen surrender any physical objects recovered from the bodies of runners to the Übercomputer for evaluation via a strange device I’m calling The Evidence Tray.
As a Sandman enters the large interrogation chamber, a transparent cylinder lowers from the ceiling. At the top of this cylinder an arm continuously rotates, bearing four pin lights. A chrome cone sits in the center of the base. The Sandman can access the interior of the cylinder through a large oblong opening in the side, the top of which is just taller than the Sandmen (who seem to be of near-uniform height).
The Sandman puts any evidence he has found into the bottom of this cylinder. (What if the evidence was too large to fit? What if the critical evidence is not physical, or ephemeral? But I digress.) In response to his placing the objects, lights on the rotating arm illuminate, scanning them. The voice of the Übercomputer prompts the Sandman to “identify,” a request that is repeated on a large screen mounted on the wall in view through the transparent backing of the Evidence Tray.
The Sandman identifies himself by placing his palm on a cone in the cylinder’s center, positioning his lifeclock in the small indentation in its tip. The base section of the cylinder illuminates, and after a pause, the voice and screen confirm that his identity has been “affirmed.” Logan removes his hand, and in a flash of blue light the objects in the tray disappear. The film gives no clue as to whether the objects are teleported somewhere or disintegrated into thin air.
There are of course the usual objections to the authentication. The lifeclock check is really a biometric check, something that Logan “is” (since he can’t remove the lifeclock), and—per the principles of multifactor authentication—he should need to provide an additional factor, such as something he has (like a key) or something he knows (like a password).
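The multifactor principle boils down to: grant access only when checks from at least two distinct factor categories pass. A minimal sketch, with factor categories and the two-factor threshold as my own illustrative assumptions:

```python
# Minimal multifactor authentication sketch. The three classic factor
# categories are "something you are" (biometric), "something you have"
# (token), and "something you know" (secret).
def authenticate(factors: dict) -> bool:
    categories = {"is", "has", "knows"}
    passed = {c for c in categories if factors.get(c, False)}
    # Require at least two distinct categories, not just two checks.
    return len(passed) >= 2

# The Evidence Tray checks only the lifeclock: a single biometric factor.
print(authenticate({"is": True}))                 # False: one factor
print(authenticate({"is": True, "knows": True}))  # True: two factors
```

By this standard the Evidence Tray fails before we even get to the part where it asks you to put your hand in the disintegration chamber.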
There’s another objection to the fact that the authentication requires that his hand be put into a teleport/disintegration chamber. Perhaps narratively this shows the audience the insane levels of trust citizens have in their Nanny Program, but in the real world let’s just say it’s best that you don’t require police to submit to a Flash Gordon Wood Beast just to hand over exhibit A.
There’s a nice touch to the transparent walls allowing him to see the computer screen through it, to get the visual confirmation of what he’s hearing. But I suspect the curved surface also adds a bit of distortion to his view that doesn’t help readability. So the industrial design aspects of the interface sort of even out. Unless I’m missing something. Any industrial designers want to weigh in?
A final objection is the unnecessarily vast architecture that is part of the workflow. Why this giant room with a thin cylinder in the middle of it? Sure there are narrative reasons for it (welcome to this digital heart of darkness) but it seems like something that Sandmen would be doing routinely, and this giant ritual just makes a creepy, big deal about it.
Better might be a wide, waist-high cubby off to the side of their offices, whatever those are, with a wide tray and computer screen. Sandmen could drop the evidence into the tray and place their hands into an authenticator outside the tray, initiating the scan. This would save them the awkward time of waiting for the computer to order them to authenticate, and tightly couple the objects with their identity. The improved semiotics say, “I, Logan, found these and am surrendering them to you.” Then if the computer needed to speak more about it, it could summon them to an interlocution room, or something with a similarly awkward 70s name.
At dispatch for the central computer, Sandmen monitor a large screen that displays a wireframe plan of the city, including architectural detail and even plants, all color coded using saturated reds, greens, and blues. When a Sandman has accepted the case of a runner, he appears as a yellow dot on the screen. The runner appears as a red dot. Weapons fire can even be seen as a bright flash of blue. The red dots of terminated runners fade from view.
Using the small screens and unlabeled arrays of red and yellow lit buttons situated on an angled panel in front of them, the seated Sandman can send a call out to catch runners, listen to any spoken communications, and respond with text and images.
*UXsigh* What are we going to do with this thing? With an artificial intelligence literally steps behind them, why rely on a slow bunch of humans at all for answering questions and transmitting data? It might be better to just let the Sandmen do what they’re good at, and let the AI handle what it’s good at.
But OK, if it’s really that limited of an Übercomputer and can only focus on whatever is occupying it at the moment, at least make the controls usable by people. Let’s do the hard work of reducing the total number of controls, so they can be clustered all within easy reach rather than spread out so you have to move around just to operate them all. Or use your feet or whatever. Differentiate the controls so they are easy to tell apart by sight and touch rather than this undifferentiated mess. Let’s take out a paint pen and actually label the buttons. Do…do something.
This display could use some rethinking as well. It’s nice that it’s overhead, so that dispatch can be thinking about field strategy rather than ground tactics. But if that’s the case, it could use some design help and some strategic information. How about downplaying the saturation on the things that don’t matter that much, like walls and plants? Then the Sandmen can focus more on the interplay of the runner and his assailants. Next you could augment the display with information about the runner, and perhaps a best-guess prediction of where they’re likely to run, maybe the health of individuals, or the amount of ammunition they have.
Which makes me realize that more than anything, this screen could use the hand of a real-time strategy game user interface designer, because that’s what they’re doing. The Sandmen are playing a deadly, deadly video game right here in this room, and they’re using a crappy interface to try and win it.
Sandmen have a clean-up crew to quickly rid the city’s floors of the unsightly corpses they create when they terminate runners. Logan summons one through the CB function of his SandPhone, telling dispatch, “Runner terminated, 0.31, ready for cleanup.”
Minutes later, Cleanup arrives. This crew floats around the city on slow-moving hover platforms that look a little like a vertical knee raise machine with anti-gravity pads and a faulty muffler. The controls aren’t apparent, but the operator maneuvers the platform over the cadaver to spray it with a fast-acting solvent that emits from the base.
This is the surface question of the Cleanup platform interface: if the operators don’t move, how are they controlling the platform? Of course it could be a brain interface, but that’s an easy answer. There are at least three alternative types of input that could explain what we see on screen.
- Force-sensing resistors or strain gauges that read the amount of force applied to a stationary surface and act accordingly. The grips could be outfitted with force strips for each finger, giving a high degree of complex input.
- Gaze interactions, where eye tracking equipment registers glances, blinks, pupil dilation, and eyelid spread over time as controls.
- Subvocal recognition, which lets a user move their throat and mouth as if they were speaking and, even without any actual sound being produced, registers it as speech input.
Each of these technologies permits input via movements that are difficult to detect through observation, but facilitates rich enough input to pilot a personal vehicle through 3D space. Sadly, this level of sensory sophistication is not in evidence anywhere else in the film, so we’ll just have to chalk it up as a nifty tech for our real-world toolboxes.
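To make the first alternative concrete, here’s a rough sketch of how force strips on a stationary grip might map to motion commands. The thresholds, axis mapping, and function names are entirely my own assumptions, not anything the film shows:

```python
def grip_to_motion(left_force: float, right_force: float,
                   deadzone: float = 0.1) -> tuple:
    """Map force readings (0.0-1.0) from two stationary grip strips to
    (thrust, yaw) commands, with a deadzone so a resting grip doesn't
    move the platform."""
    def squelch(f: float) -> float:
        # Ignore pressure below the deadzone threshold.
        return 0.0 if f < deadzone else f

    l, r = squelch(left_force), squelch(right_force)
    thrust = (l + r) / 2   # squeeze both grips to move forward
    yaw = r - l            # squeeze harder on one side to turn
    return thrust, yaw

print(grip_to_motion(0.05, 0.05))  # (0.0, 0.0): resting grip, no motion
print(grip_to_motion(0.5, 1.0))    # (0.75, 0.5): forward with a right turn
```

The operator stands perfectly still, the platform glides along, and an observer sees nothing but a statue at the controls, which matches what’s on screen.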
And though these technologies are cool, they don’t answer any of the experience or service design questions from the perspective of the Übercomputer. Why is it good for the operator to appear perfectly still in the first place? Is there some reason they need to be dehumanized or robotic? It can’t be that they’re doing something horrible: Sandmen do the actual killing (and as we see, do it gleefully, cruelly) and are highly visible, clearly human participants in the system.