The TET is far enough away from Earth that the crew goes into suspended animation for the initial travel to it. This initial travel is either automated or controlled from Earth. After waking up, the crew speak conversationally with their mission controller Sally.
This conversation between Jack, Vika, and [actual human] Sally happens over a small 2D video communication system. The screen in the middle of the Odyssey’s control panel shows Sally and a small section of Mission Control, presumably back on Earth. Sally confirms with Jack that the readings Earth is getting remotely from the Odyssey match what is actually happening on site.
Soon after, Mission Control responds immediately to Jack’s initial OMS burn, letting him know that he is over-stressing the ship trying to escape the TET. Jack is then able to make adjustments (cut thrust) before the stress damages the Odyssey.
FTL Communication
Communication between the Odyssey and Earth happens in real time. When you look at the science of it all, this is more than a little surprising.
Several times throughout the movie, Loki places the point of the glaive on a victim’s chest near their heart, and a blue fog passes from the stone to infect them: an electric blackness creeps upward along their skin from their chest until it reaches their eyes, which turn fully black for a moment before becoming the same ice blue as the glaive’s stone, and we see that the victim is now enthralled into Loki’s servitude.
You have heart.
The glaive is very, very terribly designed for this purpose.
In the prior post we looked at the HUD display from Tony’s point of view. In this post we dive deeper into the 2nd-person view, which turns out to be not what it seems.
The HUD itself displays a number of core capabilities across the Iron Man movies prior to its appearance in The Avengers. Cataloguing these capabilities lets us understand (or backworld) how he interacts with the HUD, equipping us to look for its common patterns and possible conflicts. In the first-person view, we saw it looked almost entirely like a rich agentive display, but with little interaction. But then there’s this gorgeous 2nd-person view.
When in the first film Tony first puts the faceplate on and says to JARVIS, “Engage heads-up display”…we see things from a narrative-conceit, 2nd-person perspective, as if the helmet were huge and we are inside the cavernous space with him, seeing only Tony’s face and the augmented reality interface elements. You might be thinking, “Of course it’s a narrative conceit. It’s not real. It’s in a movie.” But what I mean by that is that even in the diegesis, the Marvel Cinematic World, this is not something that could be seen. Let’s move through the reasons why.
In the last post we discussed some necessary new terms to have in place for the ongoing deep-dive examination of the Iron Man HUD. Before we continue, there’s one last bit of meandering philosophy and fan theory I’d like to propose, one that touches on our future relationship with technology.
The Iron Man is not Tony Stark. The Iron Man is JARVIS. Let me explain.
Tony can’t fire weapons like that
The first piece of evidence is that most of the weapons he uses are unlikely to be fired by him. Take the repulsor rays in his palms. I challenge readers to strap a laser perpendicular to each of their palms and reliably target moving objects that are actively trying to avoid getting hit, while, say, roller skating through an obstacle course. Because that’s what he’s doing as he flies around incapacitating Hydra agents and knocking around Ultrons. The weapons are not designed for Tony to operate manually with any accuracy. But that’s not true for the artificial intelligence.
Note: In honor of the season, Rogue One opening this week, and the reviews of Battlestar Galactica: The Mini-Series behind us, I’m reopening the Star Wars Holiday Special reviews, starting with the show-within-a-show, The Faithful Wookiee. Refresh yourself on the plot if it’s been a while.
On board the R.S. Revenge, the purple-skinned communications officer announces he’s picked up something. (Genders are a goofy thing to ascribe to alien physiology, but the voice actor speaks in a masculine register, so I’m going with it.)
He attends to a monitor, below which are several dials and controls in a panel. To the right of the monitor screen there are five physical controls.
A stay-state toggle switch
A stay-state rocker switch
Three dials
The lower two dials have rings under them on the panel that accentuate their color.
Map View
The screen is a dark purple overhead map of the impossibly dense asteroid field in which the Revenge sits. A light purple grid divides the space into 48 squares. This screen has text all over it, but written in a constructed orthography unmentioned in Wookieepedia. In the upper center and upper right are unchanging labels. Some triangular label sits in the lower left. In the lower right corner, text appears and disappears too fast for (human) reading. The middle right side of the screen is labeled in large characters, but they also change too rapidly to make much sense of them.
While recording a podcast with the guys at DecipherSciFi about the twee(n) love story The Space Between Us, we spent some time kvetching about how silly it was that many of the scenes involved Gardner, on Mars, in a real-time text chat with a girl named Tulsa, on Earth. It’s partly bothersome because throughout the rest of the movie, the story tries for a Mohs sci-fi hardness of, like, 1.5, somewhere between Real Life and Speculative Science, so it can’t really excuse itself through the Applied Phlebotinum that, say, Star Wars might use. The rest of the film feels like it’s trying to have believable science, but during these scenes it just whistles, looks the other way, and hopes you don’t notice that the two lovebirds are breaking the laws of physics as they swap flirt emoji.
Hopefully unnecessary science brief: Mars and Earth are far away from each other. Even if the communications transmissions are sent at light speed between them, it takes much longer than the 1 second of response time required to feel “instant.” How much longer? It depends. The planets orbit the sun at different speeds, so aren’t a constant distance apart. At their closest, it takes light 3 minutes to travel between Mars and Earth, and at their farthest—while not being blocked by the sun—it takes about 21 minutes. A round-trip is double that. So nothing akin to real-time chat is going to happen.
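If you want to check the numbers yourself, here’s a back-of-the-envelope sketch. The two distances are approximate figures I’m assuming for the closest approach and for the far side of the sun; the speed of light is the only hard constant in it.

```python
# Back-of-the-envelope: one-way light delay between Earth and Mars.
C_KM_PER_S = 299_792.458  # speed of light in km/s

def light_delay_minutes(distance_km: float) -> float:
    """One-way signal travel time in minutes."""
    return distance_km / C_KM_PER_S / 60

# Approximate Earth-Mars distances (assumed values):
closest_km = 54.6e6    # a rare close approach
farthest_km = 401e6    # superior conjunction, i.e. behind the sun

print(f"closest:  {light_delay_minutes(closest_km):.1f} min one-way")
print(f"farthest: {light_delay_minutes(farthest_km):.1f} min one-way")
# -> roughly 3.0 and 22.3 minutes one-way; double each for a round trip.
# The 21-minute figure above excludes the blocked-by-the-sun geometry.
```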
But I’m a designer, a sci-fi apologist, and a fairly talented backworlder. I want to make it work. And perhaps because of my recent dive into narrow AI, I began to realize that, well, in a way, maybe it could. It just requires rethinking what’s happening in the chat.
Let’s first acknowledge that we solved long-distance communication a long time ago. Gardner and Tulsa could just, you know, swap letters or, like the characters in 2001: A Space Odyssey, record video messages. There. Problem solved. It’s not real-time interaction, but it gets the job done. But kids aren’t so much into pen pals anymore, and we have to acknowledge that Gardner doesn’t want to tip his hand that he’s on Mars (it’s a grave NASA secret, for plot reasons). So the question is how we could make it work so it feels like a real-time chat to her. Let’s first solve it for the case where he’s trying to disguise his location, and then see how it might work when both participants are in the know.
Fooling Tulsa
Since 1984 (ping me, as always, if you can think of an earlier reference) sci-fi has had the notion of a digitally replicated personality. Here I’m thinking of Gibson’s Neuromancer and the RAM boards on which Dixie Flatline “lives.” These RAM boards house an interactive digital personality of a person, built out of a lifetime of digital traces left behind: social media, emails, photos, video clips, connections, expressed interests, etc. Anyone in that story could hook the RAM board up to a computer, and have conversations with the personality housed there that would closely approximate how that person would respond (or would have responded) in real life.
Listen to the podcast for a mini-rant on translucent screens, followed by apologetics.
Is this likely to actually happen? Well, it kind of already is. Here in the real world, we’re seeing early, crude “me bots” populate the net, taking baby steps toward the same thing. (See MessinaBot, https://bottr.me/, https://sensay.it/, the forthcoming http://bot.me/.) By the time we actually get a colony to Mars (plus the 16 years for Gardner to mature), mebot technology should be able to stand in for him convincingly enough in basic online conversations.
Training the bot
So in the story, he would look through cached social media feeds to find a young lady he wanted to strike up a conversation with, and then ask his bot-maker engine to build, from her public social media, a TulsaBot he could chat with in order to train his own bot for conversations. During this training, the TulsaBot would chat about topics of interest gathered from her social media. He could pause the conversation to look up references or prepare convincing answers to the trickier questions TulsaBot asks. He could also add some topics to the conversation they might have in common, and questions he might want to ask her. By doing this, his GardnerBot isn’t just some generic thing he sends out to troll any young woman with. It’s a more genuine, interactive first “letter” sent directly to her. He sends this GardnerBot to servers on Earth.
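What might the guts of that bot-maker engine look like? Here’s a minimal sketch, with wholly invented names and a toy topic heuristic standing in for whatever language modeling the real thing would use.

```python
# Hypothetical sketch of the bot-maker engine's core data structure.
# Every class and function here is invented for illustration.
from dataclasses import dataclass, field

@dataclass
class PersonaBot:
    name: str
    # topic -> confidence (0..1) that the bot can speak to it in character
    topics: dict[str, float] = field(default_factory=dict)

    def learn(self, topic: str, confidence: float) -> None:
        # Keep the best confidence reached on each topic.
        self.topics[topic] = max(self.topics.get(topic, 0.0), confidence)

def build_from_feed(name: str, public_posts: list[str]) -> PersonaBot:
    """Crudely mine cached public posts for topics of interest."""
    bot = PersonaBot(name)
    for post in public_posts:
        for word in post.lower().split():
            if len(word) > 5:           # toy stand-in for topic extraction
                bot.learn(word, 0.3)    # each mention is a weak signal
    return bot

# Gardner builds a TulsaBot to rehearse against, then raises GardnerBot's
# confidence on her topics as he prepares answers.
tulsa_bot = build_from_feed("Tulsa", ["Flying lessons over the desert again"])
gardner_bot = PersonaBot("Gardner")
gardner_bot.learn("flying", 0.9)  # he rehearsed this one with TulsaBot
```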
A demonstration of a chat with a short Martian delay. (Yes, it’s an animated gif.)
Launching the bot
GardnerBot would wait until it saw Tulsa online and strike up the conversation with her. It would send a signal back to Gardner that the chat has begun so he can sit on his end and read a space-delayed transcript of the chat. GardnerBot would try its best to manage the chat based on what it knows about awkward teen conversation, Turing test best practices, what it knows about Gardner, and how it has been trained specifically for Tulsa. Gardner would assuage some of his guilt by having it dodge and carefully frame the truth, but not outright lie.
Buying time
If during the conversation she raised a topic or asked a question for which GardnerBot was not trained, it could promise an answer later, and then deflect, knowing that it should pad the conversation in the meantime (a code sketch of this stalling logic follows the examples below):
Ask her to answer the same question first, probing into details to understand rationale and buy more time
Dive down into a related subtopic in which the bot has confidence, and which promises to answer the initial question
Deflect conversation to another topic in which it has a high degree of confidence and lots of detail to share
Text a story that Gardner likes to tell that is known to take about as long as the current round-trip signal
Example
TULSA
OK, here’s one: If you had to live anywhere on Earth where they don’t speak English, where would you live?
GardnerBot has a low confidence that it knows Gardner’s answer. It could respond…
(you first) “Oh wow. That is a tough one. Can I have a couple of minutes to think about it? I promise I’ll answer, but you tell me yours first.”
(related subtopic) “I’m thinking about this foreign movie that I saw one time. There were a lot of animals in it and a waterfall. Does that sound familiar?”
(new topic) “What? How am I supposed to answer that one? 🙂 Umm…While I think about it, tell me…what kind of animal would you want to be reincarnated as? And you have to say why.”
(story delay) “Ha. Sure, but can I tell a story first? When I was a little kid, I used to be obsessed with this music that I would hear drifting into my room from somewhere around my house…”
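As promised above, here’s a minimal sketch of that stalling logic. The names, thresholds, and tactic list are all mine, and it assumes some upstream scorer that rates how confidently the bot can answer on Gardner’s behalf.

```python
# Sketch of GardnerBot's deflection logic. All names and thresholds are
# invented; assume an upstream model supplies the confidence score.
import random

STALL_TACTICS = [
    "you_first",         # ask her to answer her own question first
    "related_subtopic",  # drill into an adjacent, well-known topic
    "new_topic",         # deflect to a high-confidence topic
    "story_delay",       # tell a story timed to the round-trip signal
]

def respond(topic: str, confidence: float, round_trip_s: float) -> str:
    if confidence >= 0.8:
        return f"answer:{topic}"          # safe to answer in character
    # Low confidence: stall at least one round trip, so the transcript can
    # reach Mars, Gardner can coach the bot, and the coaching can return.
    if round_trip_s > 600:
        return "stall:story_delay"        # the longest-running tactic
    return "stall:" + random.choice(STALL_TACTICS[:3])

# A wide Earth-Mars separation: ~21-minute one-way, ~42-minute round trip.
print(respond("live_anywhere_on_earth", confidence=0.2, round_trip_s=2520))
```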
Lagged-realtime training
Each of those responses is a delay tactic that allows the chat transcript to travel to Mars for Gardner to do some bot training on the topic. He would be watching the time-delayed transcript of the chat, keeping an eye on an adjacent track of data containing meta information about what the bot is doing, conversationally speaking. When he saw it hit a low-confidence or high-stakes topic and deflect, it would provide a chat window for him to tell the GardnerBot what it should do or say.
To the stalling GARDNERBOT…
GARDNER
For now, I’m going to pick India, because it’s warm and I bet I would really like the spicy food and the rain. Whatever that colored powder festival is called. I’m also interested in their culture, Bollywood, and Hinduism.
As he types, the message travels back to Earth, where GardnerBot begins to incorporate his answers into the chat…
At a natural break in the conversation…
GARDNERBOT
OK. I think I finally have an answer to your earlier question. How about…India?
TULSA
India?
GARDNERBOT
Think about it! Running around in warm rain. Or trying some of the street food under an umbrella. Have you seen YouTube videos from that festival with the colored powder everywhere? It looks so cool. Do you know what it’s called?
Note that the bot could easily look it up and replace “that festival with the colored powder everywhere” with “Holi Festival of Color,” but it shouldn’t. Gardner doesn’t know that fact, so the bot shouldn’t pretend it knows it. Cyrano de Bergerac software—making him sound more eloquent, intelligent, or charming than he really is to woo her—would be a worse kind of deception. Gardner wants to hide where he is, not who he is.
That said, Gardner should be able to direct the bot, to change its tactics. “OMG. GardnerBot! You’re getting too personal! Back off!” It might not be enough to cover a flub made 42 minutes ago, but of course the bot should know how to apologize on Gardner’s behalf and ask conversational forgiveness.
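Taken together, this lagged-realtime loop is easy to picture as two parallel tracks: the delayed transcript itself, and a meta track that tells Gardner when to open a coaching window. A minimal sketch, again with invented names and fields:

```python
# Sketch of the dual-track transcript Gardner watches from Mars.
# The structure and field names are invented for illustration.
from dataclasses import dataclass

@dataclass
class TranscriptLine:
    speaker: str       # "TULSA" or "GARDNERBOT"
    text: str
    confidence: float  # the bot's confidence when handling this line
    tactic: str        # "answer", "stall:story_delay", "apologize", ...

def needs_coaching(line: TranscriptLine) -> bool:
    # Open a coaching window whenever the bot deflected or was unsure.
    return line.tactic.startswith("stall") or line.confidence < 0.5

delayed_feed = [
    TranscriptLine("TULSA", "Where on Earth would you live?", 0.2,
                   "stall:you_first"),
    TranscriptLine("GARDNERBOT", "You tell me yours first!", 0.2,
                   "stall:you_first"),
]
for line in delayed_feed:
    if needs_coaching(line):
        print(f"[coach GardnerBot on: {line.text!r}]")
```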
Gotta go
If the signal to Mars got interrupted or the bot got into too much trouble with pressure to talk about low-confidence or high-stakes topics, it could use a believable, pre-rolled excuse to end the conversation.
GARDNERBOT
Oh crap. Will you be online later? I’ve got chores I have to do.
Then, Gardner could chat with TulsaBot on his end without time pressure to refine GardnerBot per their most recent topics, which would be sent back to Earth servers to be ready for the next chat.
In this way he could have “chats” with Tulsa that are run by a bot but quite custom to the two of them. It’s really Gardner’s questions, topics, jokes, and interests, but a bot-managed delivery of these things.
So it could work, but does it fit the movie? I think so. It would be believable because he’s a nerd raised by scientists. He made his own robot; why not his own bot?
From the audience’s perspective, it might look like they’re chatting in real time, but subtle cues on Gardner’s interface reward the diligent with hints that he’s watching a time delay. Maybe the chat we see in the film is even just cleverly edited to remove the bots.
How he manages to hide this data stream from NASA to avoid detection is another question better handled by someone else.
An honest version: bot envoy
So that solves the logic from the movie’s perspective but of course it’s still squickish. He is ultimately deceiving her. Once he returns to Mars and she is back on Earth, could they still use the same system, but with full knowledge of its botness? Would real world astronauts use it?
Would it be too fake?
I don’t think it would be too fake. Sure, the bot is not the real person, but neither are the pictures, videos, and letters we fondly keep with us as we travel far from home. We know they’re just simulacra, souvenir likenesses of someone we love. We don’t throw these away in disgust for being fakes. They are precious because they are reminders of the real thing. So, too, would the themBot be.
GARDNER
Hey, TulsaBot. Remember when we were knee deep in the Pacific Ocean? I was thinking about that today.
TULSABOT
I do. It’s weird how it messes with your sense of balance, right? Did you end up dreaming about it later? I sometimes do after being in waves a long time.
GARDNER
I can’t remember, but someday I hope to come back to Earth and feel it again. OK. I have to go, but let me know how training is going. Have you been on the G machine yet?
Nicely, you wouldn’t need stall tactics in the honest version. Or maybe it uses them, but can be called out.
TULSA
GardnerBot, you don’t have to stall. Just tell Gardner to watch Mission to Mars and update you. Because it’s hilarious and we have to go check out the face when I’m there.
Sending your loved one the transcript will turn it into a kind of love letter. The transcript could even be appended with a letter that jokes about the bot. The example above was too short for any semi-realtime insertions in the text, but maybe that would encourage longer chats. Then the bot serves as charming filler, covering the delays between real contact.
Ultimately, yes, I think we can backworld what looks physics-breaking into something that makes sense, and might even be a new kind of interactive memento between interplanetary sweethearts, family, and friends.
So I mentioned in the intro to this review that I was drawn to review Doctor Strange (with my buddy and co-reviewer Scout Addis) because the Cloak displays some interesting qualities in relation to the book I just published. Buy it, read it, review it on amazon.com, it’s awesome.
That sales pitch done, I can quickly cover the key concepts here.
A tool, like a hammer, is a familiar but comparatively dumb category of thing that only responds to a user’s input. Tool has been the model of the thing we’re designing in interaction design for, oh, 60 years, but it is being mostly obviated by narrow artificial intelligence, which can be understood as automatic, assistive, or agentive.
Assistive technology helps its user with the task she is focused on: drawing her attention, providing information, making suggestions, maybe helping augment her precision or force. If we think of a hammer again, an assistive hammer might draw her attention to the best angle to strike the nail, or use an internal gyroscope to gently correct her off-angle strike.
Agentive technology does the task for its user. Again with the hammer, she could tell hammerbot (a physical agent, but there are virtual ones, too) what she wants hammered and how. Her instructions might be something like: hammer a ha’penny nail every decimeter along the length of this plinth. As it begins to pound away, she can then turn her attention to mixing paint or whatever.
When I first introduce people to these distinctions, I step one rung up on Wittgenstein’s Ladder and talk about products that are purely agentive or purely assistive, as if agency were a quality of the technology. (Thanks to TU prof P.J. Stappers for distinguishing these as ontological and epistemological approaches.) The Roomba, for example, is almost wholly agentive as a vacuum. It has no handle for you to grab, because it does the steering and pushing and vacuuming.
Once you get these basic ideas in your head, we can take another step up the Ladder together and clarify that agency is not necessarily a quality of the thing in the world. It’s subtler than that. It’s a mode of relationship between user and agent, one which can change over time. Sophisticated products should be able to shift their agency mode (between tool, assistant, agent, and automation) according to the intentions and wishes of their user. Hammerbot is useful, but still kind of dumb compared to its human. If there’s a particularly tricky or delicate nail to be driven, our carpenter might ask hammerbot’s assistance, but really, she’ll want to handle that delicate hammering herself.
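If it helps to see that mode-shifting idea concretely, here’s a minimal sketch. The enum and the hammerbot API are my invention, just to show agency living in the relationship rather than in the device.

```python
# A sketch of agency as a mode of the user-product relationship.
# The enum and HammerBot API are invented for illustration.
from enum import Enum, auto

class AgencyMode(Enum):
    TOOL = auto()        # user does the task; device just responds
    ASSISTANT = auto()   # user does the task; device advises and corrects
    AGENT = auto()       # device does the task per the user's instructions
    AUTOMATION = auto()  # device does the task with no ongoing input

class HammerBot:
    def __init__(self) -> None:
        self.mode = AgencyMode.AGENT

    def set_mode(self, mode: AgencyMode) -> None:
        # The product shifts modes at the user's wish, not on its own.
        self.mode = mode

bot = HammerBot()
bot.set_mode(AgencyMode.ASSISTANT)  # tricky nail: she takes over, it advises
```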
Since I only manage to restart The Star Wars Holiday Special reviews right around the time a new Star Wars franchise movie comes out, many of you may have forgotten it was even being reviewed. Well, it is. If you need to catch up, or have joined this blog after I began it years ago, you can head back to the beginning to read about the plot and the analyses so far. It’s not pretty.
When we last left the Special, Lumpy was distracted from the Stormtrooper ransack of their home by watching The Faithful Wookiee. The 6 analyses of that film focused on the movie from a diegetic perspective, as if it were a movie like any other on this blog, dealing mostly with its own internal “logic.”
Picking up, we need to look at The Faithful Wookiee from a “hyperdiegetic” perspective, that is, in the context of the show in which it occurs: The Star Wars Holiday Special. Please note that, departing from the mission statement for a bit, these questions are not about the interfaces, but about the backworlding that informs these interfaces.
To provide a Victim Card to the Robot Asesino, Orlak inserts it into an open slot in the robot’s chest, which then illuminates, confirming that the instructions have been received.
There is, I must admit, a sort of lovely, morbid poetry to a cardiogram being inserted into a slot where the robot heart would be to give the robot instructions to end the beating of the human heart described in the cardiogram. And we don’t see a lot of poetry in sci-fi interface designs. So, props for that.
The illumination is a nice bit of feedback, but I think it could convey the information in more useful and cinegenic ways.
In this new scenario…
Orlak has the robot pull back its coat
The chamfered slot is illuminated, signaling “card goes here.”
As Orlak inserts the target card, the slot light dims as the chest-cavity light brightens, signaling “I have the card.”
After a moment, the chest-cavity light turns blood red, signaling confirmation of the victim and the new dastardly mission.
When the robot returns to Orlak after completing a mission, the red light would dim as the slot light illuminates again, signaling that it is ready for its next mission.
These changes improve the interface by first drawing the user’s locus of attention exactly where it needs to go, and then distinguishing the internal system states as they happen. It would also work for the audience, who understands by association that red means danger.
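Sketched as a tiny state machine (my framing of the scenario above, not anything from the film), the lighting scheme maps one legible signal to each state change:

```python
# The improved feedback scheme as a minimal state machine.
# State names and transitions are invented for illustration.
from enum import Enum, auto

class SlotState(Enum):
    READY = auto()       # slot lit: "card goes here"
    CARD_TAKEN = auto()  # chest cavity lit: "I have the card"
    ON_MISSION = auto()  # chest cavity red: victim and mission confirmed

TRANSITIONS = {
    (SlotState.READY, "insert_card"): SlotState.CARD_TAKEN,
    (SlotState.CARD_TAKEN, "confirm_victim"): SlotState.ON_MISSION,
    (SlotState.ON_MISSION, "mission_complete"): SlotState.READY,
}

def step(state: SlotState, event: str) -> SlotState:
    # Each transition corresponds to exactly one lighting change for the
    # user (and the audience): slot light -> chest light -> blood red.
    return TRANSITIONS.get((state, event), state)

s = SlotState.READY
for event in ("insert_card", "confirm_victim", "mission_complete"):
    s = step(s, event)
print(s)  # SlotState.READY — the lit slot signals readiness for a new card
```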
The shape of the slot is pretty good for its base usability. It has clear affordances with its placement, orientation, and metallic lining. There’s plenty of room to insert the target card. It might benefit from a fillet or chamfer for the slot, to help avoid accidentally crumpling the paper cards when they are aimed poorly.
In addition to the tactical questions of illumination and shape of the slot, I have a few strategic questions.
There is no authorization in evidence. Can just anyone specify a target? Why doesn’t Gaby use her luchadora powers to Spin-A-Roonie a target card with Orlak’s face on it and let the robot save the day? Maybe the robot has a whitelist of heartbeats, and would fight to resist anyone else, but that’s just me making stuff up.
Also I’m not sure why the card stays in the robot. That leaves a discoverable paper trail of its crimes, perfect for a Scooby to hand over to the federales. Maybe the robot has some incinerator or shredder inside? If not, it would be better from Orlak’s perspective to design it as an insert-and-hold slot, which would in turn require a redesign of the card to have some obvious spot to hold it, and a bump-in on the slot to make way for fingers. Then he could remove the incriminating evidence and destroy it himself and not worry whether the robot’s paper shredder was working or not.
Another problem is that, since the robot doesn’t talk, it would be difficult to find out who its current target is at any given time. Since anyone can supply a target, Orlak can’t just rely on his memory to be certain. If the card was going to stay inside, it would be better to have it displayed so it’s easy to check.
How would Orlak cancel a target?
It is unclear how Orlak specifies whether the target is to be kidnapped or killed, even though some are kidnapped and some are killed.
It’s also unclear how Orlak might rescind or change an order once given.
It is also unclear how the assassin finds its target. Does it have internal maps with addresses? Or does it have unbelievably good hearing that can listen to every sound nearby, isolate the particular heartbeat in question, and just head in that direction, destroying any walls it encounters? Or can it reasonably navigate human cities and interiors to maintain its disguise? Because that would be some amazing technology for 1969. This last is admittedly not an interface question, but a backworlding question for believability.
So there’s a lot missing from the interface.
It’s the robot assassin designer’s job to not just tick a box to tell themselves that they have provided feedback, but to push through the scenarios of use to understand in detail how to convey to the evil scientist what’s happening with his murderous intent.
My partner and I spent much of March watching episodes of Star Trek: The Next Generation in mostly random order. I’d seen plenty of Trek before—watching pretty much all of DS9 and Voyager as a teenager, and enjoying the more recent J.J. Abrams reboot—but it’s been years since I really considered the franchise as a piece of science fiction. My big takeaway is…TNG is bonkers, and that’s okay. The show is highly watchable because it’s really just a set of character moments, risk taking, and ethical conundrums strung together with pleasing technobabble, which soothes and hushes the parts of our brain that might object to the plot based on some technicality. It’s a formula that will probably never lose its appeal.
But there is one thing that does bother me: how can the crew respond to Picard’s orders so fast? Like, beyond-the-limits-of-reason fast.
How are you making that so?
When the Enterprise-D encounters hostile aliens, ship malfunctions, or a mysterious space-time anomaly, we often get dynamic moments on the bridge that work like this. Data, Worf and the other bridge crew, sometimes with input from Geordi in engineering, call out sensor readings and ship functionality metrics. Captain Picard stares toward the viewscreen/camera and gives orders, sometimes intermediated by Commander Riker. Worf or Data will tap once or twice on their consoles and then quickly report the results—i.e. “our phasers have no effect” or “the warp containment field is stabilizing,” that sort of thing. It all moves very quickly, and even though the audience doesn’t quite know the dangers of tachyon radiation or how tricky it is to compensate for subspace interference, we feel a palpable urgency. It’s probably one of the most recognizable scene-types in television.
Now, extradiegetically, I think there are very good reasons to structure the action this way. It keeps the show moving, keeps the focus on the choices, rather than the tech. And of course, diegetically, their computers would be faster than ours, responding nearly instantaneously. The crew are also highly trained military personnel, whose focus, reaction speed, and knowledge of the ship’s systems are kept sharp by regular drills. The occasional scenes we get of tertiary characters struggling with the controls only drive home how elite the Enterprise senior staff are.
Just kidding, we love ya, Wil.
Nonetheless, it is one thing to shout out the strength of the ship’s shields. No doubt Worf has an indicator at tactical that’s as easy to read as your laptop’s battery level. That’s bound to be routine. But it’s quite another for a crewmember to complete a very specific and unusual request in what seems like one or two taps on a console. There are countless cases of the deflector dish or tractor beam being “reconfigured” to emit this or that kind of force or radiation. Power is constantly being rerouted from one system to another. There’s a great deal of improvisational engineering by all characters.
Just to pick examples in my most recent days of binging: in “Descent, Part 2,” for instance, Beverly Crusher, as acting captain, tells the ensign at ops to launch a probe with the ship’s recent logs on it, as a warning to Starfleet, thus freeing the Enterprise to return through a transwarp conduit to take on the Borg. Or in the DS9 episode “Equilibrium”—yes, we’ve started on the next series now that TNG is off Netflix—while investigating a mysterious figure from Jadzia’s past, Sisko instructs Bashir to “check the enrollment records of all the Trill music academies during Belar’s lifetime.” In both cases, the order is complete in barely a second.
Even for Julian Bashir—a doctor and secretly a mutant genius—there is no way for a human to perform such a narrow and out-of-left-field search without entering a few parameters, perhaps navigating via menus to the correct database. From a UX perspective, we’re talking several clicks at least!
There is a tension in design between…
Interface elements that allow you to perform a handful of very specific operations quickly (if you know where the switch is), and…
Those that let you do almost anything, but slower.
For instance, this blog has big colorful buttons that make it easy to get email updates about new posts or to donate to a tip jar. If you want to find a specific post, however, you have to type something into the search box or perhaps scroll through the list of TV/movie properties on the right. While the 24th Century no doubt has somewhat better design than WordPress, they are still bound by this tension.
Of course it would be boring to wait while Bashir made the clicks required to bring up the Trill equivalent of census records or LexisNexis. With movie magic they simply edit out those seconds. But I think it’s interesting to indulge in a little backworlding and imagine that Starfleet really does have the technology to make complex general computing a breeze. How might they do it?
Enter the Ship’s AI
One possible answer is that the ship’s Computer—a ubiquitous and omnipresent AI—is probably doing most of the heavy lifting. Much like how Iron Man is really JARVIS with a little strategic input from Tony, I suspect that the Computer listens to the captain’s orders and puts the appropriate commands on the relevant crewman’s console the instant the words are out of Picard’s mouth. (With predictive algorithms, maybe even just before.) The crewman then merely has to confirm that the Computer correctly interpreted the orders and press execute. Similarly, the Computer must be constantly analyzing sensor data and internal metrics and curating the most important information for the crew to call out. This would be in line with the Active Academy model proposed in relation to Starship Troopers.
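As a toy illustration of that confirm-and-execute loop, here’s a sketch. The keyword routing and class names are mine, standing in for whatever speech understanding the Computer actually does.

```python
# Sketch of "Clippy-ing" orders onto a console: the Computer parses the
# captain's spoken order, stages a command at the relevant station, and
# a crewman confirms with a tap or two. All names are invented.
from dataclasses import dataclass

@dataclass
class StagedCommand:
    station: str   # "ops", "tactical", "conn", ...
    action: str
    confirmed: bool = False

def interpret_order(order: str) -> StagedCommand:
    # Stand-in for the Computer's speech understanding: route keywords
    # to a station and a concrete, ready-to-run command.
    text = order.lower()
    if "probe" in text:
        return StagedCommand("ops", "launch log probe, warning-beacon mode")
    if "phasers" in text:
        return StagedCommand("tactical", "lock phasers on target")
    return StagedCommand("conn", f"clarify: {order!r}")

cmd = interpret_order("Launch a probe with our recent logs aboard")
cmd.confirmed = True   # the crewman's one or two taps: confirm and execute
print(cmd)
```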
Centaurs, Minotaurs, and anticipatory computing
I’ve heard this kind of human-machine relationship called “Centaur Computing.” In chess, for instance, some tournaments have found that human-computer teams outperform either humans or computers working on their own. This is not necessarily intuitive, as one would think that computers, as the undisputed better chess players, would be hindered by having an imperfect human in the mix. But in fact, when humans can offer strategic guidance, choosing between potential lines that the computer games out, they often outmaneuver pure AIs.
I often contrast Centaur Computing with something I call “Minotaur Computing.” In the Centaur version—head of a man on the body of a beast—the human makes the top-level decision and the computer executes. In Minotaur Computing—head of a beast with the body of a man—the computer calls the shots and leaves it up to human partners to execute. An example of this would be the machine gods in Person of Interest, which have no Skynet Terminator armies but instead recruit and hire human operatives to carry out their cryptic plans.
In some ways this kind of anticipatory computing is simply a hyper-advanced version of AI features we already have today, such as when Gmail offers to complete my sentence when I begin to type “thank you for your time and consideration” at the end of a cover letter.
Hi, it looks like you’re trying to defeat the Borg…
In this formulation, the true spiritual ancestor of the Starfleet Computer is Clippy, the notorious Microsoft Word anthropomorphic paperclip helper, which would pop up and make suggestions like “It looks like you’re writing a letter. Would you like help?” Clippy was much maligned in popular culture for being annoying, distracting, and the face of what was in many ways a clunky, imperfect software product. But the idea of making sense of the user’s intentions and offering relevant options isn’t always a bad one. The Computer in Star Trek performs this task so smoothly, efficiently, and in-the-background, that Starfleet crews are able to work in fast-paced harmony, acting on both instinct and expertise, and staying the heroes of their stories.
One to beam into the Sun, Captain.
Admittedly, this deftness is a bit at odds with the somewhat obtuse behavior the Computer often displays when asked a question directly, such as demanding you specify a temperature when you request a glass of water. Given how often the Computer suffers strange malfunctions that complicate life on the Enterprise for days at a time, one wonders if the crew feel as though they are constantly negotiating with a kind of capricious spirit—usually benign but occasionally temperamental and even dangerously creative in its interpretations of one’s wishes, like a djinn. Perhaps they rarely complain about or even mention the Computer’s role in Clippy-ing orders onto their consoles because they know better than to insult the digital fairies that run the turbolifts and replicate their food.
All of which brings a kind of mystical cast to those rapid, chain-of-command-tightened exchanges amongst the bridge crew when shit hits the fan. When Picard gives his crew an order, he’s really talking to the Computer. When Riker offers a sub-order, he’s making a judgment call that the Computer might need a little more guidance. The crew are there to act as QA—a general-intelligence safeguard—confirming with human eyes and brain that the Computer is interpreting Picard correctly. The one or two beeps we often hear as they execute a complex command are them merely dismissing incorrect or confused operation-lines. They report back that the probe is ready or the phasers are locked, as the captain wished, and Picard double confirms with his iconic “make it so.” It’s a multilayered checking and rechecking of intentions and plans, much like the military today uses to prevent miscommunications, but in this case with the added bonus of keeping the reins on a powerful but not always cooperative genie.
There’s a good argument to be made that this is the relationship we want to have with technology. Smooth and effective, but with plenty of oversight, and without the kind of invasive elements that right now make tech the center of so many conversations. We want AI that gives us computational superpowers, but still keeps us the heroes of our stories.
Andrew Dana Hudson is a speculative fiction author, researcher, and theorist. His first book, Our Shared Storm: A Novel of Five Climate Futures, is fresh off the press. Check it out here. And follow his work via his newsletter, solarshades.club.