Back to the Forbidden Planet

Over the last few posts we’ve covered the Fermi Problem and hypotheses, which of the hypotheses sci-fi likes to write about, and which of the hypotheses it’s strategic to write about. This brings us back around to Forbidden Planet.


As a species, we’re faced with a number of big problems that need solving. Some feel more abstract than others, but it sure would suck if we were wrong about that. And while sci-fi can be pure escapism, when it is, hopefully it serves as a mild indulgence rather than something that lets us ignore problems in the real world. As I’ve said before, it is part of my mission with this blog to get readers to not just watch sci-fi but to use it; to understand its effects and limitations; to decide how believable its scenarios are; and to think about the lessons you can take back with you to the real world.

This is why Forbidden Planet is such a stellar movie for me.

It is a singular example (in the survey at least) of humans encountering an ancient, vastly advanced, dead civilization through the “ruins” of its technology. There was no tense tête-à-tête diplomacy, or sexily-foreign green aliens to seduce, or any of those other Terran imperialist thrills.

I’m not exactly sure which of those two possibilities this image represents.

I don’t want to demean its historical importance. It came after a few decades in which Hollywood produced little sci-fi beyond space opera for kids, and it proved enough of a commercial and critical success that sci-fi suddenly became a serious contender for big-budget attention. That meant broader reach, and more people thinking about speculative futures. (Heck, it meant enough serious sci-fi that I could keep a blog about the genre. So, you know, thanks for that.)

But more than its historical importance is that it’s the best model of a likely future. Just this past May, Adam Frank (an astrophysicist at the University of Rochester) and Woodruff Sullivan published “A New Empirical Constraint on the Prevalence of Technological Species in the Universe.” In the paper they note that the 1,284 new exoplanets discovered by the Kepler observatory scientists put some lower-limit constraints on a few factors in the Drake equation.

Kepler-11 is a sun-like star around which six planets orbit. At times, two or more planets pass in front of the star at once, as shown in this artist's conception of a simultaneous transit of three planets observed by NASA's Kepler spacecraft on Aug. 26, 2010. Image credit: NASA/Tim Pyle

“Three of the seven terms in Drake’s equation are now known. We know the number of stars born each year. We know that the percentage of stars hosting planets is about 100. And we also know that about 20 to 25 percent of those planets are in the right place for life to form. This puts us in a position, for the first time, to say something definitive about extraterrestrial civilizations—if we ask the right question.”

Their work suggests that the odds are in favor of finding alien life—but finding evidence of it long dead. They suggest a shift in our attentions away from contacting a living civilization, towards cosmic archaeology. You know, like Forbidden Planet illustrates.

A graph from "A New Empirical Constraint on the Prevalence of Technological Species in the Universe" showing the lower limit to the number of technological species in the universe as being 2.5x10^-24.
Number of Technological Species Ever in the Universe, from the paper.
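If you want to see where a number like that comes from, the back-of-the-envelope version is short. Here is a Python sketch; note that the star count below is my own round figure, chosen so the result lands on the value in the paper’s graph, and not necessarily Frank and Sullivan’s exact input (estimates of the observable universe’s stellar population vary by orders of magnitude).

```python
# Sketch of the paper's "pessimism line": if the per-planet probability of a
# technological species ever arising (f_bt) is below 1 / (number of
# habitable-zone planets), humanity is likely the only one ever.
# Input numbers are my own rough choices, not the paper's exact ones.

N_STARS = 2e24        # stars in the observable universe (estimates vary widely)
F_PLANETS = 1.0       # fraction of stars hosting planets (~100%, per Kepler)
F_HABITABLE = 0.2     # fraction of those planets in the habitable zone (20-25%)

def pessimism_line(n_stars=N_STARS, f_p=F_PLANETS, f_hz=F_HABITABLE):
    """Probability per habitable-zone planet below which humanity
    would likely be the only technological species ever."""
    n_hz_planets = n_stars * f_p * f_hz
    return 1.0 / n_hz_planets

print(f"{pessimism_line():.1e}")  # -> 2.5e-24
```

The point isn’t the precision of the inputs; it’s that any plausible inputs make the threshold absurdly small.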

Frankly, it could stop there and be canonized for that purpose, but the film goes one better.

We still don’t have good constraints on the other troubling component of Drake’s equation: how long technological civilizations tend to last. That question in turn raises the darker question of what tends to doom them. One possibility is that technology itself is the culprit, which is, again, what Forbidden Planet illustrates.

This is a blog about sci-fi interfaces, and I presume that readers are, like me, directly involved in shaping technology. So it is that this 60-year-old film has a one-two punch. It shows us both what the future will probably be like, and then turns our attention to something we can think about—and work to make right—now.

And that’s sci-fi we can use.


A Fermi strategy

In the first post I gave an overview of the Fermi question and its hypothetical answers. In the second, I reviewed which of the answers sci-fi is given to. In this post I compare the costs of acting on each answer.

Which should we be telling stories about?

Sci-fi likes to tell stories about the Prime Directive Fermi answer. But is it the most useful answer? Keep in mind that most of us are not working in space programs. For us, sci-fi is less a direct inspiration to go build the most kick-ass rocketship we can than a way to inform how we think about, and support, the space program culturally and politically. With that in mind, let’s spend a little bit of time talking about the effects of confronting each hypothesis in our sci-fi. To be able to compare apples to apples, let’s apply the same thinking to each.

  1. What would be the call to action (if any) if this hypothesis is true?
  2. What if this is true, but we fail to act on it?
  3. What if it’s true, and we do act on it?

Warning: This will be long, but if we’re thinking strategy, risk aversion, and opportunity maximization (as we are) we have to be thorough.

Life is rare

All life is precious, Daryl.

These stories tell us to not get our hopes up about thrilling tales of space imperialism. We need to get our shit sorted, since, no, we won’t have peace treaties with Romulan Sith, but we will have our hands full dealing with our own worst natures and the weirdness of natural space problems like black holes and special relativity. While we go about this, we should take advantage of this freakish circumstance by protecting life for the precious thing it is.

What if it’s true, but we fail to act on it?

We squander life’s only chance, fail to protect ourselves or the network of life on which we depend, and die out. That outcome isn’t guaranteed, but the risk is much greater.

What if it’s true, and we act on it?

Then we ensure our (and all life’s) survival, escape the planet before the sun goes red giant, and try to colonize the galaxy to increase life’s chances out there.

Fearful silence

Maybe they’ll just ignore us.

I’ll lump the physical and the informational threats into one discussion bucket, because these serve as similar dire warnings. They tell us that we need to keep quiet and/or deliberately deaf until we know what’s out there, and build strong offense and defense capabilities for when they do show up.

What if it’s true, but we fail to act on it?

We could be advertising our tender, tasty flesh to the nearest thing that would treat us like its personal fast-food depot. Or we could be broadcasting our picturesque and utterly defenseless natural resources. And it is very much in our interest to keep those things intact.

What if it’s true, and we act on it?

You might think that we can shut up and stay hidden while we protect our defensive and maybe even offensive capabilities. The bad news is that ship has sailed. Not only have we shot out a few calling cards voyaging into the void, we’ve been leaking radio emissions for the better part of a century. That spherical announcement will continue through the universe for a long time. Even before humans evolved, our atmosphere was announcing the presence of life through signature biogasses. If there’s a hyperadvanced superpredator out there, they already know about us, and we don’t have the time scales, species coordination, or resources to do anything other than beg forgiveness when they get here.

OK. If we put all our efforts into offense and defense we might slightly increase the odds of or duration of our survival, but the odds are very much against it. We should hope that this Fermi hypothesis is unlikely.

Prime Directives?


Any of the Prime Directives call us to keep striving, inventing, maturing, evolving, and exploring. One day we’ll figure out or accidentally pass the test and BAM—we’ll be having space adventures and chuckling about how long it took us.

What if it’s true, but we fail to act on it?

We continue to be isolated, ignorant, and alone, an embarrassing backwater species unable to pick itself out of the blood, poop, and mud.

What if it’s true, and we act on it?

Since the exact nature of the Prime Directive is unknown to us, our action in this scenario is to just keep at it: performing well and behaving well for our invisible observers, improving our advertising, demonstrating our achievements, knowledge, moral fiber, and compatibility with alien life. Eventually, we pass the test, the universe opens up to us, and we finally get to taste Pan Galactic Gargle Blasters.



I’ll lump these three together because in each case, there’s a “reality” under the surface of things we’ve yet to uncover. But it’s worth noting that each implies we were either put in this circumstance or are deliberately kept here. The call to action for us is to continue as we have been, but be prepared for the nasty shock when it comes. Perhaps the call to action is to try to find the seams of our cage, to prove the nature of our reality, and to identify and maybe learn to communicate with our captors.

What if it’s true, but we fail to act on it?

Failing to act in this case is to…what…not seek out the truth of our reality?

What if it’s true, and we act on it?

Then we look for the cage, the disguises, or the containing display. Maybe we even escape. But when the true nature of reality comes, we’re going to have a very sobering moment. Maybe it will be akin to Dave Bowman when his mind was blown by the monolith in 2001.

But we should ask ourselves: What happens when a dangerous animal escapes its paddock at the zoo? At the very best, the animal is sedated and put back in the zoo. Maybe with its collective memory wiped? Worse is if the animal escapes to the wild, where it learns it has zero of the skills necessary to survive there, even though instinct drove it there. In the worst case, the animal is killed to protect the visitors, or to prevent the rest of the zoo from catching wise. It might be that it’s in our best interest to stay in the pretty and utterly safe fishbowl.


Logistics

I don’t know that the category of logistics means anything in this context. If it’s genuinely logistical reasons that keep us from finding aliens, then our efforts are unlikely to overcome those reasons any better than the efforts of civilizations much more advanced than we are. So the call to action is that it doesn’t matter whether there are or aren’t aliens, because we’ll never encounter them. Then we shift into a Life is Rare circumstance.

Natural disasters

Courtesy NASA/JPL–Caltech

We know these happen, as the geologic record tells many tales of catastrophic terrors long before the modern anthropogenic one, and worse even than the Chicxulub impactor that killed off the dinosaurs. Like most disaster porn, this can be the unifying force humanity needs to band together and figure out a way around, out, or through it. The call to action is for us to get a much more robust sensor network in place, and have scenarios plotted out in advance, with actionable and tested contingency plans for each one. It also implies colonizing the galaxy so all our eggs aren’t in this single planetary basket. Maybe even creating a panspermia technology all our own.

What if it’s true, but we fail to act on it?

We might get blindsided. We might defund (or continue underfunding) the astronomy initiatives that keep an eye out for just these things, or remain too scientifically undereducated to manage them. We could be wiped out.

What if it’s true, and we act on it?

We invest in research, sensors, and defenses such that we can detect and stop the threats to our existence. I’m not sure it will be oil riggers suddenly trained for space travel. But we will be protected. Hopefully this does not come at the cost of exploration, since one of the things we are protecting against is the sun’s red giant phase.

They are inconceivable


If aliens are inconceivable, what is the call to action? It could be to continue forward but be prepared, as we should be with the Zoo hypothesis, for a rude awakening. Another might be to try to accelerate our own evolution so we might become able to conceive of them; but since it’s impossible to know what we’re hoping to conceive of, that seems directionless. Another might be to keep building something bigger than ourselves that might be able to perceive them, like a super artificial intelligence.

What if it’s true, but we fail to act on it?

It might be mundane, like getting our planet paved over for an interstellar bypass. It might be terrible, winding up in the giant maw of the Space Angler Fish, or under the magnifying glass of the terrible Space Pre-Teen. It might be euphoric, if they have a policy of kindness to lower-order creatures. (This is not the precedent we ourselves have set.) Since they are inconceivable, there is no way to know what this might be.

What if it’s true, and we act on it?

We will be the ants spelling out “Hello world” on the beach, much to the amazement of the people witnessing it. We may have an A.I. that tells us gently what it finds. We might just understand the existential terror and have time to escape or shore up defenses. We might advance our evolution to greater heights, or toy with the building blocks of life and destroy ourselves. There’s no clear positive or negative that’s implied.

Our tech will destroy us

If tech is the threat, the call to action is to take a much more careful approach to our technology. On one extreme, to adopt an Amish-ish approach and abandon it all. On the other, to carefully limit its capabilities, or to test each generation of it in sandboxes so it can be destroyed if necessary. Or to build in robust failsafes while we go whole-hog forward into our technological future. Or to roll the dice and hope one of our good technologies saves us from the self-destructive one.

What if it’s true, but we fail to act on it?

We are wiped out by our powerful technology.

What if it’s true, and we act on it?

We will keep a critical eye on not just the novelty features of tech, but its possible effects at the broadest scale, and consider that in our designs, use of technology, and policies. We’ll be careful with technology.

The set of possibilities

If, as I mentioned at the beginning of the post, we look at this from a strategic perspective, we should ask ourselves which of the possibilities we should keep thinking about, and encourage the kind of sci-fi storytelling that keeps us on track.

It is admittedly more craft than science.

To do this we would look for those hypotheses which offer both the greatest danger to avoid and the greatest opportunity on the far side, which leads us to three: Life is Rare, Natural Disasters, and Tech Will Destroy Us. Each of these has a deep, dark chasm if it is true but we fail to act, and a terrific upside if we manage to succeed, survival being chief among them.
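The chart above can be read as a simple filter. Here’s a toy Python version of that logic; the 0–3 scores are my own rough reading of the sections above, not anything rigorous (it is, after all, more craft than science).

```python
# Score each Fermi answer by the cost of ignoring it if true and the payoff
# of acting on it if true (both 0-3). Scores are my own rough reading of
# the post's analysis, not a rigorous model.

scenarios = {
    "Life is rare":         {"cost_if_ignored": 3, "payoff_if_acted_on": 3},
    "Fearful silence":      {"cost_if_ignored": 3, "payoff_if_acted_on": 1},
    "Prime Directives":     {"cost_if_ignored": 1, "payoff_if_acted_on": 2},
    "Zoo/simulation":       {"cost_if_ignored": 1, "payoff_if_acted_on": 1},
    "Logistics":            {"cost_if_ignored": 0, "payoff_if_acted_on": 0},
    "Natural disasters":    {"cost_if_ignored": 3, "payoff_if_acted_on": 3},
    "Tech will destroy us": {"cost_if_ignored": 3, "payoff_if_acted_on": 3},
}

# Keep the answers worth telling stories about: deep downside AND big upside.
worth_telling = [name for name, s in scenarios.items()
                 if s["cost_if_ignored"] >= 3 and s["payoff_if_acted_on"] >= 3]
print(worth_telling)
# -> ['Life is rare', 'Natural disasters', 'Tech will destroy us']
```

Change the scores and the filter spits out a different storytelling agenda, which is exactly the point: the strategy depends on how you weigh the chasms and the upsides.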

If we had to go further and pick a primary one from these, it seems that Tech Will Destroy Us carries the biggest threat of self-destruction, is the thing most under our control, and is the one whose solution may contribute most to successfully dealing with the others.

Then we have to note that, per my prior post, this isn’t the one sci-fi has told its stories about. We like to tell stories about Prime Directives. And this takes us back, in the next post, to Forbidden Planet.