In the first post I gave an overview of the Fermi question and its hypothetical answers. In the second, I reviewed which of those answers sci-fi tends to tell its stories about. In this post I compare the costs of acting on each answer.
Which should we be telling stories about?
Sci-fi likes to tell stories about the Prime Directive Fermi answer. But is it the most useful answer? Keep in mind that most of us are not working in space programs. For us, sci-fi is less a direct inspiration to go build the most kick-ass rocketship we can than a way to inform how we think about and support the space program culturally and politically. With that in mind, let’s spend a little bit of time talking about the effects of confronting each hypothesis in our sci-fi. To be able to compare apples to apples, let’s apply the same thinking to each.
- What would be the call to action (if any) if this hypothesis is true?
- What if this is true, but we fail to act on it?
- What if it’s true, and we do act on it?
Warning: This will be long, but if we’re thinking strategy, risk aversion, and opportunity maximization (as we are) we have to be thorough.
Life is rare

These stories tell us to not get our hopes up about thrilling tales of space imperialism. We need to get our shit sorted, since, no, we won’t have peace treaties with Romulan Sith, but we will have our hands full dealing with our own worst natures and the weirdness of natural space problems like black holes and special relativity. While we go about this, we should take advantage of this freakish circumstance by protecting life for the precious thing it is.
What if it’s true, but we fail to act on it?
We squander life’s only chance, fail to protect ourselves or the network of life on which we depend, and die out. Extinction isn’t a guarantee, but the risk is much greater.
What if it’s true, and we act on it?
Then we ensure our (and all life’s) survival, escape the planet before the sun goes red giant, and try to colonize the galaxy to increase life’s chances out there.
Fearful silence

I’ll lump the physical and the informational threats into one discussion bucket, because they serve as similar dire warnings. They tell us that we need to keep quiet and/or deliberately deaf until we know what’s out there, and to build strong offense and defense capabilities for when something does show up.
What if it’s true, but we fail to act on it?
We could be advertising our tender, tasty flesh to the nearest thing that would treat us like its personal fast food depot. Or we could be broadcasting our picturesque and utterly defenseless natural resources. And it is very much in our interest to keep those things intact.
What if it’s true, and we act on it?
You might think that we can shut up and stay hidden while we protect our defensive and maybe even offensive capabilities. The bad news is that ship has sailed. Not only have we shot out a few calling cards voyaging into the void, we’ve been leaking radio emissions for the better part of a century. That spherical announcement will continue through the universe for a long time. Even before humans evolved, our atmosphere was announcing the presence of life through signature biogasses. If there’s a hyperadvanced superpredator out there, they already know about us, and we don’t have the time scales, species coordination, or resources to do anything other than beg forgiveness when they get here.
OK. If we put all our efforts into offense and defense we might slightly increase the odds or duration of our survival, but the odds are very much against it. We should hope that this Fermi hypothesis is unlikely.
Prime Directives?

Any of the Prime Directives call us to keep striving, inventing, maturing, evolving, and exploring. One day we’ll figure out or accidentally pass the test and BAM—we’ll be having space adventures and chuckling about how long it took us.
What if it’s true, but we fail to act on it?
We continue to be isolated, ignorant, and alone, an embarrassing backwater species unable to pick itself out of the blood, poop, and mud.
What if it’s true, and we act on it?
Since the exact nature of the Prime Directive is unknown to us, our action in this scenario is just to keep at it, performing well and behaving well for our invisible observers. We should improve our advertising: demonstrate our achievements, knowledge, moral fiber, and compatibility with alien life. Eventually, we pass the test, the universe opens up to us, and we finally get to taste Pan Galactic Gargle Blasters.
Zoo/Planetarium/Disguises

I’ll lump these three together because in each case, there’s a “reality” under the surface of things we’ve yet to uncover. But it’s worth noting that each implies we were either put in this circumstance or deliberately kept here. The call to action for us is to continue as we have been, but be prepared for the nasty shock when it comes. Perhaps the call to action is to try to find the seams of our cage, to prove the nature of our reality, and to identify and maybe learn to communicate with our captors.
What if it’s true, but we fail to act on it?
Failing to act in this case is to…what…not seek out the truth of our reality?
What if it’s true, and we act on it?
Then we look for the cage, the disguises, or the containing display. Maybe we even escape. But when the true nature of reality is revealed, we’re going to have a very sobering moment. Maybe it will be akin to Dave Bowman having his mind blown by the monolith at the end of 2001.
But we should ask ourselves: What happens when a dangerous animal escapes its paddock at the zoo? At the very best, the animal is sedated and put back in its enclosure. Maybe with its collective memory wiped? Worse, the animal escapes to the wild, where it learns it has zero of the skills necessary to survive there, even though instinct drove it there. In the worst case, the animal is killed to protect the visitors, or to prevent the rest of the zoo from catching on. It might be that it’s in our best interest to stay in the pretty and utterly safe fishbowl.
Logistics
I don’t know that the category of logistics means anything in this context. If there are genuinely logistical reasons we haven’t found aliens, then it’s unlikely our efforts will overcome those reasons any better than civilizations much more advanced than us have. So the call to action is that it doesn’t matter whether there are aliens, because we’ll never encounter them. Then we shift into a Life is Rare circumstance.
Natural disasters

We know these happen; the geologic record tells many tales of catastrophic terrors long before the modern anthropogenic one, some worse even than the Chicxulub impactor that killed off the dinosaurs. Like most disaster porn, this can be the unifying force humanity needs to band together and figure out a way around, out, or through it. The call to action is for us to get a much more robust sensor network in place and to have scenarios plotted out in advance, with actionable and tested contingency plans for each one. It also implies colonizing the galaxy so all our eggs aren’t in this single planetary basket. Maybe even creating a panspermia technology all our own.
What if it’s true, but we fail to act on it?
We might get blindsided. We might defund (or continue underfunding) the astronomy initiatives that keep an eye out for just these things, or be too scientifically undereducated to manage a response. We could be wiped out.
What if it’s true, and we act on it?
We invest in research, sensors, and defenses such that we can detect and stop the threats to our existence. I’m not sure it will be oil riggers suddenly trained for space travel. But we will be protected. Hopefully this does not come at the cost of exploration, since one of the things we are protecting against is the sun’s red giant phase.
They are inconceivable

If aliens are inconceivable, what is the call to action? It could be to continue forward but be prepared, as we should be with the Zoo hypothesis, for a rude awakening. Another might be to try to accelerate our own evolution so that we might become able to conceive of them, but since it’s impossible to know what we’re hoping to conceive of, that seems directionless. Another might be to keep building something bigger than ourselves that might be able to perceive them, like a super artificial intelligence.
What if it’s true, but we fail to act on it?
It might be mundane, like getting our planet paved over for an interstellar bypass. It might be terrible, winding up in the giant maw of the Space Angler Fish, or under the magnifying glass of the terrible Space Pre-Teen. It might be euphoric, if they have a policy of kindness to lower-order creatures. (This is not the precedent we ourselves have set.) Since they are inconceivable, there is no way to know what this might be.
What if it’s true, and we act on it?
We will be the ants spelling out “Hello world” on the beach, much to the amazement of the people witnessing it. We may have an A.I. that tells us gently what it finds. We might just understand the existential terror and have time to escape or shore up defenses. We might advance our evolution to greater heights, or toy with the building blocks of life and destroy ourselves. There’s no clear positive or negative that’s implied.
Our tech will destroy us
If tech is the threat, the call to action is to take a much more careful approach to our technology. On one extreme, to adopt an Amish-ish approach and abandon it all. On the other, to carefully limit its capabilities, or to test each new generation in sandboxes so it can be destroyed if necessary. Or, another: to build in robust failsafes while we go whole-hog forward into our technological future. Or to roll the dice and hope one of our good technologies saves us from the self-destructive one.
What if it’s true, but we fail to act on it?
We are wiped out by our powerful technology.
What if it’s true, and we act on it?
We will keep a critical eye on not just the novelty features of tech but its possible effects at the broadest scale, and consider those effects in our designs, uses of technology, and policies. We’ll be careful with technology.
The set of possibilities
If, as I mentioned at the beginning of the post, we look at it from a strategic perspective, we should ask ourselves which of the possibilities we should keep thinking about, and encourage the kind of sci-fi storytelling that keeps us on track.

To do this we would look for those hypotheses which offer the greatest danger to avoid and the greatest opportunity on the far side, which leads us to three: Life is Rare, Natural Disasters, and Tech Will Destroy Us. Each of these has a deep dark chasm if it is true but we fail to act, and a terrific upside if we manage to succeed, survival being chief among them.
If we had to go further and pick a primary one from these, it seems that Tech Will Destroy Us carries the biggest threat of self-destruction, is the thing most under our control, and is the one whose solution may contribute to successfully dealing with most of the others.
Then we have to note that, per my prior post, this isn’t the one sci-fi has told its stories about. We like to tell stories about Prime Directives. And this takes us back, in the next post, to Forbidden Planet.
I love these questions. How about a technological singularity?
A technological singularity will either be “Our tech will destroy us” or be the bridge past, say, an Anthropocene extinction event, yes? Or did you mean something else?
Ah, Anthropocene extinction would fit. I was thinking along the lines of a technological assimilation/evolution of the species. If it was common, we could be mistaken in looking for biological life on other planets. Love the blog!