On Thursday, I shared an essay arguing against the practice of Messaging to Extraterrestrial Intelligence (METI). For today, I’ve invited the most prominent advocate of the practice, Douglas Vakoch, to debate the merits of intentionally sending messages to intelligent life that may exist in outer space.
In the coming weeks, we’ll return to more terrestrial concerns: among others, I will discuss why so many journalists lie about their beliefs with Timur Kuran and share my conversation with the late political scientist and anthropologist James C. Scott.
If you don’t want to miss any of these conversations, please set up The Good Fight on your favorite podcasting app today—or, better still, become a paying subscriber to get ad-free access to all of our expanded content!
Douglas Vakoch is an American astrobiologist, extraterrestrial intelligence researcher, and the president of METI International, an organization devoted to transmitting messages to outer space. His books include The Drake Equation: Estimating the Prevalence of Extraterrestrial Life through the Ages and Archaeology, Anthropology, and Interstellar Communication.
In this week’s conversation, Yascha Mounk and Douglas Vakoch discuss the case for and against trying to communicate with extraterrestrial intelligence; why we have yet to discover evidence of extraterrestrial civilizations; and whether the history of human imperialism and our exploitation of the animal kingdom give us reason to worry about how technologically superior alien species might treat humans.
The transcript has been condensed and lightly edited for clarity.
Yascha Mounk: I'm really excited to talk about this set of topics, which is a little bit further afield for this podcast. But to start with, I’ve found that when I’ve mentioned this topic to friends over the last few days, while preparing for this conversation, the instinctive response is, “Have you gone a little bit nuts? Do you believe in little green men from Mars? What exactly are we talking about here?”
So before we get into the details of SETI and METI and so on, why is it that scientists take seriously or in many cases believe that it's likely that there is intelligent extraterrestrial life?
Douglas Vakoch: Well, the big case is that over the last 30 years, we have now discovered that there are planets orbiting virtually every star. That's not something we knew when the search for intelligent signals coming in was launched. Now we know they are everywhere. And we know the basic building blocks of life are scattered throughout the universe. So—a lot of real estate, a lot of the essential materials to make life. But then, the big unknown: does life arise? Does it evolve on other worlds? And does it evolve to the point of having the technology and the motivation to make contact? All along the way, there are a lot of steps that are necessary and it could go either way. So this is an open-ended search. I think there's often the belief that there are aliens out there trying to communicate with us. I wish it were that easy.
So in part, the setting of the search for extraterrestrial intelligence is a willingness to sit with ambiguity. And I think to answer this question, we have to be willing to do that.
Mounk: So how do we think about the basic probability of that kind of intelligent life being out there?
I see people throwing around probabilities in various ways: some say that it's phenomenally unlikely for intelligent life to develop, and that therefore we can't assume it exists anywhere in the universe close enough for us to communicate with. And then there are other puzzles and paradoxes that seem to rely on the assumption that it's actually very likely for extraterrestrial life to exist. And then the question becomes, well, why is it that the aliens haven't shown up yet? Why is it that they haven't communicated with us?
How do we actually try to get a handle on where the distribution of probabilities here lies?
Vakoch: So you raised two great questions. One is how do we put a number on how many civilizations might actually be out there trying to make contact? And then the second is how do we explain the fact that we still haven't made contact? That was a puzzle raised by the Italian physicist Enrico Fermi in 1950, and so it's called the Fermi paradox. So let's take the first one.
The first search for intelligence using radio telescopes was conducted in 1960 by Frank Drake, an astronomer educated at Harvard. He pointed a radio telescope at two nearby stars for about 140 hours and did not find any signals. What was unusual about that search, though, is that he used the same telescope other astronomers use to study the distribution of hydrogen throughout the universe; it's what astronomers use to really get an understanding of the cosmos at radio frequencies. But he did something different, because when hydrogen or any other chemical element in the universe emits radiation, it does so across a range of frequencies. It's not like when we transmit here on Earth, at one point on a radio dial. So Frank said, let's reverse that process and look for a signal that's like the signals we send to one another, but now coming from the stars (and that's what we've been looking for for almost 65 years now, and still we have found nothing).

A few months later he held the first conference on the subject, and at that conference he came up with what has been called the Drake equation. It's how we estimate how many civilizations are out there, and we do it by multiplying seven different numbers together. The first terms are more astronomical; as we go through, the equation becomes more biological, even sociological. First we have to know how many stars are out there and at what rate stars form (that was actually known pretty well in the 1960s and hasn't changed much). But the other six terms were really quite unknown in those early days. The second is: of the stars out there, what fraction have planets around them? Then, if you have a star system, how many of its planets are potentially habitable? Then, if you have a potentially habitable planet, on what fraction of those does life actually form? If you have life, on what fraction of those does it go on to become intelligent? If you have intelligent beings, do they create a technology that would let them transmit a message to us? And then finally, the biggest unknown: what is the longevity of the civilization? The longer they're able to transmit, the more likely it is that we will make contact. In fact, Frank Drake himself went around for decades with his summary of the Drake equation on his vanity plate. If civilizations last a thousand years, he said, there are probably about a thousand civilizations in our galaxy. So those are some of the ways we try to estimate how many civilizations are out there. But then we come up with this puzzling question: well, why haven't we made contact?
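To make the arithmetic behind that vanity plate concrete, here is a minimal sketch of the Drake equation in code. Every parameter value below is an illustrative assumption, roughly the optimistic round numbers often used for teaching, not a measurement; the point is only to show why, under such inputs, the estimate ends up tracking the longevity term.

```python
# Minimal, illustrative Drake equation estimate. All values are assumptions,
# chosen only to show why, under optimistic inputs, the estimate ends up
# tracking the longevity term L ("N equals L").
R_star = 1.0    # assumed rate of star formation in the galaxy (stars per year)
f_p    = 1.0    # assumed fraction of stars with planets
n_e    = 1.0    # assumed number of potentially habitable planets per system
f_l    = 1.0    # assumed fraction of habitable planets on which life arises
f_i    = 1.0    # assumed fraction of those where life becomes intelligent
f_c    = 1.0    # assumed fraction that develop technology able to transmit
L      = 1000   # assumed longevity of a transmitting civilization, in years

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # 1000.0 -- about a thousand civilizations if each lasts a thousand years
```

Swap in more pessimistic fractions for any term and N collapses toward zero, which is why, as the conversation turns to next, the equation is better at mapping our ignorance than at producing a firm number.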
Mounk: Before we get to that question, it seems to me that some of the terms in that equation are relatively knowable through astronomy, but many of them are deeply speculative. So, while this equation is a really helpful way of demonstrating what kinds of factors would drive both the number of intelligent species that may be out there and our likelihood of making contact with them, I don't know how much it actually helps us to narrow down the range of probabilities. What evidence do we really have about how long an average civilization is able to last at a high level of technological development? How do we really think about whether they would be likely to try to contact us in some way?
So how much does the Drake equation help us to actually get beyond the realm of the purely speculative?
Vakoch: I think, completely consistent with what you're saying, the Drake equation helps us identify where we are ignorant. The good news is those first couple of astronomical terms, we do have solid figures on that. We didn't have that 30 years ago. But we also have to look at our assumptions and I think that's one of the hardest things. So for example, the whole question of what percentage of civilizations are technological. That term also assumes that they have the motivation to use that technology to communicate with us. We're talking about SETI, the search for extraterrestrial intelligence, but that means we have to unpack everything that undergirds intelligence. That's biological, psychological, social. And so the more deeply we look into this whole challenge of trying to guess how many civilizations are out there, the more complicated we realize it is.
Mounk: So now we get to the Fermi paradox. If the number of planets is just vast, and it now seems that many of these planets have the kind of conditions where we might potentially have life, it's a really natural and provocative question to ask why none of them have made contact with us. Why haven't we actually spoken to them? But if the Drake equation is so speculative, if so many of its terms are open to being answered one way or the other, how do we know that the Fermi paradox really gets off the ground, as it were? Perhaps we should read it the other way around: the fact that no aliens have contacted us, the fact that we haven't found any of them, may in fact be circumstantial evidence for thinking that there just aren't that many intelligent species out there, or that they are so far away that we can't communicate with them, or that they're not motivated to communicate with us.
Vakoch: A lot of SETI scientists would say there's no paradox at all. When we say we have been looking for 65 years, that sounds very impressive, but the looking has happened in fits and starts. That early search was one astronomer looking at two stars at one single frequency, but there are billions of different frequencies. The good news is that with each passing year, technological and computational capabilities become all the greater. The searches now are a trillion times more effective than the one Frank Drake conducted. So we can continue to keep on looking, and I think we have a better chance of finding them. But the big thing we always need to step back and look at is whether our initial assumptions are correct. At the same time that radio SETI was launching in the 1960s, Charles Townes, who got the Nobel Prize for inventing the laser, said we could also be looking for laser signals from extraterrestrials. And his colleagues, the other SETI scientists, said, “That's ridiculous, Charlie. No civilization could develop that kind of power to send signals at optical frequencies.” But then, as the decades passed, by the 1990s we realized we could use off-the-shelf technologies to do signal processing very cheaply and look for laser pulses a billionth of a second long.
The challenge is to get over our own limited imagination about what's possible. And as you said at the beginning, we're not looking for aliens that have actually come to Earth. We don't think there are alien abductions. We need some hard evidence. So we've always needed to be grounded in scientific realities. But those scientific realities change as science progresses as well.
There has been tremendous development since those first SETI searches in the 1960s. But in the early days, the 1960s and 1970s, it was a handful of stars being looked at, across a limited range of frequencies. In fact, when NASA launched its SETI project, its goal was to look at 1000 nearby stars over the course of a decade. Now, 1000 stars is not a lot to look at. Even if you take some of the most optimistic estimates in the Drake equation, you have to look at a lot more than 1000 stars.
Mounk: And just for broad comparison, how many stars roughly are there in the universe?
Vakoch: Each galaxy has a hundred billion stars or so; that's what our Milky Way galaxy has. And there are billions of galaxies in the universe. So there are a tremendous number of stars. The challenge, of course, is that if we're talking about SETI, you want those stars to be close enough that we're going to be around when that signal comes in. And the further away you are, the weaker the signal becomes as it passes through space.
Mounk: If there are a hundred billion stars in a galaxy and billions of galaxies, I think it just shows how limited a range of search that is. So take us back to where we are in the overall trajectory of SETI. How much have we been able to listen to?
Vakoch: We've been able to listen to a few tens of thousands of stars right now, but again, that is accelerating with every passing year. So within the next decade, we will have looked at a million stars within our galaxy.
And it really relies on two things: technological advances and funding. We have now secured funding through philanthropy, and that combination will let us search more than we have over the previous 60 years.
Mounk: And so to go back to that question, what is the case for doing this?
Vakoch: The case for searching for other intelligence is to try to really calibrate our place in the universe. Some have said that this would be the next big revolution. We've had a number of realizations that humans aren't quite so central in the universe as we initially thought. We got it 400 years ago with Copernicus when we realized the Earth isn't at the center of the universe. We got it in the 19th century with Darwin when we realized that humanity is not at the apex of evolution. We're just down the middle of one branch somewhere. We got it with Freud when we realized that—I don't know if we got it with Freud.
Mounk: Well, Freud claimed that Freud was the third humbling of humanity. And I think Freud might want to lie down on a couch and have himself examined for narcissism for claiming that he is the inventor of a third humbling. I sort of believe AI could, in certain ways, be the third humbling.
Vakoch: Yeah. I think it depends on what happens with AI, whether connecting with extraterrestrials would be our third or our fourth humbling. But I think that is what the impact would be to say we are not the only technological civilization in the universe.
But the purpose is not just to be humbler. There are probably more direct ways to do that than through astronomy. It is a chance to have some other way of calibrating ourselves. We learn a lot by comparing ourselves with peers, and this would be a different kind of peer, one that is, in many ways, more capable than we are. Because the civilizations we make contact with are not going to be civilizations at our level. They're not going to be the same age, if we think of that longevity term of the Drake equation. The reason is that if the norm in the universe is for a civilization to be capable of interstellar communication only for as long as we have been, about 100 years (that's how long we've had radio), then civilizations are going to be so few and far between that we will never make contact. The only way we're realistically going to make contact is if civilizations live for thousands, even millions, of years. Otherwise, if they live a hundred years and then either annihilate themselves in a nuclear war or a climate crisis, or simply become contemplative and turn inward, then what are the chances that their hundred years and our hundred years are going to coincide, given the 13-billion-year history of the universe? It's not going to happen. So the only way we're going to make contact is if the other civilization is much older than we are. And that means, yes, it would be a peer in the sense of an independently evolved intelligence with the technology, but we're going to be the younger partner in that comparison.
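A rough back-of-the-envelope calculation shows why that overlap matters. The numbers below are purely illustrative assumptions drawn from the framing above (hundred-year radio eras scattered at random across roughly 13 billion years), not a model of real galactic history.

```python
# Rough, illustrative estimate: if two civilizations each have a ~100-year
# "radio era" placed at random moments within ~13 billion years of cosmic
# history, how likely is it that the two eras overlap at all?
history_years = 13e9   # assumed span of time in which civilizations could arise
radio_era     = 100.0  # assumed length of each civilization's transmitting window

# For two short intervals dropped independently into a much longer span,
# the chance they overlap is roughly twice the interval length over the span.
p_overlap = 2 * radio_era / history_years
print(p_overlap)  # ~1.5e-08, effectively zero: contact requires civilizations
                  # that last far longer than ours has so far
```

On those assumptions, the chance of two short-lived civilizations coinciding in time is vanishingly small, which is why any civilization we actually hear from is overwhelmingly likely to be far older than ours.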
Mounk: And the key premise here is that since we are just at the very beginning of being able to communicate with these kinds of species, we presumably would be at the relative beginning of the window of a certain level of technological development. So it would be very, very unlikely that this other civilization happens to be at the same beginning stage as us. And if they've been at that stage for much longer, then presumably they've continued to grow their technological arsenal.
Vakoch: That's right. And it could be that most civilizations are younger than we are, less capable than we are, live in some sort of a stone age, and never develop radio technology. But if that's the case, those are civilizations we can't make contact with. We don't have a spaceship to hop in and travel to their star system. If we don't get a radio signal, we're not going to know they exist.
Mounk: So I'm fascinated by SETI, and I see the normative case for it: both because I think it's important to discover truths about the universe, and it would be a fascinating and important truth if it could be proven that all of these extraterrestrial forms of life are out there, and because, if they are out there, we probably want to know what they're saying to us. It might be helpful to know what they're trying to communicate to us. I take it that so far all these SETI attempts have been in vain; we don't have any clear communications from extraterrestrial civilizations. They've left us empty-handed so far.
Now, you are among a set of scientists and activists at the forefront of developing something that goes beyond SETI, and that is METI—messaging to extraterrestrial intelligence. So rather than simply being the passive recipient of messages, you're saying we should advertise our existence to them. We say, here we are on Earth, we are an intelligent species, so as to get into contact with them proactively. I have to say that while I am fully on board with SETI, I am rather more skeptical of METI. So I'm looking forward to getting into the pros and cons of that. But why don't you start by explaining why you think METI is an important activity to engage in and why, in normative terms, there's a justification for doing so.
Vakoch: Well, I think the same motivation of SETI—of trying to understand our place in the universe, “are there other civilizations out there?”—is what motivates us with METI. But what we're doing is expanding the range of scenarios where we might actually discover them.
One of the big questions is whether, even if there are civilizations out there, as SETI hopes, they are doing what SETI scientists want them to do, which is to send us intentional signals. Now, it's important to be clear: at this point in the development of our SETI technology here on Earth, we are not able to pick up the kind of accidental TV and radio signals that we have been sending out since the advent of radio and television. We need a signal that is directed at us, a more powerful signal. And that's going to change markedly within just a few decades. A couple of hundred years from now, if our radio technologies keep on improving, we will have the technology to pick up our own kind of weak leakage radiation out to 500 light years. So a very long distance. But we don't have it yet.
Sometimes people think of this in terms of game theory: one party, an interactant, takes an action, and the simplest strategy for the other is simply to repeat what the first has been doing. So if someone transmits a message, maybe you transmit back, and you have this interstellar conversation going. But the problem with these game-theoretic notions is that you have to have someone who's willing to start playing the game. And we've always said in the past, well, let the extraterrestrials do the transmitting. If they've been at this, if they're much older than we are, they can afford to do this. They have the patience to do this. So let's put the burden on them. Now, I'm all for that, if the extraterrestrials will accept the burden (maybe that's not how they view it).
Maybe what they're doing is thinking, okay, so you people there on Earth want to make contact? Well, show us that you've got some skin in the game. You send us something and then we'll think about reciprocating. So that's the motivation of METI, messaging extraterrestrial intelligence: to open up a new scenario for making contact. We think we should still be doing SETI, because there may be civilizations that will transmit without any initiation on our part. But maybe there are some civilizations that won't. Those are the civilizations we're hoping to convince to reach back out to us, if they've just been monitoring but not transmitting so far.
Mounk: And presumably some of the hope here is that if those species are more technologically advanced than us, then we might be able to learn from them in some way, get access to technologies that could improve life, or perhaps even learn from their political institutions, their cultural innovations, and so on. I do think, though, that at this point it's worth discussing some of the potential negative outcomes. Many people who listen to this podcast may have seen The Three-Body Problem, or may have read the book by Cixin Liu, which is, in a sense, built around METI.
Now, you don't need to believe in that kind of science fiction to worry about what a species that is not human, and that is more technologically advanced than us, might do to us. And the obvious reasons for that lie in human history. When we have contacted primitive tribes, that has often led to extermination. We have often exploited them in terrible ways. Sometimes we have inadvertently killed many of their members simply through the spread of disease.
So it seems to me that the history of our own species on planet Earth gives us pretty strong reason to think that there's at least a serious possibility, a serious danger, of the kinds of extraterrestrial life we might get into contact with exploiting us rather than sending us their great wisdom and allowing us to make great technological progress.
Vakoch: Well, I think those analogies are wonderful ones: Europeans came to the New World, and it did not turn out well for the indigenous people, which is something Stephen Hawking pointed out. It's also important to note that the indigenous people did not send out an invitation to Columbus. Columbus simply came. And so it's not because of anything that the native inhabitants of the Americas did or didn't do. It was explorers who wanted resources, and they took those resources, and sometimes intentionally, sometimes inadvertently, it turned out very poorly for the people who had been here before. But I think it's easy to use that analogy, and the challenge is that when people apply it to METI they often think, “You know, it's a dangerous forest out there. So simply be quiet. If you don't know what kind of animals lurk out in the forest, you need to be quiet.”
It's comforting. And especially when we have so many crises, so many things outside of our control in the world today, it would be nice to say that here is one existential threat that we have some control over. But I want to level with you: we can't be quiet. It's too late. We've already blown our cover. Those radio and TV signals that we've sent can't be detected by a civilization at our own level of technology, as I mentioned; so if there's a duplicate, an Earth 2.0 out there with the same level of technology we have, then we could let them know for the first time that we're here by sending an intentional signal. But any civilization a couple of hundred years more advanced than we are already knows we're here. What they don't know is that we want to make contact. And so I think that the thought that transmitting will in itself reveal our existence for the first time simply isn't justified when you look at the kind of civilizations that matter.
Mounk: So I'm not a physicist, right? And so I'm repeating arguments I've read here and I'm aware of that. But in my understanding, there's a huge difference between the kinds of signals that we've inadvertently been emitting over the course of the last hundred years and which could be picked up by a very advanced civilization at the relevant distance but are not directed towards them. And so my understanding is that they would have to really be looking in this particular direction with very powerful technology.
What you're trying to do with METI is to emit in a much more concerted manner, so that civilizations that are far further away from Earth, or that are less technologically developed, would also be able to pick those signals up. So my understanding is that there is a huge difference between, on the one hand, there being some stray emissions (such that a relatively nearby civilization that is super technologically advanced may already know we're here) and, on the other, saying, “All right, because there's this possibility that we've already been discovered, let us now deliberately send out signals that vastly increase the likelihood of being discovered by civilizations that may be much further away, or at an intermediate stage of technological development, well beyond where we are but perhaps not so advanced that they can tune in to my favorite French radio station from 70,000 light years away.”
Vakoch: And I think the examples that you provide make a compelling case that often, when people start looking at the details, they realize they're not simply pro-METI or anti-METI; there are some scenarios where they might say, “OK, I could see how that might be safe, but here's something that would be really quite dangerous.” And I think even the strongest advocates for METI would be appalled at some sorts of messages going out there. But let me take your example of a concerted, powerful message. Now, in both cases—whether they're picking up our accidental leakage radiation or whether we're targeting them—they need to be looking in this direction. Maybe an analogy would be yelling out into a room: someone doesn't have to actually be looking in our direction to hear that. But with SETI or METI, they really do need to be looking in our direction. Still, you make the nice case that maybe there's someone out there a thousand light years away that wouldn't otherwise know of our existence, but now does because of the powerful signal. And so we are now, to that civilization very far away, revealing our existence for the first time. I would agree with that. It's just not actually the most interesting star for us. The last thing we want to do is to transmit to a star a thousand light years away and have to wait two thousand years for a message to get back.
Mounk: We're firmly in the realm of speculation here, but I'm responding to a point that I think is essentially speculative. There may be thousands of planets that have some form of sub-technological life, and you may not be able, and it wouldn't be cost-effective, to go around annihilating all of those. So it's quite plausible to think that if you worry about competition from other technologically evolved species, you would wait until they reach the point where they're able to engage in things like radio technology and television signals. And at that point, you want to make sure that they don't get to a level of technological development where they rival you.
Vakoch: One of the things that both of us have been repeatedly saying is, “Well, okay, here's a scenario. It's highly speculative, of course.” And that is a stark reminder that we need to be aware of our cognitive biases as we make judgments about riskiness in scenarios where there's inherently limited information. There are a number of go-to responses in estimating risk that would naturally make us opposed to METI. One of those is the omission bias, the assumption that, in general, it's riskier to do something than not to do something. But of course, as we've seen over the last few years, that's not always the case. You might have had the omission bias when the coronavirus came around and you were given the opportunity for a vaccination, and you said, well, I'd rather not do anything and see how it turns out; in that case, for most people, it was an advantage to take action. But we do have a tendency to say it's riskier to transmit than not to, even though, again, the idea that a civilization may want to identify the civilizations that become problems is itself a speculative scenario.
We're talking about what happens when two civilizations come into contact—and if it's going to be a Hollywood blockbuster involving contact between civilizations, it's probably not going to turn out very well. All of these images and analogies, which in some ways really are poor analogies when we look more closely, nevertheless influence our estimates of how likely this is to turn out well or poorly. That's called the availability heuristic: the images that are most available, that come to mind most easily, are the ones that really have an impact on us. And then finally, loss aversion. If we think of what our benefits could be, or what our losses could be, we would rather avoid losses than secure gains, even if the gains would outweigh the losses. So we are hardwired cognitively to be cautious about something like METI. And I think it's important to be attuned to our gut-level reactions and then, in parallel, have the kind of in-depth, point-by-point analysis that you and I are having today.
Mounk: How should we reason about risk in circumstances where we have great trouble assigning probabilities to different kinds of outcomes, and perhaps even trouble knowing what the range of outcomes might be? Now, I take your point about loss aversion. It is well established that people would rather forgo the opportunity to make $100 than accept the risk of losing $100. But of course, here we are dealing with two outcomes, both of which are limited in scope. Which is to say: it would be great to have $100, it would be painful to lose $100, and we weigh losing $100 more heavily than winning $100, and perhaps that's economically irrational.
But we do know that neither of those outcomes is going to completely transform or completely screw up our lives. I think it's quite different even when you reason about whether, in an enchanted forest, you should eat some kind of random berry. If you're about to starve to death, you should probably eat the berry. But if you have enough food, if you know you're not going to starve, there's some risk that this bright yellow berry might turn out to be poisonous. I would say that it isn't a matter of cognitive bias, of loss aversion, to say this fruit isn’t really worth the risk of me dying from trying it. And for a similar set of reasons, I would say that taking this kind of unknown-unknown risk is a bad idea for humans.
There's plenty of suffering from natural diseases like cancer, from wars and poverty. But in my estimation, most human beings in the world are leading worthwhile lives. And undoubtedly, in the best-case scenario, some form of extraterrestrial life could improve the lives we are leading, could make us more prosperous and more healthy. But it seems to me that under those circumstances, the potential upside of a better life is not worth the risk of annihilating 8 billion worthwhile life prospects. How do you think about that?
Vakoch: I was at a conference in Bern, Switzerland, about the ethics of space exploration, and I recall that one of the participants made an argument similar to that. Another participant, from a less developed part of the world, leaned over to me and said, “Did he just say that most people are living good lives?” Because her experience was that quite a few people are living in conditions of quite dire poverty.
A number of years ago I did a survey asking people broadly, lay people from countries around the world, what they would want to say to another civilization. One of the strongest messages that came out was “Please help.”
So the concern was not about maintaining our present conditions. As a number of people unpacked it in more detailed messages, we are facing some serious environmental and social problems, and the concern is: are we as a species going to be able to stick around as the civilization we know right now? Are we going to be able to maintain that? And so you could say, given the trajectory humankind is on right now, given what we're doing to our environment, maybe the comparison is not “we have this great stability to lose” but “what risk are we willing to take in order to get some lessons from a civilization that truly has gotten a handle on stability by sticking around for thousands or millions of years?” I think that's one way to frame the trade-off. So I don't think it's at all obvious that it is inherently riskier to reach out than not to. It depends on what the payoff could be. And again, the difficulty is that we don't know.
In the rest of this conversation, Douglas and Yascha discuss how bad the life circumstances on Earth would have to be to justify rolling the dice on METI, and Douglas gives his reading and viewing recommendations for those interested in learning more about METI. This part is reserved for paying members...