Why Don't We See Signs of Extraterrestrial Life? by William Poundstone

The Milky Way by Tom Hall (CC2 license)

“Where is everybody?” Enrico Fermi, physicist and Nobel Laureate, asked that question at Los Alamos, New Mexico, in 1950. He was referring to extraterrestrial life. Why aren’t ETs visiting the Earth in spaceships? Fermi had done the math and concluded that there ought to be many intelligent species in our galaxy. Some would have technologies far beyond our own. They’d have mastered interstellar travel. So where were they? None of Fermi’s colleagues could say.

The question has resonated down the decades. It is the general opinion of biologists—and screenwriters—that ETs are out there. Since Fermi’s time astronomers have searched the skies for radio signals from ETs. In every case they’ve come up empty.

One man may have the answer. He is J. Richard Gott III, a Princeton astrophysicist. In a 1993 paper in Nature, Gott invoked the Copernican principle, a foundation of astronomy. This says that our situation in the universe is unlikely to be special or privileged. 

Gott applied this idea to Homo sapiens itself. Suppose that we are not so different from most ETs. Then a typical ET species is not visiting us in spaceships for the same reason we’re not visiting them: They/we don’t have any interstellar spaceships (yet?).

We have a limited Search for Extraterrestrial Intelligence (SETI) effort that listens for signals. But we don’t broadcast any signals at the high power suited to reaching across interstellar distances. Maybe most ETs are like us, listening but not sending (“lurking”).

Now of course it’s true that intelligence might have evolved elsewhere billions of years before it did on Earth. ETs could be that much more “advanced.” But, says Gott, let’s not jump to conclusions about what that means. Those old civilizations may have advanced themselves all the way to extinction. We have not one iota of evidence that interstellar travel and multi-million-year civilizations are common things.

Gott proposes that there are ETs out there, but long-lived, technologically advanced species may be rarer than we think. Perhaps most ETs do not, after all, go on to explore many planets, achieve immense populations, and proclaim their existence to the whole universe. We’ve got a nice planet here. For most of the universe’s species, that may be all there is.

Yes, but what about the Drake equation? Devised by radio astronomer Frank Drake in 1960, it is the classic attempt to estimate the number of intelligent extraterrestrial species. Drake proposed that the number of ET species in our galaxy equals the product of seven unknowns:

(1) how many stars come into existence in our galaxy per year;

(2) how many of those stars have planets;

(3) how many planets exist in a typical star system;

(4) how many such planets develop life;

(5) how many of those life-bearing planets evolve intelligent life;

(6) how many intelligent species broadcast radio signals [or otherwise reveal their existence];

(7) the average lifetime of communicating intelligent species.
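To make the arithmetic concrete, here is a minimal sketch of the equation as a product of those seven factors. The numbers are placeholders of my own, chosen to land near the low end of Drake’s range; they are not his actual inputs.

```python
# The Drake equation: N is just the product of the seven factors.
# Placeholder values for illustration only, not Drake's own inputs.
R_star = 1.0     # (1) new stars formed in the galaxy per year
f_p    = 0.5     # (2) fraction of stars with planets
n_e    = 2.0     # (3) potentially habitable planets per star system
f_l    = 1.0     # (4) fraction of those planets that develop life
f_i    = 1.0     # (5) fraction of life-bearing planets that evolve intelligence
f_c    = 0.1     # (6) fraction of intelligent species that reveal themselves
L      = 10_000  # (7) average lifetime, in years, of a communicating species

N = R_star * f_p * n_e * f_l * f_i * f_c * L
print(N)  # about 1,000 ET civilizations with these placeholder numbers
```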

There is uncertainty in all the factors but above all in (4) through (7), as these entail speculation about ET biology, history, and motivations. In 1960 Drake and colleagues estimated the number of ET civilizations in the galaxy as between 1000 and 100 million.

Obviously the error bars were wide. Yet even the low-end estimate of a thousand ET species is, well, “a lot.” The notion that there are probably many, many ET species has been with us ever since.

In my book The Doomsday Calculation I talk about Gott’s answer to the Fermi question. I mention a provocative 2018 paper by Anders Sandberg, Eric Drexler, and Toby Ord, all of the Future of Humanity Institute, Oxford. The Oxford group notes that in the Drake equation we are not just multiplying seven unknowns; we are also multiplying the uncertainties in those unknowns. Any Drake estimate inherits all these uncertainties. 

Sandberg and colleagues collected estimates of Drake equation factors that had appeared in the scientific literature. They then did a computer simulation in which the value for each factor was chosen by drawing randomly from among the published estimates for that factor. Seven chosen factors were multiplied together to get a virtual Drake estimate, and this process was repeated many times to reveal the range of variation. The Oxford group found that the resulting estimates varied by over 40 orders of magnitude. Much of this was due to factor (4), the probability of life originating. Drake and colleagues believed this was a near-certainty. But in the decades since, some scholars have argued that this confidence is unjustified. Until and unless we find life elsewhere, we can’t rule out the possibility that the origin of life requires a highly improbable molecular accident.
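Here is a toy version of that procedure, to show the mechanics. It is not the Oxford group’s code, and the lists of “published estimates” below are invented placeholders; the point is simply that each run picks one published value per factor at random, multiplies the seven together, and the spread of results becomes enormous.

```python
import random
import statistics

# Invented stand-ins for the published estimates of each Drake factor.
published = {
    "R_star": [1, 3, 10],          # star formation rate per year
    "f_p":    [0.2, 0.5, 1.0],     # fraction of stars with planets
    "n_e":    [0.1, 1, 5],         # habitable planets per system
    "f_l":    [1e-30, 1e-3, 1.0],  # probability that life arises (hugely disputed)
    "f_i":    [1e-9, 0.01, 1.0],   # probability that intelligence evolves
    "f_c":    [0.01, 0.1, 0.2],    # probability of detectable technology
    "L":      [100, 10_000, 1e7],  # lifetime of a communicating species, years
}

def one_virtual_estimate():
    """Draw one published value per factor at random and multiply them."""
    n = 1.0
    for values in published.values():
        n *= random.choice(values)
    return n

estimates = [one_virtual_estimate() for _ in range(100_000)]
print("mean N:   ", statistics.mean(estimates))
print("median N: ", statistics.median(estimates))
print("runs with N < 1:", sum(e < 1 for e in estimates) / len(estimates))
```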

It’s hard to wrap your head around the level of uncertainty that the Oxford group is talking about. But let’s try. Suppose you are asked to guess the wealth of the next person you will pass in the street in Manhattan. Well, New York is a rich city—with a lot of income inequality. The mean wealth in Manhattan is much bigger than the median, for a handful of billionaires raises the average. The small billionaire population does not much affect the median, the value such that half of Manhattan’s residents have less wealth and half have more. And there’s a fairly large chance that the next person you pass in the street will be broke! There are a lot of poor people in Manhattan too.
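A toy calculation with made-up numbers shows the effect:

```python
import statistics

# Nine modest (made-up) net worths plus one billionaire, in dollars.
wealth = [0, 5_000, 20_000, 40_000, 60_000, 90_000,
          150_000, 300_000, 700_000, 2_000_000_000]

print(statistics.mean(wealth))    # about $200 million, dominated by the billionaire
print(statistics.median(wealth))  # $75,000, barely affected by the billionaire
```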

Drake equation estimates involve a similarly skewed distribution, only it’s unimaginably greater. According to the Oxford simulation, the estimated mean number of ET civilizations in the galaxy was a generous 53 million. However, the median was only 100. And because of the spread-out distribution, the Oxford group found that 30 percent of virtual estimates had zero ETs in the galaxy. Indeed, about 10 percent of the estimates implied that ET civilizations are so rare that there are unlikely to be any in the observable universe.

Accept this, and we have no reason to be surprised at the absence of ETs. We may well be alone in the galaxy or even the universe, and it’s not out of line with what scholars currently believe. Where is everybody? The Oxford group’s answer is “probably extremely far away.”

The Doomsday Calculation: FAQ by William Poundstone

I have found that a good way to write a popular science book is to find a topic on which many smart people disagree. That describes my new book coming out June 4. It’s called The Doomsday Calculation: How an Equation that Predicts the Future Is Transforming Everything We Know About Life and the Universe. In this post I’ll describe it by answering some of the questions I’ve gotten in the two-plus years I’ve been conceiving/researching/writing it.

Q. What is the “doomsday calculation”?

A. It’s a famous paradox, generally known as the doomsday argument (“calculation” seemed to make a catchier title, though). The doomsday argument says that we can use our current position in time to make a statistical prediction about how long humanity will survive into the future. It’s not some Nostradamus or Mayan calendar forecast with an exact day and time. It’s rather a probability distribution saying (for instance) that there is a 50 percent chance that humans will become extinct within the next 760 years.

Q. How is a prediction like that even possible?

A. The doomsday argument uses Bayesian probability and the notion of “self-sampling”—that I may regard myself as a random human out of all the humans who have or will exist. By examining the doomsday controversy, I touch on deep questions about the nature of evidence and probability.
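To see how self-sampling turns into a number, here is a back-of-the-envelope sketch using round figures of my own (roughly 100 billion humans born to date and about 130 million births per year). It lands near the book’s 760-year figure, though it is not the book’s exact calculation.

```python
# Self-sampling, back of the envelope. Illustrative round numbers only.
past_births = 100e9        # rough count of humans born so far
births_per_year = 130e6    # rough current global birth rate

# If I am equally likely to be anyone on the full roster of humans,
# then with 50 percent probability I am in the second half of that roster,
# meaning no more humans remain to be born than have been born already.
future_births_50 = past_births
years_left_50 = future_births_50 / births_per_year

print(round(years_left_50))  # roughly 770 years at a constant birth rate
```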

The doomsday argument has been advanced by astrophysicists J. Richard Gott III, Brandon Carter, and Holger Bech Nielsen, and philosopher of science John Leslie. It’s also been attacked as a fallacy by an even larger group of no less prominent names. The book is the story of an intellectual catfight—and an attempt to sort it out.

Q. Do you actually believe these doomsday predictions?

A. Like almost everyone else, I started out thinking the doomsday argument is completely nuts. That is, I saw it as a classic paradox, a seemingly logical argument that leads to a conclusion so absurd that there has to be an error. I now believe that there is no “error” in Gott’s version of the doomsday argument. Given its modest assumptions, its doomsday predictions are valid statements of probability.

One point I make in the book is that doomsday predictions are not actually that shocking. Anyone who thinks seriously about the risks of nuclear war, bioterrorism, climate change, and AI may feel that even odds on surviving another 760 years is pretty good!

Q. Sounds kind of dark?

A. Not really. The book was a lot of fun to research and write, and I think that comes through. This is an idea that potentially has many applications other than predicting the end of the world. Gott has used doomsday math to predict the runs of Broadway plays, the fall of the Berlin Wall, and the ends of celebrity marriages. It can also be used to address cosmic mysteries: Why haven’t we found signs of extraterrestrial intelligence? Are we living in the real world or a Matrix simulation? Is this world all there is, or is it part of a multiverse? These are perhaps the most significant applications of the idea. Finally, it should be mentioned that Google, Instagram, Facebook, and Twitter use similar (Bayesian) methods to predict what users will click on, buy, or vote for. The doomsday argument is not just a philosophic conundrum. We’re living in a doomsday world.

A Brief History of Fake News by William Poundstone

In 2015 Jay Branscomb posted on Facebook a photo of Steven Spielberg sitting next to a dead triceratops. It was a Jurassic Park publicity image. Branscomb added the caption: “Disgraceful photo of recreational hunter happily posing next to a triceratops he just slaughtered. Please share so the world can name and shame this despicable man.”

The post was shared more than thirty thousand times, racking up thousands of outraged comments. Outraged, that is, that Spielberg had shot a dinosaur. Sure, many were playing along with Branscomb’s fun, but I am confident that much of the indignation was sincere.

Lately fake news is big news. It’s been blamed for influencing a close election, and Facebook CEO Mark Zuckerberg has vowed to help get rid of it. This may not be as easy as it sounds. The problem is not just fake news but its audience. People are gullible—and not even Zuckerberg can change that.

In my book Head in the Cloud I report a survey in which 15 percent of the public believed that early humans and dinosaurs coexisted. That’s not the same as believing they’re coexisting right now, but it is an alarmingly wrong idea held by an alarmingly large share of the public.

I also found that people who get their news from social networks are less informed on average than audiences for other media. I ran a survey that included a general knowledge quiz, with items like:

• Which came first, Judaism or Christianity?

• Find South Carolina on an unlabeled U.S. map

• Name at least one of your state’s U.S. Senators

The average score for those who said they got some of their news from Facebook was 60 percent. That was 10 points less than the average score for those who listed NPR, The New York Times, or The Daily Show as news sources. Scores were even lower for Twitter (58 percent) and Tumblr (55 percent).

Fake news is not a new phenomenon. It has its roots in print. In 1979 The National Enquirer switched to color printing, leaving the tabloid publisher’s black-and-white presses idle. Rather than junk them, the publisher invented The Weekly World News. It had a successful, decades-long run in the check-out aisles, retailing obviously fake stories like Babies Living on Board Titanic!, Duck Hunters Shoot Angel!, and Fat Cat Owns 23 Old Ladies.

The Onion was also founded in print, in 1988, moving online in 1996. The Onion pays talented writers to produce fake news parodies that are often brilliant (“Mr. T to Pity Fool”). Despite that, clueless Facebook users have posted, and still post, Onion articles without realizing they’re satire.

The Onion’s success spawned dozens of knock-offs, with names like The Lightly Braised Turnip, The Daily Currant, Global Associated News, Media Mass, and National Report (many of which are now defunct). Known in the trade as spoof sites, these knock-offs rely largely on reader-generated content that is not funny; they settle for deadpan conceptual prankery. Thus these sites spew steady streams of fake news that is not readily identifiable as satire. Spoof site articles regularly get posted on Facebook. Every now and then a piece gets hundreds of thousands of reposts, not because readers think it’s funny but because they think it’s true.

This election cycle saw the democratization of fake news. Partisan fakers realized they didn’t need the spoof sites. Anyone could fake a news story or Photoshop a photo, then post it to the social networks.

How should we deal with fake news? There has been talk of Facebook assigning a truthfulness rating to stories and using that to favor the higher-rated ones. One problem with this is how to deal with satire (which is after all one of the most popular things to post on Facebook). Surely The Onion is allowed to be The Onion, and Facebook should not keep my friends from seeing an Onion story I wanted to share.

The problem is more like trolling—facetious content that isn’t obvious as such. That may be difficult to identify by algorithm. As Branscomb’s dinosaur prank shows, “obviously a joke” is in the eye of the beholder.

The pitch has always been that the Internet will improve the quality and relevance of news. Instead of being restricted to the narrow blinkers of a local newspaper or TV station, we can sample stories from around the nation and globe, tailored to our own interests. One drawback is that stories are effectively stripped of context—and sometimes the context matters. It does when we stumble onto someone’s deadpan joke, not realizing we’re the butt of it.

P.S. There is a site, Real or Satire?, that purports to tell whether any online news story is fake. I would say it’s useful except that the people who could most benefit from it are the ones least likely to use it.

Peace Through Ignorance? by William Poundstone

Charlie Chaplin in "The Great Dictator" (1940)

Is it important for a U.S. President to know geography? Recent gaffes by Gary Johnson, the former New Mexico governor who's running as the Libertarian candidate for President, have raised that question. On Sep. 8, Johnson drew a blank when asked about Aleppo, the war-torn city in Syria. He didn't do much better a few weeks later when Chris Matthews asked him to name a foreign leader he admired. Johnson stumbled, and Matthews baited him unmercifully.

Matthews: Go ahead, you gotta do this. Anywhere. Any continent. Canada, Mexico, Europe, over there, Asia, South America, Africa. Name a foreign leader that you respect.

Johnson: I guess I'm having an Aleppo moment… the former president of Mexico.

Matthews: But I'm giving you the whole world.

Johnson: I know, I know, I know.

Matthews: But I'm giving you the whole world. Anybody in the world you like. Anybody. Pick any leader.

Johnson: The former president of Mexico.

I doubt that ignorance of geography will hurt Johnson, or anyone else, with most voters. It's well established that Americans are terrible with geography. In my book, Head in the Cloud, I report a poll in which 9 percent of Americans are unable to say what country New Mexico is in. (Johnson knows that one.) Eighteen percent of Americans think the Amazon is in Africa, and well under half can name the capital of Canada. 

The tougher question is whether we need to know much geography. Our mobile devices really have changed the world. Anyone can look up the Amazon, or Aleppo, in seconds—assuming they need to know it. 

Still, as Johnson's brain freeze moments demonstrate, you can't always whip out a phone. Some voters—and certainly some journalists—do judge politicians by how well-prepared they are for high office. 

Johnson recently made a comment that's gotten as much attention as his gaffes. He said that ruinous wars, like that in Syria, happen "because we elect people who can dot the I's and cross the T's on these names and geographic locations."

His stream-of-consciousness statement is a little difficult to parse, but his point seemed to be that wars are started by geography wonks. A leader who doesn't even know where Aleppo is won't likely be sending troops there. 

Oddly enough, there is some evidence refuting that very idea. In 2014, shortly after Russian troops invaded Ukraine, political scientists Kyle Dropp, Joshua D. Kertzer, and Thomas Zeitzoff asked Americans to locate Ukraine on an unlabeled world map. As expected, most didn't do well. The surprise was this: Those who didn't have a clue where Ukraine is were more likely to support a U.S. intervention there than those who could locate it.

That's worth thinking about, the next time you hear that ignorance is bliss.

Taste Freeze by William Poundstone

The Chainsmokers' "Closer" is the #1 song on this week's Billboard Hot 100. If you're under 25, there's a good chance you're sick of hearing "Closer." If you're over 35, the odds are that you have no idea who the Chainsmokers are.

Age 33 is the point of no return when it comes to popular music. By about then, most people stop listening to top 40 hits at all. This is one of the findings of Big Data. It's even earned a name: taste freeze. Add it to the list of awkward aspects of aging with funny names. Mom jeans, dad body, taste freeze.

Taste freeze doesn't mean that 30-somethings stop listening to music, obviously. Nor does it necessarily mean that older people start listening to "Oldies" stations (though that's a particularly extreme symptom). What happens, according to data assembled by the music streaming service Spotify, is that musical tastes diversify. Some older listeners discover alternative music, classical music, world music, or jazz. They follow new songs by the aging pop stars of their youth (U2, Lady Gaga, Beyoncé) but not by younger artists.

I did a survey in which I asked Americans of all ages to identify artists from photographs. Only 56 percent of those 30 and older could identify Kanye West, which seems incredible given the amount of attention he receives. But 74 percent of the same age group recognized Snoop Dogg, whose first album was in 1992.

Taste freeze has probably always existed, but streaming has enhanced it. Today we’re all DJs. We create playlists to share with friends who are the same age and like exactly the same things. 

Another fun fact: According to Spotify, becoming a parent ages your musical tastes the equivalent of 4 years.

The Icecap Riddle by William Poundstone

"Climate scientists believe that if the North Pole icecap melted as a result of human-caused global warming, global sea levels would rise—true or false?"

SPOILER ALERT. You'll want to answer this question before reading on.

Today's New York Times reports that 2016's Presidential contenders are further apart than they've ever been on the issue of climate change. That can't surprise many people, but it's worth pointing out how new this difference is. As recently as 2008 candidates of both major parties saw virtually eye to eye on climate change. Eight years later, Hillary Clinton is making climate change "the center" of her foreign policy, while Donald Trump is dismissing climate change as a hoax.

How can we differ so much on a scientific issue that most scientists consider settled? The best answer I know of is the riddle above, devised by Dan Kahan, a Yale professor of law and psychology. Kahan has presented it in surveys and found that only about 14 percent of the American public give the right answer. Liberals and conservatives are about equally likely to get it wrong. 

The correct answer is false. If you don't believe me, then throw a few ice cubes in a glass of water. Mark the water level with a bit of masking tape. Let the ice melt. You'll find the water level is unchanged.

The key point here is that the North Pole icecap is floating in the Arctic Ocean. It is already displacing its weight of water. Should it melt, the ocean levels will be unchanged.
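The arithmetic is just Archimedes' principle. Here is a quick sketch with round numbers, ignoring the small density difference between seawater and fresh meltwater:

```python
# A floating object displaces water equal to its own weight (Archimedes).
ice_mass_kg = 1000.0   # one tonne of floating sea ice
rho_water = 1000.0     # density of water, kg per cubic meter (salinity ignored)

displaced_volume = ice_mass_kg / rho_water  # water pushed aside while frozen
meltwater_volume = ice_mass_kg / rho_water  # volume of the same mass once melted

print(displaced_volume == meltwater_volume)  # True: melting changes nothing
```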

Then why is there talk of Manhattan and Florida being inundated by rising sea levels? That's because there is a lot of polar ice on land, in Antarctica and Greenland. Should that melt, the water will pour into the ocean and raise the global sea level.

So much for the climate science. The icecap riddle is really about psychology. It demonstrates, first of all, that most people do not have a deep understanding of climate science. Instead they absorb a few basic bullet points, like that climate scientists (who may or may not be trustworthy, say politicians…) are always warning about rising ocean levels.

Of course, most people have strong political convictions about climate change, which they believe to be grounded in fact—unlike the other party's beliefs.

I explore the consequences of this in my book Head in the Cloud. One is that people tend to agree with the opinions of those around them, the people and leaders they trust. Survey responses, and votes, are often expressions of community. As Kahan puts it: "Obviously, no one will answer ‘true’ when asked, ‘true or false—you and everyone you are intimately connected to are idiots?’”

Plagiarism and Google by William Poundstone

Melania Trump is the latest in a long string of public figures accused of cribbing words from other people. That raises the question: Is there really an epidemic of celebrity plagiarism, or does it just seem that way?

Last night Jarrett Hill, an unemployed TV journalist, was in a Los Angeles Starbucks with his laptop. He noticed that Melania Trump's words at the Republican National Convention sounded familiar. He was quickly able to pull up a clip of Michelle Obama's convention speech from 2008, compare it to Trump's, and tweet about the similarity. Before broadband, that probably wouldn't have happened. The similarity might have been shrugged off; it wouldn't have been worth the trouble to check out. Clearly the Internet makes it easier to detect copying, and to shame the perpetrators. 

So why do people who ought to know better do it? A remarkable experiment suggests that our evolving relationship to the cloud also promotes copying. Harvard psychologists Daniel Wegner and Adrian F. Ward gave a trivia quiz to two groups of volunteers. One group was told they could look up the answers on the Internet; for the other group this wasn't allowed. Afterward both groups filled out questionnaires rating their knowledge, memory, and intelligence.

There was a clear connection between these self-ratings and performance on the quiz. Those who scored better on the quiz rated themselves highly for knowledge and cognitive skills. That's as you'd expect.

Less expected was that the people who'd looked up the answers rated their cognitive skills higher than those who had relied on their own knowledge. The questionnaire had people agree or disagree with statements like "I am smart." On average, the group that looked up the answers felt smarter.

As Ward, now at the University of Texas, puts it, people "become one with the cloud… [they] lose sight of where their own minds end and the mind of the Internet begins."

On the one hand, "becoming one with the cloud" has nothing to do with the ethics of stealing intellectual property. On the other hand, it has everything to do with it. We all tend to live our lives on autopilot, like a self-driving car, doing what seems proper at the time. It's only after a disaster that we invent ethical justifications. It's dumb to steal from the cloud, but the science says it can make people feel smart—and that may be one reason why it's happening.

Do Misspelled Menus Matter? by William Poundstone

We're told the Internet has made spelling and punctuation obsolete. Maybe, but it's also inaugurated a golden age of grammar shaming. There are memes for scolding your social-network friends over every error of spelling and usage, and listicles of funny misspellings in signs and menus. Does spelling matter in the emoticon age? Purists say it does. Most others aren’t so sure.

Restaurant menus are a battlefield of this culture war. I live near a restaurant with the slogan "Cuban Food at it’s Best!" Having been an editor, I can't unsee the misplaced apostrophe. These days, menus are typed by a sous-chef on a laptop, unmediated by an English major. We’ve all seen "Ceasar" rather than Caesar salad. Or worse, "mescaline" for mesclun.

"I don’t expect chefs to be writers," wrote the Washington Post’s Jane Black, "just as they don’t expect me to make my own puff pastry. But given the existence of spell-checkers (the writing equivalent of frozen puff pastry dough), the number of errors is surprising."

In my book Head in the Cloud, I try to measure the real-world value of knowledge, including spelling and grammar. In one experiment, I ran a national survey presenting a fictitious sandwich shop menu and asking people of all ages and educational backgrounds to answer a few questions about it: How appealing is the food selection? Would you try this place? How much would you be willing to pay for lunch here?

Unknown to the survey participants, each was randomly assigned to a group that saw one of two versions of the menu. In one version, the spelling and grammar were scrupulously correct. In the other, I packed in every common menu misspelling and error I could manage.

The survey did not ask anything about spelling. I wanted to see whether bad spelling would have an effect, maybe an unconscious one, on perceptions of the sandwich shop and its food.

It didn’t. By every criterion the misspelled menu was rated the same as the correct one, to within statistical margins of error. People were as likely to try the sandwich shop, to rate its food healthy, and to judge its prices fair. 

The chart shows the error bars. In all cases they overlap extensively, providing not the slightest evidence that the errors made any difference. And I’m talking about big errors like Sandwhitchs! Barbaque or vegitarian!
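For the curious, here is roughly what "overlapping error bars" means in practice. The ratings below are invented placeholders, not my survey data; the point is comparing the two groups' means and rough 95 percent confidence intervals.

```python
import math
import statistics

def mean_and_ci(ratings, z=1.96):
    """Mean and an approximate 95% confidence interval for a list of ratings."""
    m = statistics.mean(ratings)
    se = statistics.stdev(ratings) / math.sqrt(len(ratings))
    return m, (m - z * se, m + z * se)

# Invented 1-to-7 appeal ratings for the two menu versions.
correct_menu    = [5, 6, 4, 5, 7, 5, 6, 4, 5, 6, 5, 4, 6, 5, 5]
misspelled_menu = [5, 5, 4, 6, 6, 5, 4, 5, 7, 4, 5, 6, 5, 4, 6]

for label, data in [("correct", correct_menu), ("misspelled", misspelled_menu)]:
    m, (lo, hi) = mean_and_ci(data)
    print(f"{label}: mean {m:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
# If the intervals overlap heavily, the survey gives no evidence of a difference.
```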

I’m not saying that spelling doesn’t matter anywhere, in any context. But a restaurateur concerned only with the bottom line need not worry too much about it, it seems. When it comes to spelling and grammar, we’re willing to cut restaurants a good deal of slack.