Myth #2: You Will Get Rich by Heeding the Forecasts of Experts
We have spent a great deal of time offering evidence that we are poor forecasters of the future. Disturbingly, it turns out that experts are no more prescient than the rest of us, even in their areas of primary expertise. This post describes the results of the most comprehensive and compelling study of expert fallibility to date, and draws lessons from it about how much weight (if any) we should give expert opinion in future decisions.
Of course, we – the consumers of expert pronouncements – will continue to be in thrall to experts for the same reasons that our ancestors submitted to shamans and oracles: our uncontrollable need to believe in a controllable world and our flawed understanding of the laws of chance. We generally lack the willpower and good sense to resist the snake oil on offer. Who wants to believe that, on the big questions, we could do as well by tossing a coin as by consulting accredited experts?
Philip Tetlock spent over 20 years asking some of the top experts in their fields to make predictions about the future. The idea for the experiment took shape in the early years of the Reagan administration, in the two or three years leading up to 1984. Many of you will recall that this was a time of great anxiety and tension, as the Soviets and the Americans seemed to move closer to nuclear Armageddon each day. Tetlock served on a committee charged with observing and forming opinions on American/Soviet relations. In late 1983 the Bulletin of the Atomic Scientists had moved its Doomsday Clock closer to midnight than at any time since the Cuban Missile Crisis. Liberals widely believed that Reagan was leading the country down the road to nuclear apocalypse. Conservatives, meanwhile, believed that the best realistic outcome was for the Soviet Union to adopt a neo-Stalinist mode and retreat. The dominant view on both sides of the political aisle was that nothing good was going to happen.
While Tetlock was on this committee, which included many well-known political and military strategists of the day, it was widely noted that Gorbachev was rising through the political ranks in the Kremlin. Tetlock observed, however, that no one at the time believed Gorbachev was likely to assume a leadership role in the Politburo. Indeed, it was commonly held that Gorbachev was secretly a neo-Stalinist in disguise. No one of any credibility thought that Gorbachev would execute a liberal revolution that would lead to the dissolution of the Soviet Union, and eventually to the fall of the Berlin Wall and the reunification of Germany.
Of course, that’s just what Gorbachev went on to do. Interestingly, once Gorbachev’s reforms had run their course, strategists of all stripes were eager to claim credit for having predicted just this outcome. Tetlock knew that in fact no one had predicted it. This convinced him that there would be great value in systematically keeping score on political experts, and that is just what he proceeded to do. From 1984 through 2001 Tetlock solicited frequent predictions from 284 experts in international affairs, economics, political strategy, and other complex fields. The experts were a mixture of academics, journalists, intelligence analysts, and people in various think tanks, with an average of roughly 12 years of work experience each; no political view was over- or underrepresented. Each expert made approximately 100 predictions, yielding about 28,000 predictions in total and allowing Tetlock to put the law of large numbers to good use. Experts were asked to make predictions on topics such as economic growth, inflation, unemployment, policy priorities, defense spending, leadership changes, border conflicts, and entry into or exit from international agreements.
The results of the study are far-reaching and complex. Generally they support the view that it is the way one thinks, not the depth of one’s knowledge of a topic or theory, that matters most in tests of complex prediction. Tetlock describes a spectrum of thinking styles bounded by foxes at one end and hedgehogs at the other, but that distinction is beyond the scope of this essay. We are interested more specifically in how accurately experts predicted outcomes over time, especially relative to their confidence in their own predictions.
- Experts are no better at predicting the future than the rest of us. In fact, they are less accurate than a large group of dart-throwing monkeys
- Experts (like everyone else) are unlikely to admit when they are wrong, or to revise their beliefs in the face of conflicting evidence
- Those who know a lot about a subject are more likely to predict extreme outcomes (which rarely happen), and are more overconfident in their forecasts
- Specialists are no more reliable than non-specialists in forecasting outcomes in their own domain of study
- Experts who hedge their views, are self-critical, and consider alternative outcomes are more likely to be right
- Experts who are better known and more frequently quoted are less likely to be right. Frighteningly, these experts also make the most entertaining media guests
- Experts are no better at forecasting than basic trend-following systems such as ‘no change’ or ‘continue with the same rate of change’
- Of the 284 experts who offered predictions over 18 years, not one expert demonstrated a superior forecasting ability
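The two mechanical baselines mentioned above are trivially simple, which makes the comparison all the more striking. A minimal sketch of what such rules look like (the function names and the sample data are illustrative, not from Tetlock's study):

```python
def no_change(series):
    """Predict that the next value equals the last observed value."""
    return series[-1]

def same_rate_of_change(series):
    """Extrapolate the most recent change one step forward."""
    return series[-1] + (series[-1] - series[-2])

# Hypothetical index values for some economic quantity
gdp_index = [100, 104, 110]
print(no_change(gdp_index))            # 110
print(same_rate_of_change(gdp_index))  # 116
```

Rules this crude carry no domain knowledge at all, yet the experts, on average, failed to outperform them.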
The New Yorker’s review of Tetlock’s book recounts a classic experiment that captures this overthinking at work, pitting Yale undergraduates against a rat:

“A rat was placed in a T-shaped maze. Food was placed in either the right or the left transept of the T in a random sequence such that, over the long run, the food was on the left sixty per cent of the time and on the right forty per cent. Neither the students nor (needless to say) the rat was told these frequencies. The students were asked to predict on which side of the T the food would appear each time. The rat eventually figured out that the food was on the left side more often than the right, and it therefore nearly always went to the left, scoring roughly sixty per cent—D, but a passing grade. The students looked for patterns of left-right placement, and ended up scoring only fifty-two per cent, an F. The rat, having no reputation to begin with, was not embarrassed about being wrong two out of every five tries. But Yale students, who do have reputations, searched for a hidden order in the sequence. They couldn’t deal with forty-per-cent error, so they ended up with almost fifty-per-cent error.”
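The arithmetic behind the rat’s win is simple: always choosing the more common side succeeds 60% of the time, while matching the 60/40 frequencies succeeds only 0.6² + 0.4² = 52% of the time. A quick simulation (a toy model, not from the article) makes the gap concrete:

```python
import random

random.seed(42)

TRIALS = 10_000
P_LEFT = 0.60  # food appears on the left 60% of the time

# Random food placements, left with probability 0.6
food = ["L" if random.random() < P_LEFT else "R" for _ in range(TRIALS)]

# The rat's strategy ("maximizing"): always pick the more frequent side
rat_correct = sum(side == "L" for side in food)

# The students' strategy ("probability matching"): guess left 60% of the
# time and right 40% of the time, hunting for a pattern that isn't there
guesses = ["L" if random.random() < P_LEFT else "R" for _ in range(TRIALS)]
student_correct = sum(g == s for g, s in zip(guesses, food))

print(f"Rat (always left):   {rat_correct / TRIALS:.1%}")
print(f"Students (matching): {student_correct / TRIALS:.1%}")
```

Over many trials the rat converges near 60% and the students near 52%, just as in the experiment.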
The full New Yorker article is worth reading; you will find it at http://www.newyorker.com/archive/2005/12/05/051205crbo_books1?currentPage=2
Purchase Philip Tetlock’s book at Amazon.