Experts are bad at forecasting: Remember this, next time you see a forecast

If I ever have the time, the money and the resources, I would like to carry out an experiment. Every day on business TV channels, experts offer their forecasts on stock prices, commodity prices, the direction of the economy, politics of the nation and so on.

There are other experts making forecasts through research reports. As the British economist John Kay writes in Other People’s Money: “Most of what is called ‘research’ in the financial sector would not be recognised as research by anyone who has completed an undergraduate thesis.”

Getting back to the topic at hand, I would like to figure out how many of these forecasts eventually turned out to be correct. So, if an analyst says that he expects the price of HDFC Bank to cross Rs 1300 per share in a year’s time, did he eventually get it right?

Also, I would like to figure out whether the so-called forecasts were actually forecasts in the first place. Saying that the HDFC Bank stock price will cross Rs 1300 per share, but not saying when, is not a forecast. As Philip Tetlock and Dan Gardner write in their new book Superforecasting: The Art and Science of Prediction: “Obviously, a forecast without a time frame is absurd. And yet, forecasters routinely make them.”

When it comes to the stock market, there are two kinds of experts who make forecasts without a time frame attached. The first category is of those who keep saying that the bull market will continue, without really telling us until when. “Predicting the continuation of a long bull market in stocks can prove profitable for many years—until it suddenly proves to be your undoing,” write Tetlock and Gardner.

The second category is of those who keep saying that the bear market is on its way, without saying when. “Anyone can easily ‘predict’ the next stock market crash by incessantly warning that the stock market is about to crash,” write Tetlock and Gardner.

The broader point is that no one goes back to check whether the forecast eventually turned out to be correct. There is no measurement of how good or bad a particular expert is at making forecasts. I mean, if an expert is constantly getting his forecasts wrong, should you be listening to him in the first place?

But no one is keeping track of this, not even the TV channel.

As Tetlock and Gardner write: “Accuracy is seldom even mentioned. Old forecasts are like old news—soon forgotten—and pundits are almost never asked to reconcile what they said with what actually happened.” And since no one is keeping a record, it allows experts to keep peddling their stories over and over again, without the viewers knowing how good or bad their previous forecasts were.

“The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough. Many have become wealthy peddling forecasting of untested value to corporate executives, government officials, and ordinary people who would never think of swallowing medicine of unknown efficacy and safety,” write Tetlock and Gardner.

In fact, in the recent past, many stock market experts were recommending midcap stocks. After the Sensex started crashing, the same set of experts asked investors to stay away from midcap stocks as far as possible.

There is a great story I was told about an expert who headed the commodities desk at one of the big brokerages. He was also a regular on one of the television channels. This gentleman kept telling viewers to keep shorting oil for as long as prices were going up, and then, when prices started to fall, he asked them to start buying. This was exactly the opposite of what he should have been recommending. Obviously, anyone who followed this forecast would have lost a lot of money.

I can say from personal experience that predicting the price of oil is very difficult, given that there are so many factors that are at work. As Tetlock and Gardner write: “Take the price of oil, long a graveyard topic for forecasting reputations. The number of factors that can drive the price up or down is huge—from frackers in the United States to jihadists in Libya to battery designers in Silicon Valley—and the number of factors that can influence those factors is even bigger.”

Nevertheless, the television appearances of the commodity expert I talked about a little earlier continue. And why is that the case? Tetlock and Gardner provide the answer: “Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting—governments, businesses, and the public—don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement.” And so the story continues.
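What would such measurement actually look like? Tetlock’s own research scores experts using the Brier score, which compares the probabilities a forecaster assigned to events with what actually happened. Here is a minimal, hypothetical sketch of keeping score this way (the forecasts and outcomes below are made up purely for illustration, not drawn from any real expert):

```python
# Keeping score on probabilistic forecasts with the Brier score.
# 0 is a perfect score; higher numbers mean worse forecasting.
# All forecasts and outcomes here are hypothetical.

def brier_score(forecasts, outcomes):
    """Mean squared difference between the probabilities a forecaster
    gave and what actually happened (1 = event occurred, 0 = it did not)."""
    return sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)

# An imaginary expert's stated probabilities for four events...
probs = [0.9, 0.8, 0.7, 0.6]   # e.g. "90% chance the Sensex rises this quarter"
# ...and whether each event actually occurred.
happened = [1, 0, 1, 0]

print(round(brier_score(probs, happened), 3))  # prints 0.275
```

The point of a score like this is exactly what Tetlock and Gardner call for: once accuracy is measured, forecasts can be revised, and only with revision can forecasting improve.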

One would like to believe that forecasts are made so that people can look into the future with greater clarity. But that is not always the case. Some forecasts are made for fun. Some other forecasts are made to fulfil the human need to know what is coming. Some other forecasts are made to advance political agendas.

And still some other forecasts are made to comfort people “by assuring [them] that their beliefs are correct and the future will unfold as expected,” Tetlock and Gardner point out. Now, if only it were as simple as that.

In fact, Tetlock spent close to two decades following experts and their forecasts. In the experiment, Tetlock chose 284 people who made a living by predicting political and economic trends. Over the next 20 years, he asked them to make nearly 100 predictions each, on a variety of likely future events. Would apartheid end in South Africa? Would Mikhail Gorbachev, the leader of the USSR, be ousted in a coup? Would the US go to war in the Persian Gulf? Would the dotcom bubble burst?

By the end of the study in 2003, Tetlock had 82,361 forecasts. What he found was that there was very little agreement among these experts. It didn’t matter which field they were in or what their academic discipline was; they were all bad at forecasting. Interestingly, these experts did slightly better at predicting the future when they were operating outside the area of their so-called expertise.

It is well worth remembering these lessons the next time you come across a forecast. And that includes the forecasts made in The Daily Reckoning as well.

The column originally appeared on The Daily Reckoning on October 7, 2015