Experts are bad at forecasting: Remember this, next time you see a forecast

If I ever have the time, the money and the resources, I would like to carry out an experiment. Every day on business TV channels, experts offer their forecasts on stock prices, commodity prices, the direction of the economy, politics of the nation and so on.

There are other experts making forecasts through research reports. As the British economist John Kay writes in Other People’s Money: “Most of what is called ‘research’ in the financial sector would not be recognised as research by anyone who has completed an undergraduate thesis.”

Getting back to the topic at hand, I would like to figure out how many of these forecasts eventually turned out to be correct. So, if an analyst says that he expects the price of HDFC Bank to cross Rs 1300 per share in a year’s time, did he eventually get it right?

Also, I would like to figure out whether these so-called forecasts were forecasts at all in the first place. Saying that the HDFC Bank stock price will cross Rs 1300 per share, but not saying when, is not a forecast. As Philip Tetlock and Dan Gardner write in their new book Superforecasting: The Art and Science of Prediction: “Obviously, a forecast without a time frame is absurd. And yet, forecasters routinely make them.”

When it comes to the stock market, there are two kinds of experts who fall into this category of making a forecast without a time frame attached to it. One category is of those who keep saying that the bull market will continue, without telling us until when. “Predicting the continuation of a long bull market in stocks can prove profitable for many years—until it suddenly proves to be your undoing,” write Tetlock and Gardner.

The second category is of those who keep saying that the bear market is on its way, without saying when. “Anyone can easily ‘predict’ the next stock market crash by incessantly warning that the stock market is about to crash,” write Tetlock and Gardner.

The broader point is that no one goes back to check whether the forecast eventually turned out to be correct. There is no measurement of how good or bad a particular expert is at making forecasts. I mean, if an expert is constantly getting his forecasts wrong, should you be listening to him in the first place?

But no one is keeping track of this, not even the TV channels.

As Tetlock and Gardner write: “Accuracy is seldom even mentioned. Old forecasts are like old news—soon forgotten—and pundits are almost never asked to reconcile what they said with what actually happened.” And since no one is keeping a record, it allows experts to keep peddling their stories over and over again, without the viewers knowing how good or bad their previous forecasts were.

“The one undeniable talent that talking heads have is their skill at telling a compelling story with conviction, and that is enough. Many have become wealthy peddling forecasting of untested value to corporate executives, government officials, and ordinary people who would never think of swallowing medicine of unknown efficacy and safety,” write Tetlock and Gardner.

In fact, in the recent past, many stock market experts were recommending midcap stocks. After the Sensex started crashing, the same set of experts asked investors to stay away from midcap stocks as far as possible.

There is a great story I was told about an expert who was the head of the commodities desk at one of the big brokerages. He was also a regular on one of the television channels. This gentleman kept telling viewers to keep shorting oil for as long as prices were going up, and when prices started to fall, he asked them to start buying. This was exactly the opposite of what he should have been recommending. Obviously, anyone who followed this forecast would have lost a lot of money.

I can say from personal experience that predicting the price of oil is very difficult, given that there are so many factors that are at work. As Tetlock and Gardner write: “Take the price of oil, long a graveyard topic for forecasting reputations. The number of factors that can drive the price up or down is huge—from frackers in the United States to jihadists in Libya to battery designers in Silicon Valley—and the number of factors that can influence those factors is even bigger.”

Nevertheless, the television appearances of the commodity expert I talked about a little earlier continue. And why is that the case? Tetlock and Gardner provide the answer: “Accuracy is seldom determined after the fact and is almost never done with sufficient regularity and rigor that conclusions can be drawn. The reason? Mostly it’s a demand-side problem: The consumers of forecasting—governments, businesses, and the public—don’t demand evidence of accuracy. So there is no measurement. Which means no revision. And without revision, there can be no improvement.” And so the story continues.

One would like to believe that forecasts are made so that people can look into the future with greater clarity. But that is not always the case. Some forecasts are made for fun. Some other forecasts are made to fulfil the human need to know what is coming. Some other forecasts are made to advance political agendas.

And still other forecasts are made to comfort people “by assuring [them] that their beliefs are correct and the future will unfold as expected,” Tetlock and Gardner point out. If only it were as simple as that.

In fact, Tetlock spent close to two decades following experts and their forecasts. In the experiment, Tetlock chose 284 people who made a living by predicting political and economic trends. Over the next 20 years, he asked them to make nearly 100 predictions each, on a variety of likely future events. Would apartheid end in South Africa? Would Mikhail Gorbachev, the leader of the USSR, be ousted in a coup? Would the US go to war in the Persian Gulf? Would the dotcom bubble burst?

By the end of the study in 2003, Tetlock had 82,361 forecasts. What he found was that there was very little agreement among these experts. It didn’t matter which field they were in or what their academic discipline was; they were all bad at forecasting. Interestingly, these experts did slightly better at predicting the future when they were operating outside the area of their so-called expertise.

It is well-worth remembering these lessons the next time you come across a forecast. And that includes the forecasts made in The Daily Reckoning as well.

The column originally appeared on The Daily Reckoning on October 7, 2015

Why most economists did not see the rupee crash coming

Vivek Kaul
Economists and analysts have turned bearish on the future of the rupee over the last couple of months. But very few of them predicted the crash of the rupee. Among the few who did were SS Tarapore, a former deputy governor of the Reserve Bank of India, and Rajeev Malik of CLSA.
Tarapore felt that the rupee should be closer to 70 to a dollar. As he pointed out in a column published in The Hindu Business Line on January 24, 2013: “With the inflation rate persistently above that in the major industrial countries, the rupee is clearly overvalued. Adjusting for inflation rate differentials, the present nominal dollar-rupee rate of around $1 = Rs 54 should be closer to $1 = Rs 70. But our macho spirits want an appreciation of the rupee which goes against fundamentals.”
Rajeev Malik of CLSA said something along similar lines in a column published on Firstpost on January 31, 2013: “The worsening current account deficit is partly signalling that the rupee is overvalued. But the RBI and everyone else are missing that clue,” he wrote. The current account deficit is the difference between the total value of a country’s imports and the sum of the total value of its exports and net foreign remittances.
What Tarapore and Malik said towards the end of January turned out to be true towards the end of May. The rupee was overvalued, and it has depreciated around 20% against the dollar since then. The question is: why did most economists and analysts not see the rupee crash coming, when there was enough evidence pointing towards it?
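Tarapore’s “closer to Rs 70” figure comes from a standard purchasing-power-parity-style calculation: compound the inflation differential between India and the major industrial countries onto a base-year exchange rate. The sketch below shows only the mechanics; the base rate, inflation rates and time span are assumed round numbers for illustration, not the actual inputs he used.

```python
# Illustrative sketch of adjusting an exchange rate for inflation
# differentials. All inputs below are assumptions, not Tarapore's numbers.

def inflation_adjusted_rate(base_rate, domestic_inflation, foreign_inflation, years):
    """Compound the yearly inflation differential onto a base-year rupee/dollar rate."""
    rate = base_rate
    for _ in range(years):
        rate *= (1 + domestic_inflation) / (1 + foreign_inflation)
    return rate

# Suppose the rupee was fairly valued at 44 to a dollar seven years earlier,
# with Indian inflation running ~9% a year against ~2% in the United States.
fair_value = inflation_adjusted_rate(44, 0.09, 0.02, 7)
print(round(fair_value, 1))  # roughly 70 to a dollar
```

With those assumed inputs the “fair” rate works out to roughly 70 to a dollar, well above the nominal 54 that prevailed in January 2013. That gap is the sense in which the rupee was overvalued.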
One possible explanation lies in what Nassim Nicholas Taleb calls the turkey problem (something I have talked about in a slightly different context earlier). As Taleb writes in his latest book Antifragile: “A turkey is fed for a thousand days by a butcher; every day confirms to its staff of analysts that butchers love turkeys “with increased statistical confidence.” The butcher will keep feeding the turkey until a few days before Thanksgiving. Then comes that day when it is really not a very good idea to be a turkey. So, with the butcher surprising it, the turkey will have a revision of belief—right when its confidence in the statement that the butcher loves turkeys is maximal … the key here is such a surprise will be a Black Swan event; but just for the turkey, not for the butcher.”
The Indian rupee moved in the range of 53.8-55.7 to a dollar between November 2012 and the end of May 2013. This would have led economists to believe that the rupee would continue to remain stable against the dollar. The logic here was that the rupee would be stable against the dollar in the days to come because it had been stable against the dollar in the recent past.
While this is a possible explanation, there is a slight problem with it. It tends to assume that economists and analysts are a tad dumb, which they clearly are not. There is a little more to it. Economists and analysts essentially feel safe in a herd. As Adam Smith, the man referred to as the father of economics, once asserted, “Emulation is the most pervasive of human drives.”
An economist or an analyst may have figured out that the rupee would crash in the time to come, but he just wouldn’t know when. And given that, he would be risking his reputation by suggesting the obvious. As John Maynard Keynes once wrote, “Worldly wisdom teaches that it is better for reputation to fail conventionally than to succeed unconventionally.”
An economist or analyst predicting the rupee crash at the beginning of the year would have been proven wrong for almost six months before he was finally proven right. This is a precarious situation to be in, one that economists and analysts like to avoid. Hence, they tend to go with what everyone else is predicting at a particular point of time.
Research has shown this very clearly. As Mark Buchanan writes in Forecast: What Physics, Meteorology and the Natural Sciences Can Teach Us About Economics: “Financial analysts may claim to be weighing information independently when making forecasts of things like inflation…but a study in 2004 found that what analysts’ forecasts actually follow most closely is other analysts’ forecasts. There’s a strong herding behaviour that makes the analysts’ forecasts much closer to one another than they are to the actual outcomes.” And that explains to a large extent why most economists turned bearish on the rupee after it crashed against the dollar. They were just following their herd.
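The herding pattern Buchanan describes can be made concrete with a toy calculation: a set of forecasts that sit very close to one another while all missing the eventual outcome. Every number below is invented for illustration.

```python
# Toy illustration of analyst herding: the forecasts cluster around each
# other far more tightly than around the actual outcome. Numbers invented.
from statistics import mean, pstdev

forecasts = [5.1, 5.2, 5.0, 5.3, 5.1]  # hypothetical inflation forecasts (%)
outcome = 7.4                          # what inflation actually turned out to be

spread_among_analysts = pstdev(forecasts)                     # how far apart they are
error_vs_outcome = mean(abs(f - outcome) for f in forecasts)  # how far off they are

print(round(spread_among_analysts, 2))  # 0.1: the forecasts hug one another
print(round(error_vs_outcome, 2))       # 2.26: and they miss the outcome together
```

A consensus that tight offers no guarantee of accuracy; it only guarantees that when the forecasts are wrong, they are all wrong together.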
There is another possible explanation for economists and analysts missing the rupee crash. As Dylan Grice, formerly an analyst with Societe Generale and now the editor of the Edelweiss Journal, put it in a report titled What’s the point of the macro? dated June 15, 2010: “Perhaps a more important thought is that we’re simply not hardwired to see and act upon big moves that are predictable.”
A generation of economists has grown up studying and believing in the efficient market hypothesis. It basically states that financial markets are largely efficient, meaning that at any point of time they have taken into account all the information that is available. Hence, the markets are believed to be in a state of equilibrium, and they move only once new information comes in. As Buchanan writes: “the efficient market theory doesn’t just claim that information should move markets. It claims that only information moves markets. Prices should always remain close to their so-called fundamental values – the realistic value based on accurate consideration of all information concerning the long-term prospects.”
What does this mean in the context of the rupee before it crashed? At 55 to a dollar it was rightly priced, having incorporated all the information, from inflation to the current account deficit, into its price. And given this, there was no chance of a crash, or what economists and analysts like to call big outlier moves.
Benoit Mandelbrot, a mathematician who spent considerable time studying finance, distinguished between uncertainty that is mild and uncertainty that is wild. Dylan Grice explains these two kinds of uncertainty through two different examples.
As he writes: “Imagine taking 1000 men at random and calculating the sample’s average weight. Now suppose we add the heaviest man we can find to the sample. Even if he weighed 600kg – which would make him the heaviest man in the world – he’d hardly change the estimated average. If the sample average weight was similar to the American average of 86kg, the addition of the heaviest man in the world (probably the heaviest ever) would only increase the average to 86.5kg.”
This is mild uncertainty.
Then there is wild uncertainty, which Dylan Grice explains through the following example: “For example, suppose instead of taking the weight of our 1000 American men, we took their wealth. And now, instead of adding the heaviest man in the world we took one of the wealthiest, Bill Gates. Since he’d represent around 99.9% of all the wealth in the room he’d be massively distorting the measured average so profoundly that our estimates of the population’s mean and standard deviation would be meaningless…If weight was wildly distributed, a person would have to weigh 30,000,000kg to have a similar effect,” writes Grice.
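Grice’s two samples are easy to check with a few lines of arithmetic. The 86kg and 600kg figures are his; the $100,000 average wealth and the $70 billion figure for Bill Gates are assumed round numbers for illustration.

```python
# Mild uncertainty: one extreme weight barely moves the average.
weights = [86.0] * 1000 + [600.0]         # 1000 average men plus the heaviest man
avg_weight = sum(weights) / len(weights)
print(round(avg_weight, 1))               # 86.5, up from 86.0

# Wild uncertainty: one extreme fortune dominates the average entirely.
wealth = [100_000.0] * 1000 + [70e9]      # 1000 average men plus Bill Gates (assumed $70bn)
gates_share = wealth[-1] / sum(wealth)
print(round(gates_share * 100, 1))        # 99.9: one man holds nearly all the wealth
```

The average weight moves by half a kilogram; the average wealth becomes a statement about one person. That is the difference between a distribution where outliers are negligible and one where outliers are the whole story.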
Financial markets are wildly random, not mildly random as economists tend to believe. This means that financial markets can have big crashes. But given the belief that economists have in the efficient market hypothesis, most of them can’t see any crash coming.
In fact, when it comes to worst case predictions, it is best to remember a story that Howard Marks tells in his book The Most Important Thing (and which Dylan Grice reproduced in his report titled Turning “Minimum Bullish” On Eurozone Equities dated September 8, 2011). As Marks writes: “We hear a lot about “worst case” projections, but they often turn out not to be negative enough. I tell my father’s story of the gambler who lost regularly. One day he heard about a race with only one horse in it, so he bet the rent money. Halfway around the track the horse jumped over the fence and ran away. Invariably things can get worse than people expect. Maybe “worst case” means “the worst we have seen in the past”. But it doesn’t mean things can’t be worse in the future.”
Disclosure: The examples of SS Tarapore and Rajeev Malik were pointed out by the Firstpost editor R Jagannathan in an earlier piece.
The article originally appeared on August 26, 2013
(Vivek Kaul is a writer. He tweets @kaul_vivek)