
Business forecasting: use your own judgement

All businesses must operate within societies that are rife with predictions.

The predictions, on everything from the year-end level of the stock market to the withdrawal of US troops from Iraq, are produced by pundits. Some of them are organised into think tanks of one kind or another, some are professional commentators in the media, some are colleagues and competitors, and so on. But how good are their predictions? And is one type or set of pundits more reliable than others?

Don’t waste your time trying to find answers. There aren’t any. A new book by Philip Tetlock, ‘Expert Political Judgment: How Good Is It? How Can We Know?’, is based on 20 years of study of 284 people who make money from ‘commenting or offering advice on political and economic trends’. He wanted answers to questions that we should all ask ourselves, but hardly ever do. Like:

  • How do I reach my judgments?
  • How do I react when my prediction proves wrong?
  • How do I evaluate new information that doesn’t support my views?
  • How do I assess the probability that rival theories and predictions are accurate?

Tetlock got to the heart of the matter by offering the 284 respondents ‘three possible futures’:

  • The status quo will continue much the same
  • Whatever is being studied will increase (say, growth)
  • It will decrease (say, recession)

Louis Menand, reviewing the book in The New Yorker, says that the results of this exercise were ‘unimpressive’. However, I am very impressed – by the general incompetence. The experts would have done better if they had just assigned an equal probability (33.33%) to the three alternatives. In other words, to quote Menand, the pundits ‘are poorer forecasters than dart-throwing monkeys’.

Some other choice insults: specialists are not significantly more reliable than non-specialists; the more famous the forecaster, the more overblown the forecasts; expertise and experience do not make someone a better reader of the evidence; experts undermine themselves by falling in love with their supposed expertise; they judge contrary evidence much more harshly than evidence supporting their predictions.

Then there’s hindsight. An outcome is always one of several possible futures, but the past, by definition, leads only to the one future that actually occurred; that is why, looking back, the result seems to have been inevitable all along. Another fact is that scenarios with more variables (which most of us find more appealing) are less likely to come true. If two independent things must both happen, the probability of the combination is mathematically lower than that of either event on its own.
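
A quick worked example (the figures here are illustrative, not taken from Tetlock’s study): suppose one event has a 70% chance of happening and an independent second event has a 60% chance. Then

  P(A and B) = P(A) × P(B) = 0.70 × 0.60 = 0.42

The combined scenario, at 42%, is less likely than either of its ingredients, even though the richer two-part story usually feels more convincing.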

There’s much more of this fascinating stuff. Menand’s final verdict is brisk: think for yourself. I warmly endorse that. Don’t be impressed by presumed expertise. Challenge every view and every alleged fact rigorously, including your own predictions. If you’re acting on forecasts, head for a failsafe position. Remember that the closer a predicted event lies, the more accurately it can be ‘forecast’. Using that knowledge, concentrate on getting the most accurate reading of what is actually happening in the here and now. Getting the present wrong is inexcusable, but all too common.

Almost always, such misreadings arise from wishful thinking. The pundit wants something to happen (or not happen). In a famous example, the US car industry refused to accept that more women going out to work meant more two-car families, with a greater likelihood that one of the two would be a smaller, more economical model. The Japanese thus snatched a huge chunk of the market from under their noses. And then there’s the self-fulfilling prophecy. The British car industry, not to be outdone, maintained that TV advertising couldn’t sell cars (because the car moguls didn’t want to pay for the ads). By not advertising, they ‘proved’ their case. When those same cunning Japanese took large advantage of this loophole (or loopiness), the theory was disproved in a trice.

Robert Heller