While no one can foresee the future with 100% accuracy, training your analysts to forecast objectively and work collaboratively can help you stay ahead of the competition.
You can improve your organisation’s forecasting accuracy by up to 14% with a few proven research practices.
If knowledge is power, then foreknowledge is a kind of superpower, enabling leaders to gain competitive advantage and avoid erroneous business decisions.
But are such superpowers the province of a gifted few, or can predictive skills be learned? According to Paul Schoemaker and Philip Tetlock, writing for Harvard Business Review, they can.
GOOD JUDGMENT
The Good Judgment Project (GJP) is a large-scale research initiative set up in 2011 as the result of a massive misjudgment – the US National Intelligence Council’s 2002 assertion that Iraq was actively developing weapons of mass destruction. Its aim was to determine whether it is possible to enhance prediction performance.
Led by Philip Tetlock and Barbara Mellers of the Wharton School, the GJP was one of five academic research teams competing in a four-year tournament funded by the Intelligence Advanced Research Projects Activity (IARPA).
A WINNING TEAM
The forecasters answered questions ranging from whether Greece would exit the eurozone to whether a leadership turnover in Russia was imminent.
The GJP’s research team won the tournament, besting even the intelligence community’s seasoned analysts and proving that trained generalists can outperform specialists in forecast accuracy.
WHEN SUPERFORECASTING WORKS BEST
Before going on to describe how firms can acquire the superforecasting capabilities of the GJP team, the authors issue a caveat: it will only work for certain types of judgments.
Investing in superforecasting will give no advantage in situations where the available data already allows for accurate predictions. For instance, life insurance companies can already make reliable predictions on the life expectancy of policyholders using up-to-date mortality tables.
Conversely, where problems are highly complex and difficult to quantify (the authors cite the example of trying to predict cloud patterns for a given day), improving judgment will have little impact on the accuracy of a forecast.
THE SWEET SPOT
This technique works best for a “sweet spot” of forecasts for which some data, logic and analysis can be used, but also where seasoned judgment and precise questioning are crucial.
Predicting the commercial potential of drugs in clinical trials, for example, is one such instance. Making the right decision requires scientific expertise as well as good business judgment.
Assessing job candidates is another good example. It requires using formal scoring models as well as gauging intangibles such as cultural fit and personal chemistry.
LEARNING GOOD JUDGMENT
By applying the core practices developed by the GJP research team as detailed below, you can improve your organisation’s ability to predict uncertain outcomes and provide the right solutions to the right stakeholders at the right time.
The process entails:
1) Training forecasters to eliminate bias;
2) Assembling teams of ‘superforecasters’;
3) Managing the phases of the prediction process; and
4) Tracking prediction performance and providing feedback.
TRAINING
Even an hour’s training in reasoning and debiasing can improve a firm’s forecasting accuracy by around 14%, claim the authors.
1) Probability training. The first part of the GJP’s training methodology ensures a solid grounding in probability concepts such as regression to the mean and Bayesian revision – updating probability estimates in light of new data (see the sketch after this list).
2) Recognising bias. Next, forecasters learn to identify and eliminate the key biases that skew forecasting, such as confirmation bias. They learn the psychological basis behind reliance on flawed intuition; for example, why we might see patterns in data that have no statistical basis.
3) Knowing what you don’t know. Here, participants learn how much they don’t know through confidence quizzes. For example, a forecaster might state with 90% confidence that Martin Luther King Jr. died between the ages of 40 and 55 – yet the correct answer, 39, falls outside that range.
“Participants commonly discover that half or more of their 90% confidence ranges don’t contain the true answer.”
4) Tailored training. Customising training to address areas where past performance has been poor (such as sales or R&D) can be particularly effective.
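Item 1 above mentions Bayesian revision. As an illustration only, here is a minimal sketch in Python of how a forecaster might revise a probability estimate when new evidence arrives; the scenario, numbers and likelihoods are hypothetical and are not drawn from the GJP curriculum.

```python
# Minimal sketch of Bayesian revision: updating a probability estimate
# when new evidence arrives. Scenario and numbers are hypothetical.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) via Bayes' rule."""
    numerator = p_evidence_if_true * prior
    denominator = numerator + p_evidence_if_false * (1 - prior)
    return numerator / denominator

# A forecaster starts at 30% that a competitor will launch this quarter.
prior = 0.30
# New evidence: the competitor posts job ads for a launch team.
# Assume such ads appear 80% of the time before a launch, 20% otherwise.
posterior = bayes_update(prior, p_evidence_if_true=0.8, p_evidence_if_false=0.2)
print(f"Revised probability: {posterior:.2f}")  # ~0.63
```

The point of the exercise is not the arithmetic but the habit: each new piece of evidence should move the estimate by a defensible amount, rather than being ignored or overweighted.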
CHOOSING TEAMS
In each year of the IARPA tournament, GJP forecasters working collaboratively in teams consistently outperformed individual forecasters. By discussing among themselves, forecasters could surface and counter unconscious biases and maintain effective checks and balances, thus achieving greater accuracy.
When choosing your forecasting teams, bear in mind:
1) Team members. Choose forecasters who are “cautious, humble, open-minded, analytical – and good with numbers… [who] show an alertness to bias, a knack for sound reasoning, and a respect for data”.
2) Team composition. It is important for teams to be intellectually diverse. At least one member should have domain expertise (for example, a finance professional on a budget forecasting team), but non-experts are also essential, particularly those willing to challenge the experts.
3) Team dynamics. Trust among team members – and in the leadership managing the team – is essential to eliminating bias, particularly in view of ego or reputational concerns.
MANAGING TEAMS
A team’s forecasting success depends on tracking and managing three core prediction phases:
1) Diverging. This entails exploring assumptions and approaches to finding an answer from many different angles.
2) Evaluating. The evaluation phase includes time for productive disagreement (including testing assumptions and gathering new information).
3) Converging. In this final phase, the team agrees on a prediction.
The first two phases are essential for avoiding tunnel vision. Focusing on gathering new information, testing assumptions and being mindful of the dangers of ‘anchoring’ predictions on early estimates can help.
“In each of these phases, learning and progress are fastest when questions are focused and feedback is frequent.”
TRACKING PERFORMANCE
Tracking prediction performance and giving timely feedback will improve future forecasting performance. The tracking process is twofold:
1) Brier scoring. The performance of your team is tracked using a Brier score – an equation that allows managers to reliably and consistently rank forecasters on the accuracy of their predictions (a worked example appears at the end of this section).
2) Process audit. It’s also important to conduct a process audit, asking why good or bad outcomes were achieved, and which steps led to good or bad predictions.
A process audit will uncover useful lessons for future performance that a Brier score could mask. For instance, it might reveal a lucky forecast based on faulty process hiding behind a good Brier score.
At the other extreme, a forecast built on sound process but derailed by a black-swan event – such as a bomb threat causing a store closure and thus distorting sales figures – would register a poor Brier score despite the quality of the underlying judgment.
Process can be analysed through real-time accounts of how predictions were made, records of any data used and video transcripts of meetings. Post-audit, feedback can help identify and strengthen any areas of weakness.
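For reference, the Brier score mentioned above is straightforward to compute. Below is a minimal sketch in Python for yes/no questions, with hypothetical forecasts and outcomes; the IARPA tournament scored multi-outcome questions with a variant of the same idea, so treat this as illustrative rather than the tournament’s exact formula.

```python
# Minimal sketch of Brier scoring for yes/no questions (illustrative only).
# Lower is better: 0 is perfect, 0.25 matches always guessing 50%.

def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and outcomes (1 or 0)."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Hypothetical forecaster: probabilities assigned to events that did (1)
# or did not (0) occur.
forecasts = [0.9, 0.7, 0.2, 0.6]
outcomes = [1, 1, 0, 0]
print(f"Brier score: {brier_score(forecasts, outcomes):.3f}")  # 0.125
```

Because the score rewards both accuracy and calibration, hedging every answer at 50% offers no escape: consistently confident and correct forecasters will always rank higher.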
LEADERSHIP OPPORTUNITY
By executing these processes in a customised business context, companies can take the lead in capturing and honing the competitive advantages of superforecasting.
“But companies will capture this advantage only if respected leaders champion the effort, by broadcasting an openness to trial and error, a willingness to ruffle feathers, and a readiness to expose ‘what we know that ain’t so’ in order to hone the firm’s predictive edge,” conclude the authors.