AI brings great benefits, but the potential pitfalls are huge. Here’s how to reap the rewards of this new technology while managing its risks.
From shopping to the complex world of medicine, we’re already seeing major benefits from artificial intelligence, and the McKinsey Global Institute estimates that by 2030 the technology could contribute an extra $13 trillion to the world economy. But there’s a downside. We don’t yet understand the full impact AI will have on our lives. It comes with a health warning: Handle with care.
Writing for McKinsey Quarterly, Benjamin Cheatham, Kia Javanmardian and Hamid Samandari explain that, if poorly adopted, AI has the potential to cause privacy violations, discrimination, accidents, manipulation of political systems, and even national-security breaches and loss of life. Here the authors point out the pitfalls that accompany artificial intelligence and provide an AI risk-management framework that works.
HOW AI GOES WRONG
When AI goes wrong, the cause often seems so obvious in hindsight that people rightly ask why nobody spotted the problem before it was too late. We don’t yet fully comprehend the nature and scope of AI, but we can identify some of its risks and begin to understand how they interrelate, as well as what drives them.
1) Technology and data. AI feeds on data, but with so many disparate sources – from the internet of things to social media, mobile apps and much more – how do you verify that the algorithms ingesting, sorting and linking that data have done so without inadvertently revealing confidential information or contravening privacy regulations? One basic safeguard, sketched at the end of this item, is to screen incoming records for anything that looks like personal data.
And what of the hardware and software running AI systems? They don’t always behave as they should – data can be missed or lost, and that affects outcomes.
So far, the authorities implementing the EU’s General Data Protection Regulation (GDPR) and California’s Consumer Privacy Act have adopted a relatively lenient approach to data-management breaches relating to AI, but that’s already changing.
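As a concrete, if simplified, illustration of that safeguard: before records from external feeds reach an analytics pipeline, they can be scanned for values that look like personal data. The Python sketch below is illustrative only – the patterns, field names and example record are invented for the purpose – and a real privacy control would go much further.

```python
import re

# A minimal sketch, not a production privacy control: before records from
# third-party feeds (IoT devices, social media, mobile apps) reach a model
# pipeline, scan free-text fields for values that look like personal data.
# The patterns and field names below are illustrative assumptions.
PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "phone": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def flag_possible_pii(record: dict) -> list[str]:
    """Return 'field:pattern' flags for values that look like personal data."""
    flags = []
    for field, value in record.items():
        if not isinstance(value, str):
            continue
        for name, pattern in PII_PATTERNS.items():
            if pattern.search(value):
                flags.append(f"{field}:{name}")
    return flags

# Example: a record scraped from a hypothetical mobile-app feed
record = {"user_comment": "Call me on +44 7700 900123", "country": "UK"}
print(flag_possible_pii(record))  # ['user_comment:phone']
```

Flagged records can then be quarantined or anonymised before any model sees them – one small step towards the kind of verification the question above demands.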
2) Fraud. Fraudsters use AI to steal individuals’ identities by piecing together the disparate fragments of their digital footprints. If you don’t treat all the information you collect to drive your own AI systems as sensitive data, you’re failing your customers, and both they and the regulators will make you pay when it all goes wrong.
3) Bias. When “a population is underrepresented in the data used to train the model”, results are skewed or unstable – particular groups have been refused finance with no explanation and no recourse to a transparent appeals process. A simple representation check, sketched after this item, is one way to spot that kind of skew early.
If you think your firm is among the minority that does not use AI, or is only experimenting with the technology, and therefore cannot be affected by bias flaws in the models, think again. Many software suppliers update their platforms with AI modules which may interact with your own data in unpredictable ways.
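The representation check mentioned above can be as simple as comparing each group’s share of the training data with its share of the population you intend to serve, and comparing outcome rates across groups. The Python sketch below is illustrative only: the column names, the reference shares and the example numbers are assumptions, and passing such a check is nowhere near a full fairness audit.

```python
import pandas as pd

# A minimal sketch, assuming a tabular training set with a hypothetical
# 'region' attribute and an 'approved' outcome column. It flags groups that
# are underrepresented relative to a reference share of the population and
# shows how outcome rates differ between groups - a first, crude look at
# skew, not a fairness audit.
def representation_report(df, group_col, outcome_col, reference_shares, min_ratio=0.8):
    shares = df[group_col].value_counts(normalize=True)
    rows = []
    for group, expected in reference_shares.items():
        observed = shares.get(group, 0.0)
        approval = df.loc[df[group_col] == group, outcome_col].mean()
        rows.append({
            "group": group,
            "share_in_data": round(float(observed), 3),
            "share_in_population": expected,
            "underrepresented": observed < min_ratio * expected,
            "approval_rate": round(float(approval), 3) if pd.notna(approval) else None,
        })
    return pd.DataFrame(rows)

# Example with made-up numbers: 'south' is underrepresented in the data and
# has a much lower approval rate than 'north'.
df = pd.DataFrame({
    "region": ["north"] * 90 + ["south"] * 10,
    "approved": [1] * 70 + [0] * 20 + [1] * 3 + [0] * 7,
})
print(representation_report(df, "region", "approved",
                            reference_shares={"north": 0.6, "south": 0.4}))
```

A result like this does not prove discrimination, but it is exactly the sort of signal that should trigger a closer look before anyone is refused finance on the model’s say-so.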
4) People. The design of the human-machine interface is fraught with difficulty: driverless cars have the potential to injure pedestrians, but how and under what circumstances should a manual override be enabled?
Humans get things wrong too: “Behind the scenes in the data-analytics organisation, scripting errors, lapses in data management, and misjudgments in model-training data can easily compromise fairness, privacy, security and compliance.” People are biased – when a sales force is particularly good at selling to particular groups, it can skew the data being fed to AI models. People also lie, and people sabotage.
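Many of those behind-the-scenes lapses can be caught with routine checks run before any training starts. The Python sketch below shows the kind of basic validation that might surface them; the column names, thresholds and example data are illustrative assumptions rather than a recommended standard.

```python
import pandas as pd

# A minimal sketch of a pre-training sanity check for the kinds of lapses
# described above. Thresholds and column names are illustrative assumptions.
def basic_data_checks(df, label_col, segment_col):
    issues = []
    # 1) Missing values that a hurried script might silently drop or fill in.
    missing = df.isna().mean()
    for col, rate in missing[missing > 0.05].items():
        issues.append(f"{col}: {rate:.0%} missing")
    # 2) Duplicate rows, a common symptom of a broken ingestion job.
    if df.duplicated().any():
        issues.append(f"{int(df.duplicated().sum())} duplicate rows")
    # 3) Outcome rates by segment - e.g. a sales force that favours one group
    #    will leave its fingerprints here.
    rates = df.groupby(segment_col)[label_col].mean()
    if rates.max() - rates.min() > 0.2:
        issues.append(
            f"outcome rate ranges from {rates.min():.2f} to {rates.max():.2f} across segments"
        )
    return issues

# Example with made-up data: one field has missing values and outcomes differ
# sharply between segments.
df = pd.DataFrame({
    "income": [40, None, 55, 60, 52, 48],
    "segment": ["a", "a", "a", "b", "b", "b"],
    "approved": [1, 1, 1, 0, 0, 1],
})
print(basic_data_checks(df, label_col="approved", segment_col="segment"))
```

None of this guarantees fairness or compliance, but it makes routine human errors visible before they are baked into a model.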
HOW TO MITIGATE THE RISKS OF AI
Understanding the risks of AI means you have the opportunity to spot problems before they catch you out. Here’s how to foster an organisation-wide AI risk-management strategy.
1) Clarity. Assemble leaders from across the business for “a clear-eyed look at the company’s existing risks and how they might be exacerbated by AI-driven analytics efforts… at new risks AI enables, or that AI itself could create”. Knowing your risk exposure enables you to prioritise – risk identification is a well-understood business function, and one that can be adapted to focus on AI.
2) Breadth. Your efforts to analyse and mitigate the risks of AI need to be broad-based if they are to be thorough. Invite everyone from C-suite leaders to experts in legal, IT, security and analytics to help you devise an AI risk strategy which encompasses both in-house and third-party-initiated AI. Your strategy should include scenario and fallback planning to enable swift identification of, and reaction to, as wide a range of problems as possible.
3) Depth. Supplement company-wide AI governance with deeper investigation of AI on a case-by-case basis to determine and respond to the specific risks that come with each application of AI technology.
4) Training. Make sure staff know when and how AI is used within the organisation and what the governance framework is. Let your employees know how the organisation is acting to ensure fairness and protect customer data, and what their own role is in minimising risk.
5) Review. “AI-driven analytics is an ongoing effort rather than a one-and-done.” Regular reviews of your governance framework keep you up to date with “regulatory changes, industry shifts, legal interpretations (such as emerging GDPR case law), evolving customer expectations and rapidly changing technology”.
We don’t yet fully understand the true nature or extent of the risks associated with AI. Engaging staff from across the organisation to help develop better systems can help us recognise where something is likely to go wrong, and what might be done about it. That way, some of the risks of this exciting new technology can be averted.