All sorts of business decisions increasingly rely on algorithms that extract information from relevant data sets. Tuning that process well, in a people-centred way, is the key to safeguarding your organisation’s relationships and reputation, say David A Bray and R ‘Ray’ Wang, writing for MIT Sloan Management Review.
The AI behind it all is developing continually. It is currently in its third phase – called “perception AI” – with emerging “deep learning” neural networks poised to make a major impact.
Deep learning puts data through more sophisticated levels of pattern recognition than earlier models.
“Like other forms of AI, deep learning tunes itself and learns by using data sets to produce outputs – which are then compared with empirical facts,” explain the authors.
But they warn that it’s vital to ensure your network is tuned as accurately and finely as possible, to avoid poor choices that could damage customers, products and services.
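To make the idea of “tuning” concrete, here is a minimal, illustrative sketch in Python – not the authors’ implementation – of a model adjusting its weights so that its outputs move closer to known, empirical labels. The synthetic data, the simple model and the learning rate are all assumptions for illustration.

```python
# Illustrative sketch: a model "tunes itself" by adjusting its weights so that
# its outputs move closer to the empirical facts (the known labels).
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "data set": 200 examples with 3 features, labelled by a hidden rule.
X = rng.normal(size=(200, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = (X @ true_w > 0).astype(float)           # the "empirical facts"

w = np.zeros(3)                               # untuned weights
lr = 0.1                                      # learning rate (an assumption)

for epoch in range(200):
    preds = 1.0 / (1.0 + np.exp(-(X @ w)))    # model outputs
    error = preds - y                         # compare outputs with the facts
    w -= lr * (X.T @ error) / len(y)          # nudge weights to reduce the error

accuracy = np.mean((preds > 0.5) == y)
print(f"tuned weights: {w.round(2)}, training accuracy: {accuracy:.0%}")
```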
PUT PEOPLE AT THE CENTRE OF AI DESIGN STANDARDS
Bray and Wang’s work guiding organisations in multiple industries through the adoption of AI has led them to develop design principles that put people and ethics at the heart of network tuning.
They have explored how to avoid bogus solutions in favour of creating standards that ensure benefits for the greatest number of people, both individually and collectively.
These are the three elements they consider the most important:
1) Be transparent. Give people working with deep learning as much information as they need about how it works and how it will affect them, including the way data sets tune algorithms.
2) Be understandable. All stakeholders and customers should be able to comprehend how the algorithms were tuned, what kind of data was used and how the AI’s conclusions are used by the humans who make important decisions based on that information.
3) Be ready to reverse. It’s important in a deep learning effort to be able to step backwards and ‘unlearn’ certain data if there is a danger of bias.
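One straightforward way to keep that option open – an illustrative approach, not a prescription from Bray and Wang – is to record the provenance of every training example, so that examples from a source later flagged as biased can be dropped and the network retrained without them. The record structure and source names below are hypothetical.

```python
# Hedged sketch of one way to keep a deep learning effort reversible:
# track where every training example came from, so that examples from a
# source later found to be biased can be removed and the model retrained.
from __future__ import annotations
from dataclasses import dataclass

@dataclass
class Example:
    features: list[float]
    label: float
    source: str        # provenance: which data set, vendor or survey it came from

def unlearn(training_data: list[Example], flagged_sources: set[str]) -> list[Example]:
    """Return the training data with all examples from flagged sources removed."""
    return [ex for ex in training_data if ex.source not in flagged_sources]

data = [
    Example([0.2, 1.1], 1.0, source="survey_2019"),
    Example([0.9, 0.3], 0.0, source="vendor_feed"),
    Example([0.4, 0.8], 1.0, source="survey_2019"),
]

# If "vendor_feed" is found to carry bias, drop it and retrain from scratch.
cleaned = unlearn(data, flagged_sources={"vendor_feed"})
print(f"{len(data) - len(cleaned)} examples removed; retrain on {len(cleaned)} examples")
```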
HOW TO PUT DESIGN PRINCIPLES INTO ACTION
1) Appoint data ‘ombudsmen’ or advocates. They should engage human stakeholders to ensure that appropriate, sufficiently diverse, data sets are used for the questions being asked, in order to avoid poorly tuned networks.
2) Test data for biases with a system of ‘mindful monitoring’. This method sorts data into three pools – trusted, not yet fully vetted (or ‘queued’), and unreliable – comparing them and weeding out irrelevant or problematic data (a sketch of how this and the next step might look follows this list).
3) Set boundaries around how much reliance is placed on data and what is done with it. Bray and Wang say: “The organisation may use data sets on financial transactions for the last seven years to inform what credit cards to offer customers – but it will not use its deep learning system to make credit card offers on the basis of gender or race, which would be immoral and illegal.”
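Below is a minimal sketch of how steps 2 and 3 might look in code. The three pool names follow the article, but the triage logic, the source lists and the protected-attribute list are illustrative assumptions rather than the authors’ specification.

```python
# Illustrative sketch of 'mindful monitoring' (three data pools) and of
# boundary-setting (refusing to learn from protected attributes).
from __future__ import annotations

TRUSTED, QUEUED, UNRELIABLE = "trusted", "queued", "unreliable"
PROTECTED_ATTRIBUTES = {"gender", "race"}        # never used to tune the network

def triage(record: dict, vetted_sources: set[str], known_bad_sources: set[str]) -> str:
    """Mindful monitoring: place each record into one of the three pools."""
    if record["source"] in known_bad_sources:
        return UNRELIABLE
    if record["source"] in vetted_sources:
        return TRUSTED
    return QUEUED                                # held back until a human reviews it

def strip_protected(record: dict) -> dict:
    """Boundary-setting: drop attributes the organisation refuses to learn from."""
    return {k: v for k, v in record.items() if k not in PROTECTED_ATTRIBUTES}

record = {"source": "card_transactions", "spend_7yr": 18250.0, "gender": "F"}
pool = triage(record, vetted_sources={"card_transactions"}, known_bad_sources=set())
if pool == TRUSTED:
    training_record = strip_protected(record)
    print(pool, training_record)   # -> trusted {'source': 'card_transactions', 'spend_7yr': 18250.0}
```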
At a time when companies are increasingly introducing AI and deep learning, putting these people-centred methods in place will protect your organisation from the serious risk posed by poor design and bad data.