How to turn AI from a rival into a teammate

A productive partnership between staff and AI is possible, but only if you implement it in stages, with full transparency, and always leave humans in charge.

AI promises efficiency, speed, and consistency, but people don’t trust it. According to the UK’s Guardian newspaper, six million UK workers fear losing their jobs to AI. Academics and executives echo this view. Writing for Harvard Business Review, Boris Babic, Daniel L. Chen, Theodoros Evgeniou and Anne-Laure Fayard present four steps to a human/AI partnership based on trust and transparency.

The dark side of AI

In 2016, investigative journalists from ProPublica revealed that COMPAS, an AI program judges used to determine defendants’ likelihood of reoffending, was racially biased. Moreover, the algorithm underpinning its decisions was a trade secret belonging to the manufacturer, so the criteria the system used to make its judgements were hidden.

Stories like this fuel paranoia about the use of AI. In the workplace too, employees are suspicious of AI because they believe:

  1. AI is in competition with humans. Because AI can process vastly more data than humans, and operate with total consistency, many fear AI will replace them.
  2. AI replaces human agency with machine control. If, for example, an AI system flags a bank transaction and staff cannot undo or override the decision, they may view AI as a tyrannical overseer.
  3. AI invades privacy. Staff “worry that if they freely interact with the system and make mistakes, they might later suffer for them”.
  4. AI is opaque. Humans cannot relate to a system whose “values, desires, and intentions” are unclear.

Some of our human worries are justified. “Over the long term, new technologies may create more new jobs than they destroy but meanwhile labour markets may be painfully disrupted.” But AI is already here and will only become more prevalent. Your task is to ensure your AI program turns the technology from a threat into an opportunity staff can embrace.

A healthy human/AI partnership

Make the human/machine interface a partnership in which the human is in charge and transparency is guaranteed. AI implementation should be staged, should involve your staff in its design, and must prioritise increasing human agency, not replacing it. Here is a four-step process to achieve a healthy human/AI interaction.


Step one: AI as assistant

Introducing AI to relieve employees of repetitive tasks so they can work on more interesting projects is a positive way to begin implementing the technology. At this first stage of relationship building, it’s important to win trust. Make it clear that the purpose of the AI is to free staff to be more productive in other areas.

AI can assist with:

  • Sorting data. This technology has been around since the 1990s – Netflix and Amazon use it to sort through thousands of products and bring customers what is most relevant to them.
  • Natural language processing. AI can monitor news feeds for relevant developments and, by analysing the use of language, assess current sentiment around a particular topic. Marble Bar Asset Management (MBAM) uses the technology to “help portfolio managers filter through high volumes of information about corporate events, news developments, and stock movements”.
  • Making suggestions. Also called ‘judgemental bootstrapping’, AI makes suggestions based on an employee’s previous decisions. Algorithms help airline catering managers to optimise their orders for each flight based on “all relevant historical data, including food and drink consumption on the route in question and even past purchasing behaviour by passengers on the manifest for that flight”.
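The ‘judgemental bootstrapping’ idea above can be sketched as a simple suggestion-plus-override loop, in which the manager’s final decision feeds back into the history the next suggestion is drawn from. The routes, figures, and averaging rule below are all hypothetical, illustrative stand-ins, not the actual airline-catering algorithm:

```python
# Hypothetical sketch: suggest a catering order for a flight from past
# consumption on the same route, then let the manager override. The
# final (possibly overridden) order is recorded, so human judgement
# becomes part of the data the next suggestion is built on.
from statistics import mean

past_orders = {  # route -> meals actually consumed on past flights
    "LHR-JFK": [112, 118, 109, 121],
    "LHR-DXB": [87, 90, 84],
}

def suggest_order(route: str) -> int:
    """Nudge: the historical mean for this route, rounded."""
    return round(mean(past_orders[route]))

def record_decision(route: str, final_order: int) -> None:
    """The manager's final order feeds back into the history."""
    past_orders[route].append(final_order)

suggestion = suggest_order("LHR-JFK")   # mean of past consumption
final = max(suggestion, 120)            # manager overrides upward
record_decision("LHR-JFK", final)
```

The manual override is the point: the human stays in control, and the system learns from the override rather than fighting it.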

In all these cases, manual overrides ensure people remain in control and enable the AI technology to learn from its human counterparts.


Step two: AI as monitor

“Judges grant political asylum more frequently before lunch than after, they give lighter prison sentences if their NFL team won the previous day than if it lost, and they will go easier on a defendant if it’s the latter’s birthday.”

AI helps iron out the flaws in human judgment by comparing each decision with both the decision-maker’s previous rulings, and a defined set of legal variables. When AI is employed in a monitoring role, it supports good decision making, but does not override it.

“Using [AI] should be like a dialogue, in which the algorithm provides nudges according to the data it has while the human teaches the AI by explaining why he or she overrode a particular nudge.”
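That dialogue can be sketched as a simple loop: the algorithm nudges, the human accepts or overrides, and every override is logged together with its reason so the system has something to learn from. The transaction example and the risk threshold below are invented for illustration:

```python
# Hypothetical sketch of the monitoring "dialogue": the system nudges,
# the human always has the final say, and overrides are recorded with
# an explanation for later model improvement.
overrides = []  # (case, nudge, decision, reason) -- training signal

def nudge(transaction: dict) -> str:
    """Flag transactions above a (hypothetical) risk threshold."""
    return "flag" if transaction["amount"] > 10_000 else "allow"

def human_decision(transaction: dict, suggestion: str,
                   decision: str, reason: str = "") -> str:
    """Apply the human's decision; log it if it overrides the nudge."""
    if decision != suggestion:
        overrides.append((transaction, suggestion, decision, reason))
    return decision

tx = {"amount": 15_000, "payee": "long-term supplier"}
final = human_decision(tx, nudge(tx), "allow",
                       reason="regular payment to a trusted supplier")
```

The override log is what turns monitoring into a two-way conversation: the human explains the exception instead of silently fighting the system.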

Any monitoring service risks feeling intrusive. Users’ privacy must be protected, or they will trust neither the system nor the management team’s intentions. “A wall ought to separate the engineering team from management.”

AI systems must be consistent across the organisation, employing the same standards throughout. Build trust into the design of your AI by including service users in its development. “Engage [staff] as experts to define the data that will be used and to determine ground truth; familiarise them with models during development; and provide training and instruction as those models are deployed.”


Step three: AI as coach

“The only way to discover strengths and opportunities for improvement is through a careful analysis of key decisions and actions.” Manual staff appraisals are laborious and bias-prone and, because managers administer feedback at a time and in a manner of their choosing, staff often feel disempowered. AI can do better.

At investment firm MBAM, a data analytics system captures decisions at an individual and a company-wide level. Not only does this reveal differences in risk aversion between portfolio managers (PMs), it also “provides personalised feedback that highlights behavioural changes over time, suggesting how to improve decisions”.

AI gives staff the choice of when and how to receive feedback, and provides autonomy in terms of how PMs decide to incorporate machine-generated analysis into future investment decisions. PMs also provide feedback, explaining why their decisions differed from those anticipated by the software, helping to train the technology to become more effective.

When designing an AI system that performs a coaching role, it’s vital to include staff in its design. For it to work, make sure its rationale is transparent, and that you deploy it in a way that enhances rather than degrades employees’ ability to think for themselves.

At MBAM, the firm’s leadership believes its AI program is a ‘trading enhancement’ that “is becoming a core differentiator that both helps develop portfolio managers and makes the organisation more attractive”.


Step four: AI as teammate

“External tools and instruments can, under the right conditions, play a role in cognitive processing and create what is known as a coupled system.” Imagine a purchasing manager who, with one click, can see what “a customized collective of experts” might pay for a good or service. At this point, AI evolves from a coach into a teammate.

This is the next phase of AI development. The technology is already there but, for employees to relate to AI as a teammate, they must trust it. Trust is based on understanding a person’s values, desires, and intentions, and believing they have your best interests at heart. AI must be accountable for its decisions and explain how it reached them.

The term ‘explanation’ is difficult to define, and identifying what makes a ‘good explanation’ is harder still. There are three approaches to this process.

  • A counterfactual explanation. Shows what change to the input data would have produced the opposite decision.
  • A logical explanation. A decision is reached because the data satisfies a number of clearly defined criteria.
  • A relative explanation. A case-based approach – a decision is made because the data matches that used to reach previous decisions.
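A counterfactual explanation, for instance, can be sketched as a search for a small change to the input that flips the decision. The loan-style rule, features, and thresholds below are invented for illustration and stand in for an opaque model:

```python
# Hypothetical sketch: find a single-feature change that flips a
# decision, and report it as a counterfactual explanation.
def decide(applicant: dict) -> bool:
    """Toy decision rule standing in for an opaque model."""
    return applicant["income"] >= 40_000 and applicant["debts"] <= 10_000

def counterfactual(applicant: dict) -> str:
    """Adjust one feature at a time until the decision flips."""
    original = decide(applicant)
    for feature, step in (("income", 1_000), ("debts", -1_000)):
        trial = dict(applicant)
        for _ in range(100):  # bounded search
            trial[feature] += step
            if decide(trial) != original:
                return (f"Decision would change if {feature} were "
                        f"{trial[feature]} instead of {applicant[feature]}")
    return "No single-feature counterfactual found"

explanation = counterfactual({"income": 35_000, "debts": 8_000})
```

A logical explanation would instead list the criteria the data satisfied, and a relative explanation would point to matching past cases; the counterfactual form answers the question rejected applicants actually ask: “what would have had to be different?”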

Identifying which form of explanation is best suited to securing positive human engagement is the subject of ongoing research.

At a minimum, AI that staff can trust must be transparent and must safeguard autonomy and privacy. To ensure positive engagement with this new technology, and to reap its rewards, you should go further. Promote AI’s potential to free your employees to do more interesting work; involve staff in the design of your AI program; and take a mindful, staged approach to implementation, first as assistant, then monitor, next coach, and finally as teammate.

Source Article: A Better Way to Onboard AI
Author(s): Boris Babic, Daniel L. Chen, Theodoros Evgeniou and Anne-Laure Fayard
Publisher: Harvard Business Review