Gearing up for ethical and responsible use of AI
A new study shows that business leaders are taking steps to ensure responsible use of artificial intelligence (AI) within their organisations. Most AI adopters – which now account for 72% of organisations globally – conduct ethics training for their technologists (70%) and have ethics committees in place to review the use of AI (63%). These practices are even more prevalent in the UK, where 80% of British companies say they conduct ethics training.
AI leaders – organisations rating their deployment of AI ‘successful’ or ‘highly successful’ – also take the lead on responsible AI efforts: almost all (92%) train their technologists in ethics, compared with 48% of other AI adopters – a figure that rises to 71% in the UK.
The findings are based on a global survey of 305 business leaders, more than half of them chief information officers, chief technology officers, and chief analytics officers. The study, ‘AI Momentum, Maturity and Models for Success,’ was commissioned by SAS, Accenture Applied Intelligence and Intel, and conducted by Forbes Insights in July 2018.
AI now has a real impact on people’s lives, which highlights the importance of having a strong ethical framework surrounding its use, according to the report.
“Organisations have begun addressing concerns and aberrations that AI has been known to cause, such as biased and unfair treatment of people,” said Rumman Chowdhury, Responsible AI Lead at Accenture Applied Intelligence. “These are positive steps; however, organisations need to move beyond directional AI ethics codes that are in the spirit of the Hippocratic Oath to ‘do no harm’.”
Ray Eitel-Porter, head of Accenture Applied Intelligence UK, continued: “Businesses need to think about how they can turn theory into practice. They can do this through usage and technical guidelines enshrined in a robust governance process that ensures AI is transparent, explainable, and accountable. It’s been said that the UK has an opportunity to lead the world in AI ethics, and creating these guidelines for businesses and developers alike will help us to reach that pinnacle and avoid any unintended consequences of the technology. We are proud to be working with our clients in this area and helping to shape the public debate.”

AI leaders also recognise the strong connection between analytics and their AI success. Among these leaders, 79% report that analytics plays a major or central role in their organisation’s AI efforts, compared with only 14% of those who have not yet benefited from their use of AI.
“Those who have deployed AI recognise that success in AI is success in analytics,” said Oliver Schabenberger, Chief Operating Officer and Chief Technology Officer at SAS. “For them, analytics has achieved a central role in AI.”
AI oversight is not optional
Despite popular messages suggesting AI operates independently of human intervention, the research shows that AI leaders recognise that oversight is not optional for these technologies. Nearly three-quarters (74%) of AI leaders reported careful oversight with at least weekly review or evaluation of outcomes (less successful AI adopters: 33%). Additionally, 43% of AI leaders shared that their organisation has a process for augmenting or overriding results deemed questionable during review (less successful AI adopters: 28%). The behaviour of UK companies is consistent with the global trends in these areas.
Still, the report states that oversight processes have a long way to go before they catch up with advances in AI technology.
“The ability to understand how AI makes decisions builds trust and enables effective human oversight," said Yinyin Liu, head of data science for Intel AI Products Group. "For developers and customers deploying AI, algorithm transparency and accountability, as well as having AI systems signal that they are not human, will go a long way toward developing the trust needed for widespread adoption.”
It stands to reason that companies are taking steps toward ethical AI and ensuring AI oversight because they know that faulty AI output can cause repercussions. Of the organisations that have either already deployed AI or are planning to do so, 60% stated that they are concerned about the impact of AI-driven decisions on customer engagement – for example, that their actions will not show enough empathy or customers will trust them less.
Other key findings from the survey include:
- Overall, 72% of organisations globally are now using AI in one or more business areas.
- More than half (51%) of AI adopters indicated their deployment of AI has been a real success – citing more accurate forecasting and decision-making, higher success at acquiring customers, and increased organisational productivity as the primary benefits.
- Nearly half (46%) of AI adopters overall said their organisation has fully deployed AI, either in one or multiple use cases.
- Respondents outside of the C-suite were more likely to see the impact of AI positively: More than half (55%) of non-C-level executives say their AI efforts have been ‘successful’ or ‘very successful.’ Only 38% of the C-suite reported the same.
- Many organisations see an advantage for their workforce by way of elevated roles. Sixty-four percent strongly or completely agree they are already seeing the effects, as employees focus on more strategic tasks rather than operational ones, thanks to AI.
- However, nearly 20% identify “resistance from employees due to concerns about job security” as a challenge to their AI efforts. Plus, 57% agree or strongly agree with the statement, “We are concerned about the impact of AI on employee relations (employees might feel threatened or overstrained).”
“As with any new technology that’s quickly gaining traction, there will be challenges to overcome,” said Ross Gagnon, research director at Forbes Insights. “But the opportunities AI presents are seemingly endless, from operational efficiencies to increased productivity and revenue. The question executives should be asking themselves is not whether to deploy AI, but how quickly.”