A new report has found that a lack of urgency around responsible Artificial Intelligence (AI) use is putting companies at risk.
The report by FICO and Corinium found that almost two-thirds (65%) of respondents’ companies can’t explain how specific AI model decisions or predictions are made, while 73% have struggled to get executive support for prioritising AI ethics and responsible AI practices.
This lack of awareness of how AI is being used, and whether it is being used responsibly, is concerning: the study found that 39% of board members and 33% of executive teams have an incomplete understanding of AI ethics.
While compliance staff (80%) and IT and data analytics teams (70%) have the highest awareness of AI ethics and responsible AI, understanding across organisations remains patchy, making it difficult to build the internal support needed to establish responsible AI practices.
Scott Zoldi, Chief Analytics Officer at FICO, said: “Over the past 15 months, more and more businesses have been investing in AI tools, but have not elevated the importance of AI governance and responsible AI to the boardroom level.”
“Organisations are increasingly leveraging AI to automate key processes that – in some cases – are making life-altering decisions for their customers and stakeholders. Senior leadership and boards must understand and enforce auditable, immutable AI model governance and product model monitoring to ensure that the decisions are accountable, fair, transparent, and responsible.”
The study found that almost half (49%) of respondents report an increase in resources allocated to AI projects over the past 12 months, ahead of team productivity (46%) and the predictive power of AI models (41%). By contrast, only 39% have prioritised increased resources for AI governance during model development, and just 28% have prioritised ongoing AI model monitoring and maintenance.
The study showed that there is no consensus among executives about what a company’s responsibilities should be when it comes to AI.
The majority of respondents (55%) agree that AI systems for data ingestion must meet basic ethical standards and that systems used for back-office operations must also be explainable. But this may partly reflect the challenges of getting staff to use new technologies, as much as wider ethical considerations.
More than two-fifths (43%) of respondents say they have no responsibilities beyond meeting regulatory compliance to ethically manage AI systems whose decisions may indirectly affect people’s livelihoods, such as audience segmentation models, facial recognition models, and recommendation systems.
Cortnie Abercrombie, Founder and CEO of AI Truth, said: “AI will only become more pervasive within the digital economy as enterprises integrate it at the operational level across their businesses.”
“Key stakeholders, such as senior decision makers, board members and customers, need to have a clear understanding of how AI is being used within their business, the potential risks involved, and the systems put in place to help govern and monitor it. AI developers can play a major role in helping educate key stakeholders by inviting them into the vetting process of AI models.”