The double-edged sword of applying AI to compliance


By George Dragatsis, CTO & Director of Technical Sales ANZ, Hitachi Vantara
Wednesday, 13 September, 2023



At its heart, compliance is a data problem.

It’s also becoming a problem for many more Australian organisations. As they become more data-driven, they are exposed to greater risks and brought into closer contact with security, data handling, storage and privacy compliance regimes. These regimes, in turn, are evolving to meet government and community expectations.

The net result is that more organisations are becoming regulated entities with intensive compliance requirements, and that shift will continue over the next few years.

Given how resource-intensive it is to understand large volumes of data, artificial intelligence (AI) has the potential to significantly improve compliance outcomes for organisations: for example, by checking the extent to which data collection and storage meets various rules, regulations and internal policies, or by streamlining compliance reporting and auditability.

When help is also a hindrance

But as helpful as AI can be, it may also hinder compliance, particularly where the AI itself is not documented well enough to vouch for the accuracy of its output, or where safeguards for its use are inadequate.

It is this juxtaposition that Australian organisations in regulated industries are now having to confront.

To an extent, this kind of conversation has already played out among regulated entities.

Automation and compliance already go hand-in-hand. Most organisations run automated checks or generate automated reports to meet compliance requirements, because the data volumes involved make it impossible to perform these tasks manually.

But these kinds of automated systems aren’t foolproof. In Australia in recent years, we’ve seen banks and casino operators penalised hundreds of millions of dollars — over a billion dollars in one case — for operating automated systems that could not demonstrably examine the large volumes of data involved and correctly identify and report compliance issues and risks.

AI holds promise as a useful augmentation of, or additional layer on top of, existing compliance checking systems.

On paper, AI algorithms are well suited to handling the deluge of data seen by regulated entities, regardless of what state that data is in: structured, semi-structured or unstructured. AI can also classify data automatically, making it more discoverable and understandable.

Data management and AI systems can be used to shine a spotlight on data and, in particular, to identify compliance risks in the information a dataset contains. For example, personally identifiable or sensitive health information may be embedded or buried in data fields. Without AI to parse the dataset and flag these sensitivities, regulated entities may have to leave the identification and cleanup task to data stewards to perform manually, or to script searches themselves, as in the sketch below. Neither approach scales as well as an AI algorithm can.
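To make that contrast concrete, the kind of scripted search a data steward might otherwise write could look something like the minimal sketch below. The field names, patterns and sample records are hypothetical, and the hand-written rules stand in for what an AI classifier would learn; none of it refers to any particular product.

```python
import re

# Hypothetical, hand-maintained patterns a data steward might script.
# An AI classifier would learn these signals rather than rely on fixed rules.
PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "au_mobile": re.compile(r"\b(?:\+?61|0)4\d{8}\b"),   # illustrative Australian mobile format
    "health_ref": re.compile(r"\bHRN-\d{6}\b"),          # made-up internal health record reference
}


def scan_records(records):
    """Flag free-text fields that appear to contain sensitive values."""
    findings = []
    for row_id, row in enumerate(records):
        for field_name, value in row.items():
            if not isinstance(value, str):
                continue
            for label, pattern in PATTERNS.items():
                if pattern.search(value):
                    findings.append({"row": row_id, "field": field_name, "type": label})
    return findings


if __name__ == "__main__":
    sample = [
        {"customer_notes": "Emailed jane@example.com about claim HRN-123456"},
        {"customer_notes": "No follow-up required"},
    ]
    for finding in scan_records(sample):
        print(finding)
```

Rules like these are brittle and need constant maintenance, which is exactly why they do not scale. An AI-based classifier replaces the pattern list with learned detection while keeping the same auditable output of which rows and fields were flagged, and why.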

The AI will still need some human-in-the-loop oversight, but the amount required is likely to decrease progressively as the AI gets better at detection and at generating automated reports that meet compliance needs.

Defining appropriate AI

Of course, to be comfortable that the AI is picking up on compliance risks with appropriate accuracy, there need to be some checks and balances in place.

The first of these is understanding how the AI works: how it takes business rules and compliance regulations and translates them into the activity it performs on behalf of the organisation.

A recurrent pattern in AI conversations at present is that technology leaders remain unclear about how to treat AI. Many are not evaluating AI models to any significant degree, running them as a trusted ‘black box’ without doing enough due diligence to confirm that trust is well placed. They’re not doing this out of ignorance — it’s more a case of not knowing what they don’t know. The current generation of AI is relatively nascent, and many organisations are still trying to understand how it truly applies in their specific business contexts.

Involving experts in the design and ongoing evolution of AI is crucial. These experts may be internal data stewards, who have the ‘tribal knowledge’ to determine whether the AI is producing accurate recommendations, red flags or reports. Their input is vital because it is the most likely path to identifying any biases present in the AI algorithms — the AI itself is unlikely to be good at reflecting on its own biases or weaknesses, whether those stem from its design or from its training.

Experts involved in the AI’s learning loop can also help to train the algorithm over time, so that its accuracy improves and the risks of using such automation are further reduced. The same experts may ultimately be best placed to assume, or be assigned, specific responsibility for the AI used to manage compliance in their respective areas of domain expertise. This can also help to engender trust in the AI, while ensuring its design and operation are well documented and explainable, should regulators or auditors seek that assurance.
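As a rough, hypothetical picture of that learning loop, assuming the compliance AI is a supervised classifier, steward verdicts on flagged items can be captured as labels and periodically folded back into retraining. The class and field names below are illustrative only.

```python
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Flag:
    record_id: str
    reason: str                       # e.g. "possible health identifier in free-text field"
    confirmed: Optional[bool] = None  # steward verdict: True, False, or not yet reviewed


@dataclass
class ReviewLoop:
    """Hypothetical human-in-the-loop cycle: the AI flags, a steward reviews,
    and the resulting labels feed the next round of training."""
    pending: List[Flag] = field(default_factory=list)
    labelled: List[Flag] = field(default_factory=list)

    def steward_review(self, flag: Flag, is_real_risk: bool) -> None:
        # The steward's 'tribal knowledge' becomes an explicit, auditable label.
        flag.confirmed = is_real_risk
        self.labelled.append(flag)

    def training_batch(self) -> List[Flag]:
        # Rejected flags are as useful as confirmed ones: they expose the
        # model's false positives and potential biases.
        return list(self.labelled)
```

Keeping this kind of record of who reviewed what, and why, is also part of what makes the AI’s operation documentable and explainable when regulators or auditors ask.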

