Setting the stage for smarter, more transparent AI


By Keith Payne, Vice President APAC, Nintex
Friday, 11 October, 2024



Excitement for AI is palpable, but concerns over issues like privacy, security and bias are slowing down adoption. This was the recurring theme of the recent Senate Select Committee hearings on Adopting Artificial Intelligence, with the government’s Interim Response to Safe and Responsible AI highlighting this dilemma.

The Senate inquiry, along with the recent release of the government’s proposed mandatory AI guardrails, hints that future AI regulation will likely zero in on risk management and compliance. This makes sense: like most technology, AI that is left unchecked or poorly governed can deliver biased or incorrect results, with serious consequences for organisations and the people they serve.

This isn’t just an Australian issue, either. The World Economic Forum’s ‘AI Governance Alliance: Briefing Paper Series’ highlights this challenge, calling for AI systems to be more transparent and accountable. Globally, governments are wrestling with the same questions. The EU, for instance, has taken steps to address these risks with its Artificial Intelligence Act coming into force, which emphasises transparency and accountability. Meanwhile, in the US, debates over how to balance innovation with safety continue.

Australia is likely to follow suit, with a strong focus on data quality and risk management.

As a result, many companies are waiting for more legislative and regulatory clarity before they jump into AI. But instead of waiting and being caught flat-footed, businesses should use automation today to improve the processes these systems are built on (and rely on), so they can hit the ground running with AI tomorrow.

Clean data, clean AI

In recent months, the debate has been defined by the idea of ‘responsible AI’. It’s not just about the technology: AI is only as good as the data and processes that fuel it. If the data is flawed, the results will be, too.

However, legislative questions are not the only obstacle to widespread adoption. Many technical challenges remain. This is where we start hearing terms like “AI model collapse”: the degradation that sets in when an AI system is trained on flawed data, or on the outputs of other models, and its responses spiral into bias or inaccuracy. It’s becoming a real concern for companies rushing to implement AI without fully understanding how these systems make decisions or, more importantly, how reliable their data is.
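
To see how quickly this can happen, consider a toy sketch in Python. This is our own illustration under simplified assumptions, not any particular product’s pipeline: a model that is repeatedly refitted on its own synthetic output gradually loses the spread of the real data it started from.

```python
import numpy as np

# Toy illustration of model collapse (a hedged sketch, not a production
# system): repeatedly fit a Gaussian to samples drawn from the previous
# generation's fitted model. Each refit loses a little of the true
# spread, and the loss compounds across generations.
rng = np.random.default_rng(0)
n_samples, n_generations, n_runs = 20, 40, 500

final_sigmas = []
for _ in range(n_runs):
    mu, sigma = 0.0, 1.0  # the "real" data distribution
    for _ in range(n_generations):
        synthetic = rng.normal(mu, sigma, n_samples)   # train on model output
        mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data
    final_sigmas.append(sigma)

# The true spread is 1.0; averaged over many runs, the fitted spread has
# shrunk substantially after 40 generations of learning from itself.
print(f"mean fitted sigma after {n_generations} generations: "
      f"{np.mean(final_sigmas):.2f}")
```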

At the heart of this dialogue is the realisation that AI isn’t just about improving outcomes: it’s also about avoiding failure. This is especially true in the long term as the scope of information fed to these models widens.

We’ve seen the impacts of this in the real world. Early chatbot forays from the world’s largest tech companies quickly descended into misinformation and bias-riddled responses. These past lessons underscore a critical point: without careful attention to the data and processes behind AI systems, failure is not just possible, but inevitable.

Laying foundations

In response, more developed technology, like automation, is being seen in a new light.

While automation’s end goal is streamlining processes, getting there requires discovering, cleaning and implementing governance over an organisation’s data. With this in mind, automation can help pave the way for future AI projects by laying a deterministic governance foundation beneath AI capabilities.

Discovering all of an organisation’s data requires breaking down the silos between myriad data sources. This is essential to both automation and AI, as siloed data means any algorithm is making decisions from an incomplete view of the data.

Once the silos have been removed, implementing the proper controls and governance over data is not only much simpler; organisations can also be confident that no sources have been missed and that the right sources are informing their AI models.
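
As a rough sketch of what those two steps look like in practice, consider the Python example below. The CRM and billing silos, the record shape and the field names are all hypothetical, used purely for illustration: the point is that consolidating first, then applying a deterministic quality gate, means gaps surface before any model sees the data.

```python
from dataclasses import dataclass

@dataclass
class Record:
    source: str
    customer_id: str
    email: str | None  # None models a gap left by a silo

# Hypothetical silos holding overlapping customer data.
crm = [Record("crm", "c-001", "ana@example.com")]
billing = [
    Record("billing", "c-001", None),
    Record("billing", "c-002", "raj@example.com"),
    Record("billing", "c-003", None),
]

def consolidate(*silos: list[Record]) -> dict[str, list[Record]]:
    """Merge every silo into a single view keyed by customer ID."""
    merged: dict[str, list[Record]] = {}
    for silo in silos:
        for rec in silo:
            merged.setdefault(rec.customer_id, []).append(rec)
    return merged

def quality_gate(view: dict[str, list[Record]]) -> list[str]:
    """Deterministic governance check: flag customers with no usable email."""
    return [cid for cid, recs in view.items() if not any(r.email for r in recs)]

view = consolidate(crm, billing)
print("customers seen:", sorted(view))            # no source has been missed
print("flagged for review:", quality_gate(view))  # ['c-003']
```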

When it comes to concerns around governance and risk, organisations that already have automation underneath AI applications can benefit from an improved audit trail and overall traceability of outcomes. With these enhanced capabilities, organisations can have more confidence when applying AI across their end-to-end processes.
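
A minimal sketch of what such an audit trail can look like, assuming a generic automated workflow (the decorator, file name and decision logic below are illustrative, not a Nintex API):

```python
import json
import time
import uuid

AUDIT_LOG = "decisions.log"  # stands in for an append-only audit store

def audited(step_name):
    """Wrap an automated step so every input and output is logged."""
    def wrap(fn):
        def inner(payload):
            result = fn(payload)
            entry = {
                "id": str(uuid.uuid4()),
                "step": step_name,
                "ts": time.time(),
                "input": payload,
                "output": result,
            }
            with open(AUDIT_LOG, "a") as f:
                f.write(json.dumps(entry) + "\n")  # one traceable record per run
            return result
        return inner
    return wrap

@audited("credit_check")
def credit_check(payload):
    # Placeholder decision logic for the sketch.
    return {"approved": payload.get("score", 0) >= 700}

print(credit_check({"applicant": "a-42", "score": 715}))
```

Every automated decision leaves a replayable record, so when an AI layer is added on top, its outcomes can be traced back to concrete inputs.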

The other benefit of pursuing automation before AI is that it improves processes now, preventing inefficiencies from creeping in later.

AI helps to scale processes. But if those processes are inefficient, those inefficiencies will be magnified. Laying the groundwork by first optimising business processes means future AI deployments can live up to their potential.

That’s why more attention needs to be focused not just on speeding things up, but on doing so safely. If we can standardise and automate how data is collected and processed, along with the processes supporting it, businesses will be in a better position to peer inside the AI black box. This is essential to ensuring that when AI does come into play, it’s working responsibly, with reliable information, and its responses can be trusted.
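
One concrete way to standardise collection is to validate every incoming record against a single agreed schema before it is stored or fed to a model. The field names in this short Python sketch are assumptions for illustration:

```python
# One agreed schema for incoming records (fields here are hypothetical).
REQUIRED = {"customer_id": str, "amount": float, "currency": str}

def validate(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record conforms."""
    errors = []
    for field, ftype in REQUIRED.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], ftype):
            errors.append(f"{field} should be {ftype.__name__}")
    return errors

good = {"customer_id": "c-001", "amount": 99.5, "currency": "AUD"}
bad = {"customer_id": "c-002", "amount": "99.5"}
print(validate(good))  # [] -> safe to process
print(validate(bad))   # flags the string amount and the missing currency
```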

