Lacklustre governance renders AI nothing more than biased bots

Boomi

By David Irecki*
Tuesday, 18 March, 2025



Regulatory discussions concerning AI are well underway, but despite attention from the government and private sector, governance efforts are struggling to keep pace with the speed of AI adoption.

Right now, 60% of Australian organisations are expanding their AI capabilities, according to CSIRO. At the same time, governance gaps, poor data quality and regulatory uncertainty are creating an environment where AI failures aren’t just possible — they’re inevitable.

A single bad output, a biased algorithm, or a security breach could bring a company’s AI ambitions crashing down overnight. The hard truth is AI isn’t just a tech upgrade — it’s a liability if companies don’t get serious about governance.

The Australian Government knows the risks, but regulation is still catching up. Minister Ed Husic has confirmed that the government is in the final stages of shaping the nation's regulatory response to AI, but companies waiting for a clear rulebook are setting themselves up for failure.

Right now, AI regulation in Australia is a patchwork of voluntary guidelines and best-effort policies. Meanwhile, companies are deploying AI systems at scale with little accountability.

The reality is that compliance won’t be optional for much longer. The government’s working away on its proposed AI guardrails for high-risk settings, mirroring global regulatory moves like the EU AI Act. But waiting for the ink to dry on legislation before taking action is like ignoring a fire alarm and hoping the flames go out on their own.

One of the biggest landmines in AI governance is that AI doesn’t forget. When sensitive data enters an AI system, there’s no magic eraser. If AI scrapes it, it keeps it. This means that everything from personally identifiable information (PII) to confidential company data can live inside an AI model indefinitely.

This makes the ‘right to be forgotten’ almost impossible in AI applications. Companies might train employees to avoid feeding sensitive data into AI models, but hoping people remember to follow protocol isn’t a governance strategy — it’s a gamble. AI data governance has to be baked into the system, not left to human error.

A 2024 audit by the Australian National Audit Office (ANAO) found 56 government departments and agencies were using artificial intelligence, yet fewer than 65% had established internal policies governing its use.

The report further highlighted that the lack of governance frameworks poses significant risks, including privacy concerns, data quality issues and potential security breaches, and emphasised that without proper oversight, the rapid adoption of AI could lead to unintended consequences.

Addressing risks like these isn't a waiting game; it requires proactive governance.

DeepSeek bans are a warning

If companies want an indication of how far-reaching AI governance could go, they don't need to look far. The widespread government ban on the Chinese AI platform DeepSeek is a flashing red light.

NSW, Queensland, South Australia, Western Australia and the ACT have all banned DeepSeek on government devices following a mandatory direction from the Secretary of the Department of Home Affairs, due to serious concerns around data sovereignty and security risks.

If governments are banning AI tools over governance concerns, how long before companies face the same scrutiny? What’s happening with DeepSeek is proof that AI isn’t just a productivity tool — it’s a regulatory and reputational minefield. If companies don’t have clear AI governance frameworks in place, they’re playing Russian roulette with their compliance status.

Companies serious about scaling AI safely need to get their governance house in order, and that starts with clear AI accountability frameworks that make AI explainable and auditable from day one.

Companies also need tight data governance to ensure AI models aren't fed junk data that leads to biased, misleading or outright dangerous outputs. 'Data liquidity' also ties in here — the ability to seamlessly access and analyse data from diverse sources. Too many companies are feeding AI systems outdated or fragmented data, often trapped in legacy systems that compromise accuracy and create hidden risks.

Most importantly, however, companies need risk management strategies that include human oversight and transparency — because AI decisions don’t exist in a vacuum.

AI governance isn't an abstract policy discussion; it's a critical business issue. Without guardrails, AI is just bias at scale, bad data in motion, and a liability waiting to happen. Companies that fail to act won't just struggle to comply with future regulations: they'll be dealing with the fallout of AI failures before those rules even arrive.

*David Irecki is CTO for Asia-Pacific and Japan at Boomi, based in Sydney.

Top image credit: iStock.com/Sansert Sangsakawrat
