Who should take the lead in responsible AI?


By Tony Butler*
Wednesday, 09 April, 2025


The AI arms race is in full swing, but one question remains unanswered: who’s responsible for making sure it doesn’t go off the rails? Should governments set the rules? Should AI labs embed responsible guardrails from day one? Or is it up to businesses deploying these systems to ensure they don’t unleash chaos?

With President Trump stripping back AI safety regulations, the debate has reignited: what does this mean for global AI policy, and how will it shape Australian regulations?

At first glance, the burden appears to be shifting to the private sector. But there’s a risk: without strong guidelines, companies will race to outpace competitors by cutting ethical corners. The truth, however, is that businesses that don’t embed responsible AI today risk building models that aren’t just legally indefensible, but permanently flawed. Once AI is deployed, there’s no ‘patch’ to undo poor ethical design.

The DeepSeek debacle

If the AI industry needed a reality check, DeepSeek delivered one in January. The Chinese AI disruptor took the world by storm, racking up over two million downloads in 48 hours — but beneath the hype was a security disaster waiting to happen. A data breach exposed over a million user records, API keys, operational metadata and plaintext chat logs, all left completely unprotected. No authentication, no safeguards — just an open database ripe for exploitation.
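To make the failure concrete: the breach came down to services that answered requests carrying no credentials at all. The Python sketch below shows the sort of basic probe a security review might run before trusting a vendor; the URLs, and the assumption that the services speak plain HTTP, are hypothetical placeholders for illustration, not details of DeepSeek’s actual infrastructure.

```python
# A minimal sketch (not DeepSeek's actual setup) of the kind of basic
# check that was evidently missing: does an endpoint answer requests
# that carry no credentials at all? The URLs below are hypothetical.
import requests

ENDPOINTS = [
    "https://api.example-ai-vendor.com/v1/chat-logs",  # hypothetical
    "https://db.example-ai-vendor.com:9000/",          # hypothetical
]

def check_anonymous_access(url: str) -> str:
    """Request the URL with no credentials and report how it responds."""
    try:
        resp = requests.get(url, timeout=5)
    except requests.RequestException as exc:
        return f"{url}: unreachable ({type(exc).__name__})"
    if resp.status_code in (401, 403):
        return f"{url}: OK, rejects anonymous access (HTTP {resp.status_code})"
    if resp.status_code == 200:
        return f"{url}: EXPOSED, returned content with no credentials"
    return f"{url}: inconclusive (HTTP {resp.status_code})"

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        print(check_anonymous_access(endpoint))
```

Anything that returns data to an anonymous request is the digital equivalent of an unlocked filing cabinet, and that is precisely what was found here.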

This wasn’t just a blip. DeepSeek’s rise showed how easily an AI model can upend the market, but the breach revealed something even more alarming: in the rush to push AI out the door, security and governance took a back seat. Businesses relying on third-party AI vendors should be asking themselves: how much do we really know about the models we’re using? Because when something goes wrong, it won’t be regulators picking up the pieces — it’ll be businesses and their customers.

That’s because pulling back on AI regulations doesn’t eliminate risk — it amplifies it. AI models trained on opaque datasets can leak sensitive data, triggering privacy violations and compliance nightmares. A chatbot revealing confidential medical records or legal documents could land a company in hot water overnight, and legal disputes over AI copyright infringement are already underway.

AI failures won’t be minor glitches; they’ll be corporate crises. A biased hiring algorithm, a model that leaks customer records or an AI system spreading false financial advice could destroy trust overnight. These failures will go viral in minutes, and the reputational damage will be swift and unforgiving.

Don’t wait for government intervention

The Australian Government has flagged that a risk-based model for regulating AI will be announced soon, but developing policy takes time. Governments tend to move at a glacial pace on regulation, especially of emerging technology, which makes the odds of a unified global AI framework slim.

Geopolitical divides and rising nationalism are also making global AI regulation and alignment on AI ethics even harder. The US, the undisputed leader in AI at the moment, is doubling down on a hands-off approach, stripping back safety regulations to maintain its competitive edge. Meanwhile, China is charging ahead with state-backed AI initiatives, prioritising speed over transparency. This creates a dilemma for other regions like the EU and Australia: do they follow the US lead, easing regulations to stay competitive, or take a more responsible approach and risk falling behind?

While extreme use cases might see regulation introduced first, broader enforcement will be slow. And with AI shaping economic power, the pressure to keep pace with the US and China may force governments to opt for lighter regulation — whether they want to or not.

The bottom line? Businesses waiting for governments to provide clarity are waiting for a train that’s set to arrive very late. Instead of playing defence, they must take a proactive role in defining what responsible AI looks like within their own organisations. Selecting impactful use cases, investing strategically in the right AI architectures and implementing robust governance controls — all anchored in a principled AI governance framework — are the cornerstones of business-ready AI.

What businesses must do right now

AI failures are inevitable, so embedding responsible AI principles now — before disaster strikes — is the only way to stay ahead.

Businesses need complete visibility into their AI supply chains, understanding exactly what’s in their models and datasets before deployment. Security, ethics and compliance can’t be afterthoughts; they must be woven in from day one. If AI is making business-critical decisions, it must be explainable, accountable and ready for regulation, even if the rules aren’t in place yet.
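As an illustration of what ‘visibility from day one’ can look like in practice, the Python sketch below gates deployment on a simple model manifest. The manifest fields and checks are hypothetical, one possible in-house convention rather than any industry standard, but the principle is the one above: a model doesn’t ship until its provenance, ownership and review status are recorded and verified.

```python
# A minimal sketch of a pre-deployment governance gate, assuming a
# hypothetical in-house manifest format. Field names are illustrative.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class ModelManifest:
    model_name: str
    vendor: str                        # who built or supplied the model
    training_data_sources: list       # known dataset provenance
    data_contains_pii: bool           # flagged during data review
    risk_review_signed_off: bool      # governance board approval
    explainability_doc: Optional[str] # link to model documentation
    issues: list = field(default_factory=list)

def deployment_gate(m: ModelManifest) -> bool:
    """Return True only if every governance check passes; record failures."""
    if not m.training_data_sources:
        m.issues.append("training data provenance unknown")
    if m.data_contains_pii:
        m.issues.append("PII found in training data; remediation required")
    if not m.risk_review_signed_off:
        m.issues.append("risk review not signed off")
    if m.explainability_doc is None:
        m.issues.append("no explainability documentation")
    return not m.issues

# Example: a third-party model with undocumented training data is blocked.
manifest = ModelManifest(
    model_name="support-chatbot-v2",  # hypothetical model
    vendor="example-vendor",          # hypothetical vendor
    training_data_sources=[],         # nothing documented
    data_contains_pii=False,
    risk_review_signed_off=False,
    explainability_doc=None,
)

if not deployment_gate(manifest):
    print("Deployment blocked:", "; ".join(manifest.issues))
```

A gate like this is deliberately boring; its value is that the hard questions get asked before deployment rather than after a breach.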

And when things go wrong — when misinformation spreads, private data leaks or bias creeps in — companies must respond swiftly to contain the fallout, just as they would in a cybersecurity breach. Finally, staying ahead of regulatory trends isn’t optional. AI laws may be slow to arrive, but once they land, enforcement will move quickly, and businesses caught unprepared will pay the price.

AI is the future, but it’s also a minefield of legal, ethical and reputational risk. The companies that treat responsible AI as a necessity today will be the ones defining the industry tomorrow. Those that don’t? They’ll be the cautionary tales that fuel the next wave of regulations.

*Tony Butler is the Managing Director of data and analytics consultancy Decision Inc. Australia.



