Why embedding trust in AI is critical to its future


By Tony Butler*
Tuesday, 01 October, 2024



We’re now past the point of no return, and artificial intelligence (AI) is becoming ubiquitous, from consumer apps and devices to enterprise applications. But the ubiquity of AI has come much faster than most people’s ability to trust it — and there’s a risk we’re moving a little too quickly.

Take, for example, the sudden and ready availability of AI to the general public, including powerful capabilities to create content, images and videos. Soon, anyone will be able to make an AI image or video with nothing more than a prompt into the GenAI platform du jour.

The recent release of embedded AI in the Google Pixel’s Magic Editor is a good example. It lets someone generate extremely realistic, but ultimately fake, photos, and it now ships in a popular consumer device. What might seem a harmless, value-adding bit of fun has become anything but: within days it was clear that disturbing AI-generated images could be produced far too easily, with some calling the guardrails within the application “far too weak”.

The timing of the Australian Government’s announcement that it is considering an AI Act to impose mandatory guardrails on the use of AI is impeccable, if entirely coincidental. Similarly, California recently debated, and ultimately vetoed, SB 1047 (the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act) which, among other provisions, would have required developers of advanced AI models to adopt and follow defined safety protocols, and would have put the technology industry legally on the hook for the misuse of its AI models. Essentially, it was vetoed because the Californian powers-that-be felt the legislation was either too early or too rigid, and the limitations it placed on innovation too great.

Before too long, examples like the Pixel’s Magic Editor will accelerate mistrust of photos and images as truthful representations of what was actually captured. That same mistrust could, and likely will, extend to video and audio clips; deepfakes are already a concern among the public and governments alike, and GenAI remains somewhat prone to hallucinations: generating content that is factually incorrect.

The same holds true of businesses deploying AI applications: how sure will we be that something generated by AI is factual, or something submitted to us is authentic? As we continue on the current trajectory for AI, trust will become the most critical measure for businesses that implement AI — both internally for staff and externally for customers.

Embedding trust in enterprise-level AI

The maturity of regulation and frameworks to effectively manage AI is still catching up with the technology; even the proposed mandatory guardrails being considered by the government will take time to implement. However, trust needs to be embedded now, and it’s on the industry to get this right, because once trust is gone, we lose the benefits AI can bring.

The challenges of trusting enterprise-level AI may differ from those of trusting consumer-level AI, but they fall under the same banner, with the same ballpark issues to consider: data privacy, security, hallucinations and explainability cross the spectrum from consumer to enterprise. In business, hallucinations can lead to ill-informed decisions that do serious harm, while lax security measures can lead to serious breaches of proprietary, confidential IP.

If this happens, trust is gone. And if you doubt it, put it this way: if an AI application hallucinated and cost you money, would you really trust AI again?
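One practical way to reduce that particular failure mode is to refuse to surface answers the system cannot tie back to approved source material. The sketch below is a hypothetical Python illustration of that pattern; the function names and the naive keyword-overlap check are illustrative assumptions to show the shape of the idea, not a production-grade grounding test.

```python
def is_grounded(answer: str, sources: list[str], min_overlap: float = 0.8) -> bool:
    """Naive grounding check: most of the answer's key terms must appear in the sources."""
    answer_terms = {t.lower().strip(".,") for t in answer.split() if len(t) > 4}
    if not answer_terms:
        return False
    source_text = " ".join(sources).lower()
    supported = sum(1 for term in answer_terms if term in source_text)
    return supported / len(answer_terms) >= min_overlap


def respond(answer: str, sources: list[str]) -> str:
    """Only surface answers the system can support; otherwise escalate to a person."""
    if is_grounded(answer, sources):
        return answer
    return "I can't verify that from the approved documents, so it has been routed to a human reviewer."


# A claim the documents don't support is held back rather than presented as fact.
docs = ["Quarterly revenue for APAC was $4.2 million, up 8% year on year."]
print(respond("APAC revenue was $4.2 million last quarter.", docs))       # surfaced
print(respond("APAC revenue doubled to $9 million last quarter.", docs))  # escalated
```

The point is not the specific check but the posture: the application declines to present unverified content as fact, which is exactly the behaviour that preserves trust.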

Furthermore, when it comes to early deployments of AI in business, one issue quickly emerging for many is that, in the short term, there is not always demonstrable value in what has been deployed. Some in the business may see it as a simple gimmick and pay it little attention, or a vendor will add nothing more than a chatbot, call it GenAI and consider itself to have joined the AI hype wave.

This needs to shift. If we adopt the proper frameworks and treat AI as a serious investment, not merely a way to ‘tick the AI box’, we can unlock innovation and embed trust effectively.

What’s needed is structure and standards

Firstly, even for organisations looking to start small, it’s important to ensure appropriate structure is in place to drive and deliver innovation while managing risk.

We recommend forming an internal AI task force or committee, with technical, business, risk and executive representation, so that there are diverse voices in the room and a collaborative approach to development. This ensures that issues of trust are addressed by a broad range of internal stakeholders.

Next, the committee needs to identify the processes that can benefit from AI and run a risk and use case assessment, to ensure the effort put into each process will be worth it from a productivity and ROI perspective while maintaining an appropriate level of risk management.
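To make that assessment concrete, it can help to capture it in a simple scoring sheet the committee fills in together. The following is a minimal Python sketch of that idea; the criteria, weightings and thresholds are illustrative assumptions, not a prescribed methodology.

```python
from dataclasses import dataclass


@dataclass
class UseCase:
    """A candidate process for AI, scored by the committee (1 = low, 5 = high)."""
    name: str
    productivity_gain: int   # expected time or cost saving
    revenue_impact: int      # expected contribution to ROI
    data_sensitivity: int    # exposure of confidential or personal data
    error_impact: int        # business harm if the model hallucinates

    def value_score(self) -> float:
        # Illustrative weighting: productivity and ROI count equally.
        return (self.productivity_gain + self.revenue_impact) / 2

    def risk_score(self) -> float:
        # Illustrative weighting: data exposure and hallucination harm count equally.
        return (self.data_sensitivity + self.error_impact) / 2


def shortlist(use_cases: list[UseCase], min_value: float = 3.0, max_risk: float = 3.5) -> list[UseCase]:
    """Keep use cases whose expected value justifies their risk, highest value first."""
    kept = [u for u in use_cases if u.value_score() >= min_value and u.risk_score() <= max_risk]
    return sorted(kept, key=lambda u: u.value_score(), reverse=True)


if __name__ == "__main__":
    candidates = [
        UseCase("Invoice triage", productivity_gain=4, revenue_impact=3, data_sensitivity=2, error_impact=2),
        UseCase("Customer-facing chatbot", productivity_gain=3, revenue_impact=4, data_sensitivity=4, error_impact=4),
    ]
    for u in shortlist(candidates):
        print(f"{u.name}: value={u.value_score():.1f}, risk={u.risk_score():.1f}")
```

A spreadsheet does the same job; what matters is that value and risk are scored explicitly and recorded before any build begins.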

Thirdly, take an approach that encourages innovation: call it a ‘centre of excellence’, a place where experimentation and ideas can flow.

Then it’s time to implement standards. For one, if the strategy is for ongoing investment in AI, you need to consider building an effective AI management system and aligning it against a standard like ISO 42001.

Several standards and frameworks now exist to provide guidance on where to focus efforts. The NIST AI Risk Management Framework addresses risk management for AI, while IEEE’s Ethically Aligned Design provides guidance on ethical considerations in AI system design. New standards continue to evolve, and even the Australian Government’s proposed mandatory AI guardrails can provide some guidance for business in future, in a similar vein to how the Essential Eight has helped guide local businesses on cybersecurity best practice.
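To show what aligning against such standards can look like day to day, the sketch below models one entry in an internal AI system register. It is a hypothetical Python illustration; the fields are assumptions loosely inspired by the kind of documentation a management system expects, not an official schema from ISO 42001, NIST or IEEE.

```python
from dataclasses import dataclass, field
from datetime import date


@dataclass
class AISystemRecord:
    """One entry in an internal AI system register (illustrative fields only)."""
    name: str
    business_owner: str
    intended_purpose: str
    risk_rating: str                 # e.g. "low", "medium", "high" per internal policy
    data_classification: str         # e.g. "public", "internal", "confidential"
    human_oversight: str             # how outputs are reviewed before they are acted on
    next_review: date                # periodic review keeps the register from going stale
    known_limitations: list[str] = field(default_factory=list)


register = [
    AISystemRecord(
        name="Contract summarisation assistant",
        business_owner="Legal Operations",
        intended_purpose="Draft first-pass summaries of supplier contracts",
        risk_rating="medium",
        data_classification="confidential",
        human_oversight="A lawyer reviews every summary before it is circulated",
        next_review=date(2025, 4, 1),
        known_limitations=["May hallucinate clause numbers", "Not trained on NZ law"],
    ),
]

# Simple assurance check: flag records whose scheduled review date has passed.
overdue = [r.name for r in register if r.next_review < date.today()]
print("Overdue reviews:", overdue or "none")
```

Even this small amount of structure answers the questions an auditor, a regulator or a concerned customer will eventually ask: what is it, who owns it, what data does it touch, and who checks its output.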

Finally, training employees on using the developed application (and by training, I mean continuous coaching, assurance and management) will help embed trust in its outcomes. A formal AI literacy program is often overlooked, but it is becoming increasingly important in helping users understand AI’s capabilities, limitations, risks and issues.

It’s an iterative, collaborative process, but it’s the only way to work through the risks and ensure trust is embedded. If we trust the intention, we can iron out any kinks and realise the potential AI offers a business.

*Tony Butler is Managing Director of Decision Inc. Australia.

Top image credit: iStock.com/Funtap
