Navigating the challenges of AI and risk

By George Harb, ANZ Vice-President, OpenText
Wednesday, 17 July, 2024


There is no doubt that GenAI shows great promise, with organisations using it to predict market trends, detect fraud, personalise customer services and automate decision-making processes; however, its integration also introduces new risks. On 19 September, the Senate is expected to reconvene to discuss the opportunities and impacts of AI technologies in Australia, and these conversations are likely to shape future AI regulatory reforms. This means organisations can't sit on their hands and wait for change; they must, at the very least, start looking at implementation strategies. From ethical AI frameworks to risk identification, without the proactive groundwork now, many of the adoption challenges businesses face today will remain unsolved as the pressure to offer or integrate with AI grows.

The intersection of AI and financial risk

With more organisations looking to incorporate AI into their operations, the volume of sensitive data being processed and stored has skyrocketed. This makes businesses even more attractive targets for cyber attacks, and makes navigating Australia’s changing regulatory landscape more challenging and costly.

Collaborating with regulatory authorities and industry stakeholders is a clear way forward. This can be seen in the government’s establishment of a new Artificial Intelligence Expert Group, an industry representative body advising the government on how to ensure AI systems are safe; specific policy, however, is still in its early stages. Shifting regulatory conversations don’t mean organisations can’t be proactive, especially in the area of AI adoption risk identification. Assessing possible risks now helps businesses understand when, and for which services, it makes the most sense to engage with AI solutions.

Strategic risk identification

The first step for an organisation looking to leverage GenAI is to identify the strategic risks that it poses. This includes assessing the potential for data breaches, the integrity of AI-driven decisions and the reliability of AI systems. Understanding these risks allows organisations to develop effective strategies to safeguard high-value data.

Quantifying the financial impact of data risks is also crucial. Organisations can significantly mitigate risks by better understanding the nature of their data and implementing protective measures, particularly around GenAI. Here, effective discovery and protection techniques can drastically reduce disruptions and associated costs while also enhancing overall security.
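One common way to put a number on data risk is annualised loss expectancy (ALE), a standard risk-management formula: the value of an asset, multiplied by the fraction lost per incident, multiplied by the expected incidents per year. The sketch below shows this calculation; the scenarios, asset values, exposure factors and frequencies are hypothetical illustrations, not benchmarks.

```python
# Minimal sketch: annualised loss expectancy (ALE) for data-risk scenarios.
# All asset values, exposure factors and frequencies are hypothetical.

from dataclasses import dataclass

@dataclass
class RiskScenario:
    name: str
    asset_value: float      # value of the data asset at risk (AUD)
    exposure_factor: float  # fraction of value lost per incident (0-1)
    annual_rate: float      # expected incidents per year (ARO)

    @property
    def single_loss_expectancy(self) -> float:
        # SLE = asset value x exposure factor
        return self.asset_value * self.exposure_factor

    @property
    def annualised_loss_expectancy(self) -> float:
        # ALE = SLE x annual rate of occurrence
        return self.single_loss_expectancy * self.annual_rate

scenarios = [
    RiskScenario("Customer PII exposed via GenAI prompts", 2_000_000, 0.30, 0.10),
    RiskScenario("Copyright claim over model training data", 500_000, 1.00, 0.05),
]

# Rank scenarios by expected annual loss to prioritise protective measures
for s in sorted(scenarios, key=lambda s: s.annualised_loss_expectancy, reverse=True):
    print(f"{s.name}: ALE ~ ${s.annualised_loss_expectancy:,.0f}/year")
```

Even rough figures like these let an organisation rank its GenAI risks and weigh the cost of a protective measure against the expected loss it avoids.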

Ethical AI practices

Ethical AI practices are paramount to maintaining trust and integrity, a point made even more critical as rules and guidelines around AI evolve. Beyond compliance with regulatory bodies or industry associations, organisations should proactively develop or adopt ethical guidelines for the responsible use of AI. Given the volume of sensitive data being stored and processed today, organisations are increasingly expected to demonstrate the transparency of AI-driven decisions, holding both their teams and the models themselves accountable.

As organisations navigate challenges surrounding AI implementation, there are three concepts to keep in mind to ensure ethical practices are adhered to:

  • Does the AI model being used have a bias? This question must be answered, especially if the AI is performing a task previously done by humans, and a plan put in place to rectify any bias found (a minimal statistical check is sketched after this list).
  • Is the AI model being used leaking information? Information can leak to other places if precautions are not taken from the outset, such as limiting the data the AI can access (see the redaction sketch below).
  • Do you have rights to the data on which the AI model you’re using was trained? Organisations often import models that have already been trained, which opens up several issues, including legal ownership and copyright of the information.
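For the first question, a quick statistical check on a model's logged decisions can surface potential bias before deployment. Below is a minimal sketch using the demographic parity gap, one of several common fairness metrics; the decision log, column names and 5% tolerance are hypothetical, and a real bias audit should use multiple metrics.

```python
# Minimal sketch: checking AI-driven decisions for a demographic parity gap.
# The decision log, column names and 5% tolerance are hypothetical.

from collections import defaultdict

def favourable_rates(decisions: list[dict], group_key: str, outcome_key: str) -> dict:
    """Return the favourable-outcome rate for each group in the decision log."""
    totals: dict = defaultdict(int)
    favourable: dict = defaultdict(int)
    for row in decisions:
        totals[row[group_key]] += 1
        favourable[row[group_key]] += int(row[outcome_key])
    return {g: favourable[g] / totals[g] for g in totals}

# Hypothetical log of AI-driven approval decisions
decisions = [
    {"group": "A", "approved": 1}, {"group": "A", "approved": 1},
    {"group": "A", "approved": 0}, {"group": "B", "approved": 1},
    {"group": "B", "approved": 0}, {"group": "B", "approved": 0},
]

rates = favourable_rates(decisions, "group", "approved")
gap = max(rates.values()) - min(rates.values())
print(f"Approval rates by group: {rates}")
if gap > 0.05:  # hypothetical tolerance
    print(f"Warning: parity gap of {gap:.0%} warrants investigation")
```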
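For the second question, one practical precaution is to redact obvious sensitive details before a prompt ever reaches an external GenAI service. The sketch below illustrates the idea with two regular-expression patterns; these patterns are illustrative only and nowhere near an exhaustive PII filter.

```python
# Minimal sketch: redacting obvious PII before a prompt reaches an external
# GenAI service. The two patterns are illustrative, not an exhaustive filter.

import re

# Hypothetical patterns for emails and Australian-style phone numbers
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"(?:\+61|0)[2-478](?:[ -]?\d){8}"),
}

def redact(prompt: str) -> str:
    """Replace matched PII with typed placeholders before the prompt is sent."""
    for label, pattern in PII_PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

raw = "Contact Jane at jane.doe@example.com or 0412 345 678 about her claim."
print(redact(raw))
# Contact Jane at [EMAIL REDACTED] or [PHONE REDACTED] about her claim.
```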

The journey towards a secure, private and well-governed AI-driven landscape is not just about compliance: it’s about building a resilient ecosystem that fosters trust and innovation while minimising risk. As the technology and its use cases continue to evolve, Australian organisations have an early opportunity to adopt proactive strategies and position themselves better for the future.

Image credit: iStock.com/Sefa kart
