Big AI in big business: three pillars of risk


By Fred Thiele, Chief Information Security Officer at Interactive
Friday, 26 April, 2024



The flood of artificial intelligence (AI) tools over recent years has given many Australian businesses the potential to transform the way they work. According to McKinsey, AI adoption is on the rise globally, with spending projected to reach US$110 billion by the end of 2024.

As the rest of the world surges ahead in its AI development journey, research suggests that AI adoption in Australia has been slow off the mark and that estimates of it vary widely. CSIRO estimates that 44% of businesses in Australia have already deployed AI in their operations, while the Australian Bureau of Statistics found that only 1% of businesses have adopted AI.

At the same time, all modern businesses deal with sensitive information to some degree and there is a strong need to ensure platforms and tools are secure.

As AI systems become more sophisticated and pervasive, concerns surrounding data privacy, algorithmic bias and ethical implications have come to the forefront. High-profile incidents, such as data breaches and algorithmic errors, have underscored the importance of implementing robust governance frameworks and risk management protocols.

As such, technology and business leaders are facing mounting pressure to strike a delicate balance between harnessing the transformative power of AI and mitigating inherent risks to ensure responsible and ethical AI deployment.

A study by PwC estimates that AI could contribute up to US$15.7 trillion to the global economy by 2030, underscoring its potential to drive significant value creation for businesses worldwide.

But first, there are three critical pillars of risk that demand our immediate attention as we embark on this AI journey.

1. Data privacy

The sheer volume of data accessible by AI systems underscores serious concerns about data governance and privacy. If businesses haven’t already embarked on their data governance and privacy journey, the time to do so is now.

We need expertise and robust protocols in place to ensure that sensitive information remains protected and compliant with regulatory standards and internal company policy.

2. Access control

Access control has always been an area of concern for organisations — but generative AI capabilities can bring together vast amounts of data in easily queryable formats. This convenience carries considerable dangers if not handled correctly. The challenges are similar to those of internet search engines, which index information for people to access easily. The difference is that generative AI is better at interpreting human intent because it accurately parses natural language, making this data reachable to a much larger audience. Unlike a search engine, which requires very specific parameters to return results, generative AI has the ability to interpret and infer — much like a human.

Whether it’s developing chatbots or bespoke AI applications, strict access controls are imperative to prevent inadvertent exposure of confidential information. The last thing any business wants is to grant AI access to a spreadsheet with sensitive data, like the salary information of every employee in the business. This is why access management is of paramount importance: AI makes it far easier to surface data that a user was never meant to access.
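One common pattern is to enforce the requesting user's own permissions at retrieval time, so the AI can only "see" documents that user is already entitled to read. The sketch below is a minimal, hypothetical illustration of that idea — the class and function names are invented for the example, not taken from any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    doc_id: str
    text: str
    allowed_roles: set = field(default_factory=set)

@dataclass
class User:
    name: str
    roles: set

def retrieve_for_prompt(user: User, documents: list) -> list:
    """Return only the documents the user's roles permit.

    Anything filtered out here can never appear in the AI's prompt,
    so the model cannot leak it back to the user.
    """
    return [d for d in documents if d.allowed_roles & user.roles]

docs = [
    Document("salaries.xlsx", "salary data ...", allowed_roles={"hr_admin"}),
    Document("handbook.pdf", "leave policy ...", allowed_roles={"hr_admin", "staff"}),
]
analyst = User("alex", roles={"staff"})
visible = retrieve_for_prompt(analyst, docs)
# Only the handbook is eligible for the prompt; the salary sheet is filtered out.
```

The key design choice is that filtering happens before the data reaches the model, rather than relying on the model to decline to answer.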

3. Ethics

Ethics and compliance loom large on the AI landscape. Questions surrounding the ethical implications of AI decisions and actions are becoming increasingly complex. Do we allow AI to discuss sensitive topics like gender equality? How do we ensure the accuracy and integrity of AI-generated responses?

These are not merely technical dilemmas but profound ethical considerations that demand thoughtful deliberation. Monitoring the usage of AI systems and implementing mechanisms for compliance oversight are also essential steps in mitigating risks and ensuring accountability.

This will look different for every organisation. It could require a lot of supervision — for instance, think about having to record all conversations with a chatbot, flagging anything that could be seen as ethically questionable, then having a compliance team assess the purpose of the question to decide whether that activity is ethically acceptable to the organisation.
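The supervision workflow described above — record every exchange, flag anything potentially sensitive, and queue flagged items for human review — could be sketched as follows. This is a hypothetical example; the pattern list and field names are placeholders an organisation would replace with its own policy:

```python
import re
from datetime import datetime, timezone

# Example terms only; a real policy would define its own patterns.
SENSITIVE_PATTERNS = [r"\bsalary\b", r"\bgender\b", r"\bmedical\b"]

audit_log = []     # every conversation turn is recorded here
review_queue = []  # flagged turns await a compliance team's assessment

def record_exchange(user: str, question: str, answer: str) -> dict:
    """Log a chatbot exchange and flag it if it matches a sensitive pattern."""
    entry = {
        "time": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "question": question,
        "answer": answer,
        "flagged": any(
            re.search(p, question, re.IGNORECASE) for p in SENSITIVE_PATTERNS
        ),
    }
    audit_log.append(entry)
    if entry["flagged"]:
        # A human reviewer later decides whether the intent was acceptable.
        review_queue.append(entry)
    return entry

record_exchange("alex", "What is the average salary in finance?", "[redacted]")
record_exchange("alex", "How do I book annual leave?", "Via the HR portal.")
```

Note that the automated check only triages; the ethical judgement about whether a flagged question was acceptable stays with people.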

Building the right foundations

The ‘challenge vs opportunity’ dynamic in the realm of AI is undeniable. While AI holds immense potential for driving innovation and growth, it also presents challenges. Instead of viewing them as insurmountable obstacles, we must embrace them as opportunities for growth and advancement.

Technology leaders have a crucial role to play in navigating the complexities of AI risk management. It’s not enough to rely solely on technological prowess — we must also cultivate a culture of responsibility and accountability within our organisations. This requires proactive measures such as investing in robust data governance frameworks, strengthening access controls and fostering a culture of ethical AI usage.

By prioritising privacy, access control and ethics in AI deployment, we can harness the full potential of this transformative technology while safeguarding against potential pitfalls.

The time to act is now, lest we find ourselves grappling with the consequences of unrecognised AI risks in the future.

How to prepare for AI

Preparation starts with asking the right questions. Now is a great time to start assessing internal organisational needs — what are we doing now and how will we incorporate policies and guidelines for responsible AI use into our current technology usage policies?

We must ensure we have allocated resources to support strong access control technologies — a combination of robust identity and access management systems — so that our people can access only the data appropriate to their roles.

Finally, and perhaps most importantly, we can train our people about AI, its benefits and limitations, so we can enter this new AI world with full awareness.


