Enterprise AI isn't autopilot: it's cruise control that CISOs need to steer
While the calendar might have ticked over, the speed on the AI hype train hasn’t slowed down.
Large language models (LLMs), in particular, have captured the imagination of business leaders in a way not seen since the advent of the smartphone.
As with all new technologies, precautions must be taken to mitigate the risks LLMs bring with them. When it comes to tempering the business’s enthusiasm for all things AI and balancing it with a sober understanding of the potential pitfalls, this task often falls on the shoulders of the chief information security officer (CISO).
But with something as novel and powerful as LLMs, where does a CISO begin when it comes to assessing risk? Below are three of the most pressing security vulnerabilities LLMs bring, along with strategies for how organisations can defend against them.
Poisoning the (data) well
The superpower of LLMs is that they are extremely good at analysing complex datasets and delivering answers in natural, human-like language. However, it is important to understand that LLMs are not sentient.
So, what happens when the datasets LLMs rely on are compromised?
LLMs can only produce answers based on the data they are fed and trained on. By poisoning the datasets used to train an AI model, attackers can manipulate that model so it produces flawed outputs. Depending on the model’s function, the consequences could be dire.
For example, an LLM might start producing inaccurate predictions about the business, or a customer service chatbot might start spewing false information and hate speech to customers. An extension of this risk is the possibility that attackers could insert backdoors into the model code, allowing them to capture sensitive company data as it’s being processed by the LLM.
Mitigating this risk comes down to implementing a robust governance and compliance framework around the datasets LLMs use. Rather than retrofitting a compliance framework after the fact, it should be implemented and understood before an LLM is introduced into the business.
In other words, you need a secure data strategy before you can achieve an effective and secure AI strategy.
Comprehensive protection measures should be implemented across the entire AI model life cycle to prevent adversaries from infiltrating the supply chain and compromising LLMs.
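One concrete control in that life cycle is verifying dataset integrity before every training run. The sketch below is a minimal Python example under hypothetical assumptions: a directory of training files and a JSON manifest of SHA-256 checksums recorded when the data was reviewed and approved. Any file that is missing or has been altered blocks the job rather than silently poisoning the model.

```python
import hashlib
import json
from pathlib import Path


def sha256_of(path: Path) -> str:
    """Return the SHA-256 hex digest of a file, read in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_training_data(data_dir: str, manifest_path: str) -> list[str]:
    """Compare each dataset file against the checksum recorded at approval time.

    Assumes the manifest is a JSON mapping of relative file paths to SHA-256
    digests, produced when the dataset was signed off. Returns a list of
    files that are missing or have been modified since approval.
    """
    manifest = json.loads(Path(manifest_path).read_text())
    problems = []
    for rel_path, expected in manifest.items():
        file_path = Path(data_dir) / rel_path
        if not file_path.exists():
            problems.append(f"missing: {rel_path}")
        elif sha256_of(file_path) != expected:
            problems.append(f"modified: {rel_path}")
    return problems


if __name__ == "__main__":
    # Hypothetical paths; block the training job if anything fails verification.
    issues = verify_training_data("./training_data", "./approved_manifest.json")
    if issues:
        raise SystemExit("Dataset integrity check failed:\n" + "\n".join(issues))
    print("All training files match the approved manifest.")
```

This is a sketch rather than a full answer: in practice the manifest itself would need to be protected (for example, signed), and checks like this sit alongside access controls and provenance tracking across the whole pipeline.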
Human oversight is crucial too. Despite the hype, LLMs should not be seen as an autopilot; they’re more like cruise control. Particularly in critical functions like financial reporting and code validation, a human is still required to make the final decision — or steer the vehicle.
Wouldn’t post it? Don’t paste it
Public LLMs have received the most fanfare, but these carry with them distinct security implications for any business exploring their use.
Organisations have no direct control over the security of public LLM services, so they’re trusting the provider to protect any data their employees may enter. A similar concern applies to third-party services that use generative AI for tasks like transcribing audio and summarising conference calls.
A good rule of thumb with public LLMs is if you wouldn’t post it to social media, don’t paste it into an LLM’s query screen. For that matter, don’t use an LLM to transcribe it.
Public LLMs are trained on the queries and data entered into them. Adding a confidential report or conference call into one of these models could allow that data to be queried by another user — regardless of whether they’re in the organisation or not.
The seeming simplicity of these public services can lead to employees using them without the IT team's knowledge.
To get ahead of this risk of ‘Shadow AI’, organisations must monitor for unapproved AI technology to guard against data leakage. CISOs should understand the policies of these services and limit access to a small handful that align with their privacy and security policies.
Taking this a step further, employees should never have access to sensitive data they don’t need. Where access is required, they must be educated about not sharing sensitive information with public services.
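As an illustration of what monitoring for shadow AI might look like in practice, the sketch below is a minimal Python example under some assumptions: a CSV export of web proxy logs with 'user' and 'domain' columns, and a short, purely illustrative list of public AI service domains. It flags users contacting generative AI services that have not been approved.

```python
import csv
from collections import defaultdict

# Illustrative, not exhaustive: domains associated with public generative AI services.
PUBLIC_AI_DOMAINS = {
    "chat.openai.com",
    "api.openai.com",
    "gemini.google.com",
    "claude.ai",
}

# Services the organisation has reviewed and approved for business use (assumption).
APPROVED_DOMAINS = {"api.openai.com"}


def find_shadow_ai(proxy_log_csv: str) -> dict[str, set[str]]:
    """Return a mapping of user -> unapproved AI domains they contacted.

    Assumes the proxy log export is a CSV with at least 'user' and 'domain' columns.
    """
    hits: dict[str, set[str]] = defaultdict(set)
    with open(proxy_log_csv, newline="") as f:
        for row in csv.DictReader(f):
            domain = row["domain"].lower()
            if domain in PUBLIC_AI_DOMAINS and domain not in APPROVED_DOMAINS:
                hits[row["user"]].add(domain)
    return hits


if __name__ == "__main__":
    # Hypothetical log export; in practice this would come from the web proxy or DNS logs.
    for user, domains in find_shadow_ai("proxy_export.csv").items():
        print(f"{user}: {', '.join(sorted(domains))}")
```

The point of such a report is not to punish individuals but to reveal where approved alternatives and further education are needed.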
The cyber arms race
Security tools increasingly use AI and machine learning to analyse things like network traffic and log files to identify anomalies that might indicate a security breach. In the same way an LLM’s dataset could be poisoned, attackers could compromise a security LLM so that it no longer detects particular types of events or patterns.
Malicious actors could also train an LLM to evade detection by generating traffic patterns that appear benign, cloaking an attack.
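To make the detection side concrete, here is a minimal sketch of the kind of anomaly detection such tools perform, using scikit-learn's IsolationForest on toy per-connection features (the features, values and thresholds are illustrative assumptions, not a production design). It also shows why a poisoned baseline is so dangerous: whatever the model learns as 'normal' is exactly what it will stop flagging.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Toy per-connection features: [bytes_sent, bytes_received, duration_seconds, dest_port].
# In a real deployment these would be extracted from flow records or proxy logs.
normal_traffic = np.array([
    [1_200, 4_800, 0.4, 443],
    [900,   3_500, 0.3, 443],
    [1_500, 5_200, 0.5,  80],
    [1_100, 4_100, 0.4, 443],
    [1_300, 4_900, 0.6, 443],
])

# Fit the detector on traffic assumed to be clean. If an attacker can poison this
# baseline, malicious patterns become part of "normal" and are no longer flagged.
detector = IsolationForest(contamination=0.1, random_state=42)
detector.fit(normal_traffic)

# Score new connections: -1 means anomalous, 1 means consistent with the baseline.
new_traffic = np.array([
    [1_250,  4_700,   0.5,  443],   # looks like ordinary browsing
    [95_000,   600, 120.0, 4444],   # large upload to an unusual port
])
for features, label in zip(new_traffic, detector.predict(new_traffic)):
    verdict = "ANOMALY" if label == -1 else "ok"
    print(f"{verdict}: {features.tolist()}")
```

A real deployment would train on far more traffic and richer features, but the cruise-control point stands: a human analyst still has to decide what the flagged events actually mean.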
Rather than implementing such security tools and believing security is now on autopilot, it’s critical to understand — again — it is on cruise control at best.
Generative AI also provides a powerful way to upskill security analysts and ease the shortage of skilled talent. These tools can simplify the process of querying and analysing data, helping less experienced analysts extract meaningful insights, provided organisations invest in training them to use the tools well.
Further, AI tools can help security teams identify vulnerabilities and respond to incidents more quickly — as long as they are not viewed as a silver bullet.
Keep your hands on the wheel
AI and LLMs are not going anywhere. That much is clear.
To extract the most value, while mitigating the potential risks, CISOs need to ensure the appropriate guardrails and governance policies are in place. Critically, these policies must be seen and understood to be dynamic.
AI is advancing at such a rapid rate that emerging risks must be constantly monitored and data governance policies should be regularly reviewed and updated in tandem. In other words, CISOs need to keep their eyes on the road and hands on the wheel.