Why AI isn't keeping me up at night
Artificial intelligence is cybersecurity’s latest bogeyman. With the recent surge of hype surrounding China’s DeepSeek AI, I can sense the panic intensifying. Across Asia Pacific, AI-powered attacks are often seen as an emerging threat, with governments and businesses scrambling to defend against potential breaches.
A wake-up call in Australia
Australia’s government isn’t wasting time; it’s already taken action. The decision to ban DeepSeek AI from all government devices and systems has sparked plenty of debate across the region. While Australia’s response is significant, other parts of Asia Pacific are also becoming more aware of the implications of AI for cybersecurity. In countries like Japan and South Korea, the use of AI in critical infrastructure is under serious scrutiny. Governments are even exploring ways to regulate AI-powered tech to minimise the risks it brings.
That said, the region isn’t hitting the brakes on AI. Investments are still flowing, especially in high-stakes industries like finance, health care and telecommunications. The concerns are real, but so is the drive for AI innovation. The challenge isn’t stopping AI; it’s finding the right balance.
Why AI-powered threats aren’t the end of the world
People are worried that AI-powered attacks will overwhelm defences, making cybercrime more dangerous than ever. The headlines paint a picture of digital chaos, but AI isn’t keeping me up at night.
Zero Trust makes AI-powered threats far less of a concern. Whether attackers use AI, quantum computing or a room full of cybercriminals, Zero Trust stops them before they get anywhere. Attackers don’t get in just because they’re using smarter tools. They need access, and Zero Trust makes sure they don’t get it.
Every breach has one thing in common: there was a policy that allowed it. All bad things happen inside an allow rule. That’s the hard truth of cybersecurity. No matter how sophisticated the attack is, it only works if there’s an open door. The problem is that traditional security models assume everything inside the perimeter is safe. That’s a mistake.
Zero Trust flips this mindset. It denies all access by default: if someone or something isn’t explicitly allowed, it’s blocked.
This means an attacker using AI to craft the most advanced phishing email or the most convincing deepfake still runs into the same problem. They don’t have permission to access what matters: the Protect Surface. Their AI-powered attack is useless if it can’t reach the target.
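To make the deny-by-default idea concrete, here is a minimal sketch of a Zero Trust policy check. The identities, resources and actions are hypothetical, and a real policy engine would evaluate far richer context, but the core logic is the same: no explicit allow rule, no access.

```python
# Minimal sketch of a deny-by-default (Zero Trust) policy check.
# Identities, resources and actions here are hypothetical examples.

ALLOW_RULES = {
    # (identity, resource) -> set of explicitly permitted actions
    ("payroll-service", "hr-database"): {"read"},
    ("dr-smith", "patient-records"): {"read", "write"},
}

def is_allowed(identity: str, resource: str, action: str) -> bool:
    """Deny by default: access exists only if an explicit allow rule does."""
    return action in ALLOW_RULES.get((identity, resource), set())

# An AI-crafted request without an allow rule fails like any other:
print(is_allowed("attacker-bot", "patient-records", "read"))  # False
print(is_allowed("dr-smith", "patient-records", "read"))      # True
```

Notice that nothing in this check cares how the request was generated. A request crafted by the most advanced AI fails in exactly the same way as one typed by hand.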
AI isn’t magic: it can’t defy the laws of cyber physics
People talk about AI like it’s unstoppable, capable of breaking through any security barrier. But that’s not how it works. AI still operates within the constraints of cybersecurity’s foundational rules, such as the TCP/IP protocol suite, and strong security controls like Zero Trust keep it in check.
Think of it like tenpin bowling. If you roll a ball down the lane, it follows the defined path. You can’t magically teleport the ball five lanes over to knock down another set of pins. In the same way, attackers using AI can’t escape the reality of network protocols. Zero Trust policies act like the bumpers in a bowling alley, keeping everything in strict lanes. Attackers can try all the tricks they want, but if the policy says ‘no’, the attack goes straight into the gutter.
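In network terms, the ‘bumpers’ are explicit segmentation rules. The sketch below is a simplified, hypothetical illustration (segment names, ports and rules are assumptions): traffic must match a defined lane of source segment, destination and port, and anything without a lane is dropped.

```python
# Hypothetical sketch: segmentation policy as the 'bumpers' in the lane.
# Segments, ports and rule names are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class Lane:
    src: str    # source segment
    dst: str    # destination segment (toward a Protect Surface)
    port: int   # permitted TCP port

LANES = [
    Lane(src="app-tier", dst="payments-db", port=5432),
]

def verdict(src: str, dst: str, port: int) -> str:
    # Traffic must match a defined lane exactly; there is no
    # 'teleporting' to another lane, however clever the sender.
    for lane in LANES:
        if (src, dst, port) == (lane.src, lane.dst, lane.port):
            return "ALLOW"
    return "DROP"  # no lane defined: straight into the gutter

print(verdict("app-tier", "payments-db", 5432))   # ALLOW
print(verdict("ai-botnet", "payments-db", 5432))  # DROP
```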
The real AI risk? Not securing your own AI systems
I’m not worried about AI-powered attacks breaking through Zero Trust — but I am worried about organisations failing to protect their own AI.
AI isn’t just an attack vector; it’s a powerful tool for defenders. It helps security teams analyse data, detect anomalies and strengthen Zero Trust policies. But if organisations don’t treat their AI models as Protect Surfaces, those models risk being manipulated, poisoned or stolen.
That’s why AI itself must be secured within a Zero Trust framework. Asia Pacific organisations need to:
- identify AI models as Protect Surfaces and apply least-privilege access controls;
- monitor AI inputs and outputs to prevent poisoning attacks; and
- segment AI systems so that even if attackers breach one part of the network, they can’t move laterally to compromise AI-driven decision-making.
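The sketch below pulls these three measures together around a single model endpoint. The service names, allowlist and input check are illustrative assumptions, not a production design, but they show the shape of treating a model as a Protect Surface.

```python
# Hypothetical sketch: treating an AI model as a Protect Surface.
# Service names, the allowlist and the input check are illustrative.

MODEL_ACCESS = {
    "fraud-model": {"fraud-team-svc"},  # least privilege: one known caller
}

SUSPICIOUS_PATTERNS = ("ignore previous instructions", "reveal system prompt")

def query_model(identity: str, model: str, prompt: str) -> str:
    # 1. Least-privilege access control around the model endpoint.
    if identity not in MODEL_ACCESS.get(model, set()):
        return "DENIED: no explicit allow rule for this model"
    # 2. Monitor inputs for crude injection/poisoning attempts
    #    (real deployments would log and analyse outputs too).
    lowered = prompt.lower()
    if any(p in lowered for p in SUSPICIOUS_PATTERNS):
        return "DENIED: suspicious input flagged for review"
    # 3. Segmentation: in practice this call would pass through a
    #    policy enforcement point, so a breach elsewhere in the
    #    network cannot move laterally to reach the model.
    return f"OK: '{model}' invoked by '{identity}'"

print(query_model("fraud-team-svc", "fraud-model", "score transaction 42"))
print(query_model("compromised-host", "fraud-model", "score transaction 42"))
```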
Zero Trust ensures AI strengthens security rather than becoming a liability.
Zero Trust already changed the game
AI doesn’t change the game, because Zero Trust already did. Cybercriminals will always evolve. AI just makes their job easier, but only if organisations keep relying on outdated security models. Zero Trust changed the game by eliminating the attacker’s ability to move freely. It doesn’t care how an attacker operates, whether they’re using AI, brute force or social engineering. If they don’t have explicit access, they don’t get in.
With its dynamic tech landscape and growing dependence on AI, Asia Pacific is on the front lines of this evolution. The key to thriving in this environment is embracing a proactive cybersecurity strategy like Zero Trust, which ensures that AI-powered threats remain manageable.
So no, I’m not losing sleep over AI. Because with Zero Trust, attackers won’t win.