Navigating the AI-powered cyber attack landscape

Forescout Technologies, Inc.

By Daniel Dos Santos, Head of Security Research, Forescout
Wednesday, 06 September, 2023


Finding ready-made malicious code has become all too easy for attackers in today’s digital landscape, especially when targeting OT, IoT and other embedded and unmanaged devices. Cybercriminals persistently build on public exploit proofs-of-concept (PoCs), typically enhancing them to be more effective or less conspicuous: incorporating harmful payloads, packaging them as malware modules or adapting them to run in different execution environments.

Alarmingly, this adaptation process amplifies the versatility and harm potential of existing malicious code, heightening the threat to organisations. Historically, weaponising these PoCs demanded some degree of time and effort from threat actors. With the emergence of AI, however, that dynamic is set to shift significantly.

The growing threat posed by large language models

Large language models (LLMs) represent a major breakthrough in the field of artificial intelligence, with prominent names like OpenAI’s ChatGPT and Google’s PaLM 2 taking centre stage in both the AI landscape and news headlines. These widely publicised tools offer immense utility in answering various questions and performing diverse tasks simply by using straightforward prompts.

However, as with any technological advancement, the threat of malicious use looms. Cybercriminals, academic researchers and industry experts are all diligently exploring how the surging popularity of LLMs will impact cybersecurity. Some of the primary offensive applications involve the development of exploits, social engineering tactics and information gathering. On the defensive front, LLMs find utility in generating code for threat detection, explaining reverse-engineered code in natural language, and extracting insights from threat intelligence reports.

Consequently, organisations have already witnessed the initial wave of attacks facilitated by LLMs. While the cybersecurity community has so far seen limited utilisation of this capability in operational technology (OT) attacks, it’s only a matter of time before cybercriminals leverage it. The ease with which LLMs can convert code for existing OT exploits into different languages presents a substantial implication for the future of both cyber offensive and defensive capabilities.

The imperative of employing AI for vulnerability discovery

As the OT:ICEFALL research demonstrated, offensive OT cyber capabilities are easier to develop than previously suspected using traditional reverse engineering and domain knowledge. Using AI to enhance offensive capabilities lowers that barrier further.

Organisations are now at a crucial juncture where they must harness the power of AI to proactively identify vulnerabilities within their source code or through patch analysis before cybercriminals seize this advantage. With AI at their disposal, malicious actors can not only craft exploits from scratch but also generate queries to pinpoint online devices with vulnerabilities ripe for exploitation.

Australia has experienced an exponential surge in vulnerability numbers, largely due to the proliferation of diverse devices connecting to computer networks. This trend has created an environment where cybercriminals actively seek out devices with inadequate security measures. The integration of AI in the quest to identify and exploit vulnerabilities in unmanaged devices is poised to significantly amplify these ongoing trends.

In the grand scheme of cyber operations, AI and automation stand to propel threat actors swiftly through various stages of the cyber kill chain. This acceleration particularly impacts domains like operational technology/industrial control systems, where human input remains pivotal in phases such as reconnaissance, initial access, lateral movement, and command and control. AI brings the potential to:

  • clearly explain outcomes for an attacker unfamiliar with a specific environment;
  • identify the most valuable assets within a network to target or those likely to result in critical damage;
  • offer hints and recommendations for subsequent steps in an attack;
  • establish connections between outputs, streamlining much of the intrusion process.

Beyond exploiting conventional software vulnerabilities, AI introduces the prospect of novel attack methodologies. LLMs are one facet of the broader wave of generative AI, which also encompasses image, audio and video generation. These capabilities can make social engineering more sophisticated, rendering a scammer’s efforts more convincing and deceptive.

Preparing for the oncoming wave of AI-enhanced cyber attacks

As AI-assisted cyber attacks become more prevalent and manifest in unforeseen ways, every organisation must prioritise strengthening its cybersecurity posture ahead of these threats.

The encouraging news is that best practices retain their relevance. Security fundamentals such as cyber hygiene, defence-in-depth, least privilege, network segmentation and the adoption of a zero-trust approach all maintain their effectiveness. While the ease with which AI can generate malware may lead to an upsurge in attack frequency, the foundational defences themselves remain unaltered. What has intensified is the urgency to implement these measures dynamically and with utmost effectiveness.
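The default-deny logic behind least privilege and zero trust can be stated in a few lines. A minimal sketch (the segment names, rule structure and example flows are assumptions for illustration, not a real product API):

```python
# Minimal sketch of a default-deny, zero-trust-style access check:
# a flow is denied unless an explicit rule allows it exactly.
# All names and the rule structure are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)  # frozen => hashable, so rules can live in a set
class Rule:
    source_segment: str   # e.g. "engineering-vlan"
    dest_segment: str     # e.g. "ot-plc-vlan"
    port: int             # allowed destination port

ALLOW_RULES = {
    Rule("engineering-vlan", "ot-plc-vlan", 502),   # engineers -> PLCs, Modbus only
    Rule("historian-vlan", "ot-plc-vlan", 44818),   # historian -> PLCs, EtherNet/IP
}

def is_allowed(source: str, dest: str, port: int) -> bool:
    """Zero trust: deny unless an explicit rule permits this exact flow."""
    return Rule(source, dest, port) in ALLOW_RULES

print(is_allowed("engineering-vlan", "ot-plc-vlan", 502))  # prints: True
print(is_allowed("guest-wifi", "ot-plc-vlan", 502))        # prints: False
```

The point of the design is that AI-generated malware landing in an unlisted segment gains nothing by default; the defender only ever enumerates what is allowed, never what is forbidden.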

Amid the ever-evolving landscape of ransomware and other cyberthreats, the core cybersecurity principles endure for organisations:

  • Comprehensive asset inventory: Maintain an exhaustive inventory of all network assets, encompassing OT and unmanaged devices.
  • Risk assessment and compliance: Gain a comprehensive understanding of the risks, exposure and compliance status associated with these assets.
  • Advanced threat detection and response: Establish the capability to automatically identify and counter advanced threats that specifically target these assets.
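A rough sketch of how the three pillars above can fit together in practice (all field names, the risk rule and the placeholder CVE identifier are illustrative assumptions, not a vendor schema):

```python
# Illustrative sketch tying the three pillars together: a minimal asset
# inventory (pillar 1), a naive risk flag (pillar 2), and a feed into
# detection/response tooling (pillar 3). All details are assumptions.

from dataclasses import dataclass, field

@dataclass
class Asset:
    name: str
    category: str                           # e.g. "OT", "IoT", "IT"
    known_cves: list[str] = field(default_factory=list)
    managed: bool = True                    # has an agent / patch process

def high_risk(asset: Asset) -> bool:
    """Pillar 2: flag unmanaged assets or assets with known vulnerabilities."""
    return (not asset.managed) or bool(asset.known_cves)

inventory = [                               # Pillar 1: the inventory itself
    Asset("plc-01", "OT", known_cves=["CVE-2022-XXXX"], managed=False),
    Asset("laptop-14", "IT"),
]

# Pillar 3: hand the flagged assets to detection/response tooling.
flagged = [a.name for a in inventory if high_risk(a)]
print(flagged)  # prints: ['plc-01']
```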

These three pillars serve as a solid foundation for organisations to prepare for the forthcoming battle against malicious AI-driven cyber attacks.




All content Copyright © 2024 Westwick-Farrow Pty Ltd