GenAI supercharging pretexting attacks

Verizon Business

By Robert Le Busque, Regional VP, Asia Pacific, Verizon Business
Monday, 11 September, 2023



Australians are more concerned than ever about their privacy, with 76% saying they experienced harm due to the recent spate of data breaches, according to the latest Australian Community Attitudes to Privacy Survey by the Office of the Australian Information Commissioner (OAIC).

It’s a wake-up call for businesses to take action to protect themselves, their workforces and customers against increasing threats to their data, as cybercriminals turn to emerging technologies to aid in executing breaches — key among them is generative AI.

A popular tactic used by cybercriminals to gain access to sensitive data is the exploitation of human nature, with the human factor accounting for 74% of breaches analysed in Verizon’s 2023 Data Breach Investigations Report (DBIR).

Social engineering attacks, which manipulate people into divulging an organisation’s sensitive information, are growing largely due to pretexting — the practice of employing a fabricated story (or pretext) to trick a user into sharing such data.

Generative AI has the potential to make pretexting an even greater threat for three reasons: it can make pretexting appear more credible, enable such attacks to be scaled significantly, and decrease the time it takes to execute them by systematically scanning an organisation for weaknesses.

Generative AI supercharges pretexting attacks

The success of a pretexting attack hinges on its credibility; the more authentic it looks, the more likely a time-poor person is to click a link or respond to an email without investigating, believing it is legitimate.

Generative AI creates greater realism for pretexting attacks by harnessing increasingly sophisticated natural language processing capabilities, which allow criminals to mimic the writing styles of an organisation or individuals with ease.

Additionally, generative AI can translate pretexting attacks into different languages on the attacker’s behalf, allowing cybercriminals to cast wider nets and reach larger target demographics.

Cybercriminals are also using generative AI to systematically scan an organisation for weaknesses, making it efficient and effective for an individual to achieve the same result as a whole team of hackers.

Businesses can expect to see an increase in all forms of pretexting attacks, with business email compromise incidents doubling in the last year to represent more than half of all social engineering attacks, according to the 2023 DBIR.

As generative AI technologies continue to develop and play a greater role in social engineering attacks, organisations with distributed or remote workforces face a challenge of growing importance: the creation and strict enforcement of human-centric best practices in cybersecurity.

Taking a human-centric cybersecurity protocol approach

Thanks to this dangerous union of pretexting attacks and generative AI technologies, business leaders are becoming a key target for cybercriminals, as they hold the keys to an organisation’s most sensitive and lucrative data but are often exempted from standard security protocols.

For instance, C-level executives are often granted exceptions in areas such as establishing and updating credentials, using their preferred devices for business, and policies around personal and professional device usage away from the office.

Even when organisations have invested in training and protecting their workforces, that investment is undermined when key leaders remain vulnerable to data breaches.

The solution starts with eliminating high-risk exceptions to security protocols and holding key executives to the same rigorous standards applied throughout an organisation’s network. Businesses have in recent years focused heavily on training and educating workforces about cybersecurity protocols, but these protocols must be applied consistently to remain effective.

Human error and pretexting attacks will continue to leave organisations vulnerable to data breaches, but cybersecurity threats can be minimised by wielding the same tools used by cybercriminals, including generative AI.

In the same way that it can streamline hacking, generative AI can improve an organisation’s cybersecurity defences — it is only as good or nefarious as those who use it. Data breaches may start with people in many cases, but so does the solution.


All content Copyright © 2024 Westwick-Farrow Pty Ltd