Best of 2019: Email providers' phishing nets have "big holes"


By Dylan Bushell-Embling
Monday, 06 January, 2020


The phishing filters used by various email service providers detect only a minority of attacks, and explicitly flag just a slim fraction of messages as potentially malicious, research shows.

Researchers from the University of Plymouth in the UK assessed the effectiveness of phishing filters by sending two sets of messages to dummy victim accounts, using text taken from archives of reported phishing attacks.

The first set of messages was sent as plain text with links removed, while the second retained links pointing to their original destinations.

The research found that the potential phishing messages reached inboxes in the significant majority of cases — 75% of the emails without links and 64% of those with links. Even more concerning, only 6% of messages were explicitly flagged as potentially malicious.

“The poor performance of most providers implies they either do not employ filtering based on language content, or that it is inadequate to protect users,” noted Professor Steven Furnell, head of the university's Centre for Security, Communications and Network Research (CSCAN), who led the study.

“Given users’ tendency to perform poorly at identifying malicious messages, this is a worrying outcome. The results suggest an opportunity to improve phishing detection in general, but the technology as it stands cannot be relied upon to provide anything other than a small contribution in this context.”

Phishing attacks have been growing at a feverish pace since they were first recorded in 2003. Kaspersky Lab recently reported that its anti-phishing system was triggered more than 482.4 million times in 2018 — almost double the 2017 figure.

“Phishing has now been a problem for over a decade and a half. Unfortunately, just like malware, it’s proven to be the cybersecurity equivalent of an unwanted genie that we can’t put back in the bottle,” Furnell said.

“Despite many efforts to educate users and provide safeguards, people are still falling victim. Our study shows the technology can identify things that we would ideally want users to be able to spot for themselves — but while there is a net, it clearly has big holes.”

Image credit: ©stock.adobe.com/au/Sergey Nivens

This article was first published on 5 August, 2019.
