‘Conversation Overflow’ Cyberattacks Bypass AI Security to Target Execs – Dark Reading


A novel cyberattack method dubbed “Conversation Overflow” has surfaced, attempting to get credential-harvesting phishing emails past artificial intelligence (AI)- and machine learning (ML)-enabled security platforms.

The emails can escape AI/ML algorithms’ threat detection through the use of hidden text designed to mimic legitimate communication, according to SlashNext threat researchers, who released an analysis of the tactic today. They noted that it’s being used in a spate of attacks in what appears to be a test-driving exercise on the part of the bad actors, probing for ways to get around advanced cyber defenses.

As opposed to traditional security controls, which rely on detecting “known bad” signatures, AI/ML algorithms rely on identifying deviations from “known good” communication.
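That "known good" approach can be illustrated with a minimal sketch. The toy corpus, the bag-of-words cosine similarity, and the scoring function below are all assumptions for illustration; a real security platform would use a trained model over an organization's actual mail, not a word-overlap score.

```python
from collections import Counter
import math

# Toy stand-in for "known good" communication; a real ML platform would
# learn this from an organization's legitimate mail (assumption for demo).
KNOWN_GOOD = [
    "thanks for the update see you at the meeting tomorrow",
    "please find the quarterly report attached let me know if you have questions",
    "following up on our call here are the notes from yesterday",
]

def _vector(text):
    # Simple bag-of-words term counts
    return Counter(text.lower().split())

def _cosine(a, b):
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def known_good_score(message):
    """Highest similarity to any known-good message; low = anomalous."""
    mv = _vector(message)
    return max(_cosine(mv, _vector(g)) for g in KNOWN_GOOD)

# A bare phishing lure scores low against the known-good baseline, but
# padding it with copied benign text inflates the score -- the core of
# the Conversation Overflow trick.
lure = "urgent reauthenticate your password now at this link"
padded = lure + " " + KNOWN_GOOD[0] + " " + KNOWN_GOOD[1]
```

The design point: a similarity-to-known-good detector can be gamed by simply appending known-good-looking text, which is exactly the weakness the attackers are exploiting.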

So, the attack works like this: cybercriminals craft emails with two distinct parts — a visible section prompting the recipient to click a link or send information, and a concealed portion containing benign text intended to deceive AI/ML algorithms by mimicking “known good” communication.

The goal is to convince the controls that the message is a normal exchange, with attackers betting humans won’t scroll down four blank pages to the bottom to see the unrelated fake conversation meant for AI/ML’s eyes only.

In this way, the assailants can trick systems into categorizing the entire email and any subsequent replies as safe, thus allowing the attack to reach users’ inboxes.

Once these attacks bypass security measures, cybercriminals can then use the same email conversation to deliver authentic-looking messages requesting that executives reauthenticate passwords and logins, facilitating credential theft.

Exploiting “Known Good” Anomaly Detection in ML Tools

Stephen Kowski, field CTO for SlashNext, says the emergence of “Conversation Overflow” attacks underscores cybercriminals’ adaptability in circumventing advanced security measures, particularly in the era of AI security.

“I’ve seen this attack style only once before in early 2023, but I’m now seeing it more often and in different environments,” he explains. “When I find these, they are targeting upper management and executives.”

He points out that phishing is a business, so attackers want to be efficient with their own time and resources, targeting accounts with the most access or most implied authority possible.

Kowski says this attack vector should be seen as more dangerous than the average phishing attempt because it exploits weak points in new, highly effective technologies that companies might not be aware of. That leaves a gap that cybercriminals can rush to take advantage of before IT departments catch on.

“In effect, these attackers are doing their own penetration tests on organizations all the time for their own purposes to see what will and won’t work reliably,” he says. “Look at the massive spike in QR code phishing six months ago — they found a weak point in many tools and tried to exploit it fast everywhere.”

And indeed, use of QR codes to deliver malicious payloads jumped in Q4 2023, especially against executives, who saw 42 times more QR code phishing than the average employee.

The emergence of such tactics suggests constant vigilance is needed — and Kowski points out no technology is perfect, and there is no finish line.

“When this threat is well understood and mitigated all the time, malicious actors will focus on a different method,” he says.

Using AI to Fight AI Threats

Kowski advises security teams to respond by actively running their own evaluations and testing with tools to find “unknown unknowns” in their environments.

“They can’t assume their vendor or tool of choice, while effective at the time they acquired it, will remain effective in time,” he cautions. “We expect attackers to continue to be attackers, to innovate, pivot, and shift their tactics.”

He adds that attack techniques are likely to become more creative, and as email becomes more secure, attackers are already shifting their attacks to new environments, including SMS or Teams chat.

Kowski says investment in cybersecurity solutions leveraging ML and AI will be required to combat AI-powered threats, explaining that the volume of attacks is too high and ever-increasing.

“The economics of the security world necessarily require investment into platforms that allow relatively expensive [human] resources to do more with less,” he says. “We rarely hear from security teams that they are getting a bunch of new people to address these growing concerns.”

This post was originally published on the third-party site mentioned in the title.
