AI tools such as ChatGPT are generating a mammoth increase in malicious phishing emails

Key Points
  • Since the fourth quarter of 2022, there's been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular, according to a new report by cybersecurity firm SlashNext.
  • Cybercriminals are using generative artificial intelligence tools such as ChatGPT to help write sophisticated, targeted business email compromise (BEC) and other phishing messages.
  • The report findings highlight just how rapidly AI-based threats are growing, especially in their speed, volume and sophistication.
The ChatGPT chat screen on a laptop computer and logo on a smartphone arranged in the Brooklyn borough of New York, US, on Thursday, March 9, 2023. (Gabby Jones | Bloomberg | Getty Images)

Here's one group that's leveraging generative artificial intelligence tools successfully: cybercriminals.

Since the fourth quarter of 2022, there's been a 1,265% increase in malicious phishing emails, and a 967% rise in credential phishing in particular, according to a new report by cybersecurity firm SlashNext.

The report, based on the company's threat intelligence and a survey of more than 300 North American cybersecurity professionals, notes that cybercriminals are leveraging generative artificial intelligence tools such as ChatGPT to help write sophisticated, targeted business email compromise (BEC) and other phishing messages.

An average of 31,000 phishing attacks were sent per day, according to the research. Nearly half of the cybersecurity professionals surveyed reported receiving a BEC attack, and 77% said they had been targets of phishing attacks.

"These findings solidify the concerns over the use of generative AI contributing to an exponential growth of phishing," said Patrick Harr, CEO of SlashNext. "AI technology enables threat actors to increase the speed and variation of their attacks by modifying code in malware or creating thousands of variations of social engineering attacks to increase the probability of success."

The report findings highlight just how rapidly AI-based threats are growing, especially in their speed, volume and sophistication, Harr said.

"It is not a coincidence that the launch of ChatGPT at the end of last year coincides with the timeframe in which we saw exponential growth of malicious phishing emails," Harr said. "Generative AI chatbots have significantly lowered the bar for entry for novice bad actors and have provided more skilled and experienced attackers with the tools to launch targeted, spear-phishing attacks at scale."

Billions of dollars in losses

Another reason for the sharp increase in phishing attacks is that they are working, Harr said. He cited the FBI's Internet Crime Report, which found that BEC alone accounted for about $2.7 billion in losses in 2022, with another $52 million in losses from other types of phishing.

"With rewards like this, cybercriminals are increasingly doubling down on phishing and BEC attempts," Harr said.

While there has been some debate about the true influence of generative AI on cybercriminal activity, "we know from our research that threat actors are leveraging tools like ChatGPT to deliver fast moving cyber threats and to help write sophisticated, targeted [BEC] and other phishing messages," Harr said. 

For example, in July, SlashNext researchers discovered a BEC attack that used ChatGPT and a cybercrime tool called WormGPT, "which presents itself as a black hat alternative to GPT models, designed specifically for malicious activities such as creating and launching BEC attacks," Harr said.

After the emergence of WormGPT, reports started circulating about another malicious chatbot called FraudGPT, Harr said. "This bot was marketed as an 'exclusive' tool tailored for fraudsters, hackers, spammers, and similar individuals, boasting an extensive list of features," he said.

Another grave development that SlashNext researchers discovered involves the threat of AI "jailbreaks," in which hackers cleverly remove the guardrails governing the legitimate use of generative AI chatbots. In this way, attackers can turn tools such as ChatGPT into weapons that trick victims into giving away personal data or login credentials, which can lead to further damaging incursions.

"Cyber criminals are leveraging generative AI tools like ChatGPT and other natural language processing models to generate more convincing phishing messages," including BEC attacks, said Chris Steffen, research director at analyst and consulting firm Enterprise Management Associates.

"Gone are the days of the 'Prince of Nigeria' emails that presented broken, nearly unreadable English to try to convince would-be victims to send their life savings," Steffen said. "Instead, the emails are extremely convincing and legitimate sounding, often mimicking the styles of those that the bad guys are impersonating, or in the same vein as official correspondence from trusted sources," such as government agencies and financial services providers.

"They can use AI to analyze past writings and other publicly available information to make their emails extremely convincing," Steffen said.

For example, a cybercriminal might use AI to generate an email to a specific employee, posing as the individual's boss or supervisor and referencing a company event or a relevant personal detail, making the email seem authentic and trustworthy.

Cybersecurity leaders can take a number of steps to counteract and respond to the increased attacks, Steffen said. For one, they can provide continuous end-user education and training.

"Cybersecurity professionals need to make [users] constantly aware of this threat; a simple one-time reminder is not going to accomplish this goal," Steffen said. "They need to be building on these trainings and establish a security awareness culture within their environment, one where the end users view security as a business priority, and feel comfortable reporting suspicious emails and security related activities."

Another good practice is to implement email filtering tools that use machine learning and AI to detect and block phishing emails. "These solutions need to be constantly updated and tuned to protect against constantly evolving threats and updates to AI technologies," Steffen said.
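
As a rough illustration of the kind of machine learning-based filtering Steffen describes, the Python sketch below trains a toy text classifier to score incoming messages. It is a minimal sketch only: the choice of scikit-learn, the handful of example emails, and the 0.5 quarantine threshold are illustrative assumptions, not any vendor's actual implementation, and a production filter would train on large labeled corpora and weigh many more signals, such as headers, URLs and sender reputation.

    # Minimal sketch of an ML-based phishing filter (illustrative only).
    # Assumes scikit-learn is installed; real filters train on large labeled
    # corpora and combine header, URL and sender-reputation signals.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    # Toy labeled examples (1 = phishing, 0 = legitimate).
    emails = [
        "Urgent: your account is suspended, verify your password here",
        "Wire transfer needed today, reply with bank details, keep confidential",
        "Attached is the agenda for Thursday's project review meeting",
        "Your October invoice is attached, thanks for your business",
    ]
    labels = [1, 1, 0, 0]

    # TF-IDF text features feeding a logistic-regression classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
    model.fit(emails, labels)

    # Score an incoming message and quarantine it above a chosen threshold.
    incoming = "Please verify your password immediately or lose account access"
    score = model.predict_proba([incoming])[0][1]
    print("Quarantine" if score > 0.5 else "Deliver", f"(phishing score {score:.2f})")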

Organizations also need to conduct regular testing and security audits of systems that can be exploited. "They need to test to identify vulnerabilities and weaknesses in the organization's defenses — as well as with employee training — while addressing known issues promptly to reduce the attack surface," Steffen said.

Finally, companies need to implement or enhance their existing security infrastructure as needed. "No solution is likely to catch all AI-generated email attacks, so cybersecurity professionals need to have layered defenses and compensating controls to overcome initial breaches," Steffen said. "Adopting a zero trust strategy [can] mitigate many of these control gaps, and offer defense-in-depth for most organizations."

VIDEO: AI and Cybersecurity (25:49)