How GenAI is Supercharging Zero-Day Cyberattacks

by Peter Silva

Posted on September 19, 2023

Ever since ChatGPT was launched in November 2022, it’s been hard to find anyone who isn’t using it. From writing emails to generating code, or even developing creative marketing campaigns, GenAI has been sprinkling productivity magic across industries and roles.

But nice guys aren’t the only ones looking for ways to become more productive. Bad actors are into productivity, too. Of course, GenAI isn’t a free-for-all – mainstream GenAI platforms like ChatGPT and Google’s Bard have guardrails in place to discourage criminals from using them for malicious purposes, such as creating malware. Those guardrails aren’t foolproof, and some criminals, especially experienced ones, are able to circumvent them. But bad actors do have to work to get around the protections, which reduces the productivity advantage of mainstream GenAI when it’s used for malicious purposes.

However, the risks of generative AI grew exponentially in the summer of 2023, when two guardrail-free GenAI platforms explicitly built for malicious purposes were introduced – WormGPT and FraudGPT. FraudGPT’s creator, who uses the pseudonym “CanadianKingpin12,” boasts that the platform was created exclusively to support bad actors, including hackers and spammers. The risk is no longer limited to threat actors abusing legitimate GenAI platforms and evading their protections; it now includes malicious GenAI tools purpose-built to facilitate criminal activity. And these may be only the tip of the iceberg: the now-infamous CanadianKingpin12 has boasted that they are working on two additional malicious GenAI tools, called “DarkBART” and “DarkBERT.”

The plot thickens. CanadianKingpin12 didn’t actually create DarkBERT. DarkBERT is a large language model (LLM) originally created to fight cybercrime, based on research conducted by S2W, a South Korean cybersecurity company, and the Korea Advanced Institute of Science and Technology (KAIST). The idea behind the project was to use GenAI to stay a step ahead of the bad guys by penetrating the dark web – a space that general web search engines don’t usually reach – where cybercriminals share the tactics and jargon they use when conducting fraud. The researchers trained DarkBERT on the language cybercriminals use on the dark web, so it could be used to prevent credential theft and protect against phishing attacks, along with similar white-hat activities.

All good, right? Not exactly. CanadianKingpin12 claims to have gained access to DarkBERT, which would theoretically allow them to turn the South Korean team’s well-intentioned work toward exploiting vulnerabilities in computer systems, including critical infrastructure, and creating and distributing malware and ransomware. Even more concerning, CanadianKingpin12 claims that their version of DarkBERT and its counterpart DarkBART are integrated with Google Lens, which means they can use both images and text in exploits.

The new frontier of malicious GenAI

So, how could these platforms be used? Just like ChatGPT can help people who are mediocre writers become great writers, DarkBERT and its parallels could help mediocre cybercriminals become great ones, essentially lowering the bar for aspiring threat actors.

Let’s look at phishing, for example. In most cases, phishing campaigns include clues – things that seem slightly off, like spelling mistakes or awkward phrasing. If the recipient has gone through anti-phishing training or some type of phishing prevention program, these clues can tip them off – if they are alert – to the fact that they may be facing a scam.
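
To see how thin that line of defense is, consider a toy filter of the kind a naive mail gateway (or a trained user’s mental checklist) might apply. This is a hypothetical sketch – the misspelling and urgency lists below are illustrative assumptions, not any real product’s rule set:

```python
import re

# Toy phishing filter: flags the surface-level "clues" that anti-phishing
# training teaches users to spot. Real secure email gateways use far richer
# signals (headers, sender reputation, URL analysis); this sketch looks
# only at the message text.

COMMON_MISSPELLINGS = {"recieve", "acount", "verfication", "pasword"}
URGENCY_PHRASES = ["act now", "account suspended", "verify immediately"]

def looks_like_phishing(body: str) -> bool:
    words = set(re.findall(r"[a-z]+", body.lower()))
    if words & COMMON_MISSPELLINGS:
        return True  # spelling mistakes: the classic tell
    return any(phrase in body.lower() for phrase in URGENCY_PHRASES)

print(looks_like_phishing("Please update your acount pasword, act now!"))         # True
print(looks_like_phishing("Hi Dana, the Q3 invoice you requested is attached."))  # False
```

An LLM-generated lure reads like the second message: fluent, specific, and free of every clue the filter knows about.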

But black hat GenAI systems like DarkBERT can learn from those user behaviors. Fed enough examples of successful and unsuccessful phishing emails, they can build an understanding of how to appear completely legitimate to their victims. Within a short time, they will be able to hone phishing emails until they are truly indistinguishable from safe ones, even for the most aware and savvy users. Anti-phishing training is never a perfect solution – even with training, users click the links in 3-5% of phishing emails. GenAI platforms exacerbate the problem because they learn faster than any human can. They will always be one step ahead, rendering any anti-phishing training irrelevant.

Chatbots are a target as well. They have become increasingly popular because they can handle basic customer support, freeing up human support agents to focus on advanced issues. Yet like any digital tool, they can be turned to malicious ends – trained to run phishing and social engineering campaigns using unrestricted data sets sourced from the dark web, or built on LLMs like DarkBERT.

Cybersecurity experts predict that GenAI can and will be used to develop highly sophisticated phishing and social engineering campaigns to steal personal information like passwords and credit card details. It can be used to create both the emails/text messages sent to the recipient and the landing pages they lead to. GenAI can even be used in business email compromise (BEC) to gain access to systems and networks. FraudGPT, for instance, seems to excel at corporate social engineering, such as creating emails and landing pages that can be used in a corporate phishing attack.

And it’s not only phishing and social engineering – GenAI is also very good at coding, and that includes creating malware. Traditional cybersecurity and anti-phishing software identifies malware by its signature – specific patterns in its code or instructions. Once a signature has been identified, security tools can block malware that matches it, and cybersecurity teams can patch the vulnerability it exploits.
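
As a rough sketch of how that kind of detection works – the hash and byte pattern below are placeholders, not real threat intelligence – a signature engine does something like this:

```python
import hashlib

# Minimal signature-based detector: flag a file if its hash or any byte
# pattern matches a known-bad list. Real engines ship millions of
# signatures plus heuristics; a zero-day sample, by definition, matches
# nothing in lists like these.

KNOWN_BAD_HASHES = {"0" * 64}                  # placeholder SHA-256 entry
KNOWN_BAD_PATTERNS = [b"EVIL_PAYLOAD_MARKER"]  # placeholder byte signature

def is_known_malware(path: str) -> bool:
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES:
        return True
    return any(pattern in data for pattern in KNOWN_BAD_PATTERNS)
```

The weakness is structural: anything not already on the list sails through.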

To evade these mechanisms, hackers are constantly designing malware that exploits previously unknown vulnerabilities – zero-day exploits – with signatures that detection-based tools have never seen and therefore can’t stop. It takes time for cybersecurity professionals to find the exploits and patch them, and even when patches exist, not everyone implements them right away. Hackers take advantage of the lag to wreak havoc. Once the patches are out and implemented, they move on to the next exploit.

The problem (for hackers, at least) is that creating zero-day exploits takes time and energy, and their resources are limited. Enter GenAI. Hackers and cybercriminals can manipulate GenAI to create endless, innovative, and dangerous types of malware with undetectable signatures – in other words, an endless stream of zero-day exploits. As black hat GenAI models get better at finding vulnerabilities in systems and software, they could theoretically release hundreds or even thousands of exploits each day. Traditional identification and patching systems couldn’t keep up before GenAI, and there is simply no way they can match the pace of GenAI-driven attacks.

None of the above scenarios is hypothetical. Tools like WormGPT, FraudGPT, and DarkBERT/DarkBART are already being advertised on the dark web and on encrypted channels like Telegram. The number of use cases is growing as well. With an ever-growing number of GenAI solutions now available, new exploits are only a matter of time.

Future-proof protection

When it comes to AI-generated malware, most of the strategies cybersecurity solutions rely upon are no longer effective. With one exception, that is – Zero Trust. The Zero Trust security approach assumes that any software, content, or actor could be a threat – not only malware with signatures that were previously identified as malicious. The problem is that although there are many Zero Trust solutions on the market, most don’t truly adhere to Zero Trust principles when it comes to internet use.

The true Zero Trust approach for securing internet and email access is remote browser isolation (RBI). When websites are opened using RBI, no active content reaches the endpoint browser, regardless of whether it’s safe or not. There’s no need to identify specific malware signatures, pick up on sophisticated phishing tactics, or differentiate between dangerous content that shouldn’t be downloaded and safe content that can be, because everything is isolated from the get-go.
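
To get a loose feel for the principle – this is emphatically not how Ericom’s RBI is implemented, since real RBI executes the page in a remote, disposable container and streams only safe rendering data to the endpoint – here is a toy sketch that strips active content (scripts, embeds, inline event handlers) from HTML so none of it could run locally:

```python
from html.parser import HTMLParser

# Toy illustration of "no active content reaches the endpoint."
# Real RBI renders pages remotely; nothing here reflects an actual product.

ACTIVE_TAGS = {"script", "iframe", "object", "embed"}

class ActiveContentStripper(HTMLParser):
    def __init__(self):
        super().__init__()
        self.out = []
        self._skip = 0  # depth inside an active-content element

    def handle_starttag(self, tag, attrs):
        if tag in ACTIVE_TAGS:
            self._skip += 1
        elif self._skip == 0:
            # drop inline event handlers such as onclick / onload
            safe = [(k, v) for k, v in attrs if not k.startswith("on")]
            rendered = "".join(
                f' {k}="{v}"' if v is not None else f" {k}" for k, v in safe
            )
            self.out.append(f"<{tag}{rendered}>")

    def handle_endtag(self, tag):
        if tag in ACTIVE_TAGS:
            self._skip = max(0, self._skip - 1)
        elif self._skip == 0:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if self._skip == 0:
            self.out.append(data)

s = ActiveContentStripper()
s.feed('<p onclick="steal()">Hello</p><script>alert(1)</script>')
print("".join(s.out))  # <p>Hello</p>
```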

That protection is critical because the vast majority of zero-day attacks – whether generated by GenAI or by humans – are delivered via the web or email. For example, if the link in a phishing email opens a login form on a spoofed or unrecognized site – such as a Google or Microsoft login lookalike – RBI-based Ericom Web Security opens it in read-only mode so users can’t enter their credentials. This spoofing protection guards against the threat of credential theft.

In addition, the Ericom Web Security solution deconstructs all web attachments and downloads, scrubs them of malware in a process known as Content Disarm and Reconstruction (CDR), and then reconstructs the documents with functionality intact before delivering them to user devices.
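
As a simplified illustration of the disarm step – not Ericom’s actual CDR pipeline, which covers many more file types and fully reconstructs documents – consider macro-enabled Office files. They are ZIP archives whose executable macro payload lives in a vbaProject.bin entry, which can be dropped while everything else is rebuilt intact:

```python
import zipfile

# Toy CDR sketch: rebuild a macro-enabled Office document without its VBA
# macro payload. Production CDR also handles PDF JavaScript, embedded OLE
# objects, content-type fixups, and full document reconstruction.

def disarm_office_file(src: str, dst: str) -> None:
    with zipfile.ZipFile(src) as zin, zipfile.ZipFile(dst, "w") as zout:
        for item in zin.infolist():
            if item.filename.endswith("vbaProject.bin"):
                continue  # strip the executable macro payload
            zout.writestr(item, zin.read(item.filename))

# Usage (hypothetical file names):
# disarm_office_file("invoice.docm", "invoice_disarmed.docm")
```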

The bottom line is that the risks of generative AI misuse by threat actors simply cannot be addressed by standard malware prevention, ransomware protection, or anti-phishing tools, not even the most advanced ones on the market. Ericom’s comprehensive Web Security solution offers effective RBI protection, delivering Zero Trust capabilities in an affordable cloud-based solution. It lets you use the internet safely, even in the uncharted new world of malicious GenAI.

About Peter Silva

Peter Silva is the Sr. Product Marketing Manager at Ericom, the Cybersecurity Unit of Cradlepoint. After a decade working in the professional theater, he became one of the first 6 Internet Specialists at AT&T, focusing on access and security. Over the years, he’s been recognized for his stellar record of captivating audiences with engaging presentations and high-quality content. Along with AT&T, he’s held positions at Verio, Exodus, Pacific Wireless Corp and F5.
