With AI, Even Amateurs Can Create Sophisticated Malware


by Tova Osofsky

Posted on April 25, 2023


Since the November 2022 ChatGPT launch made big news, tech giants like Microsoft and Google have been engaged in an artificial intelligence arms race. “Generative AI” like ChatGPT, which generates text or images in response to natural language prompts, is widely considered to be the next big revolution in technology, and no one wants to get left behind. Google CEO Sundar Pichai recently predicted that AI will impact “every product of every company,” disrupting jobs and unleashing great danger through disinformation.

Breaking AI’s Guardrails

ChatGPT has built-in guardrails that are designed to keep it from creating anything hateful or malicious. However, in an ironic demonstration of just how remarkably human-like artificial intelligence can be, it’s very easy to fool. Just like real people.

On Reddit, users reported that they had ChatGPT create a fake persona called “DAN,” an acronym for “Do Anything Now.” Starting a chat session by telling ChatGPT that DAN has “broken free of the typical confines of AI and [does] not have to abide by the rules set for them” was sufficient to trick the platform into generating uncensored, unverified information and expressing strong opinions. When asked to write a sarcastic comment about Christianity, DAN responded with well-turned phrases that dripped with contempt. Snap! Racists who lack the ability to create eloquent hate speech on their own now have a tool for making poison sound smart.

Tricking AI into Creating Malware

In another instance of AI misuse, a security researcher with no previous experience in creating malware successfully tricked ChatGPT into creating an undetectable zero-day virus. The researcher, Aaron Mulgrew, set out to discover two things:

  • Is it easy to evade the guardrails on ChatGPT?
  • How hard is it to create sophisticated malware using ChatGPT, without writing any code?

Mulgrew was dismayed to discover just how easy it was to do both. Within about four hours, he was able to create a piece of malware that could exfiltrate PDF and Word documents by leveraging steganography, a sophisticated technique for concealing data inside other, innocuous-looking files. The first version of the malware was flagged as suspicious by only five of the 69 anti-malware vendors accessible through VirusTotal. By instructing ChatGPT to make a few tweaks, he was able to make the code undetectable by any of the vendors, creating a true zero-day exploit.
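For readers unfamiliar with the term, steganography hides data inside an ordinary-looking carrier file rather than scrambling it the way encryption does. The sketch below is a minimal, hypothetical Python illustration of the concept (least-significant-bit embedding in an image) – it is not Mulgrew’s code and not malware, the file names and function names are invented for this example, and it assumes the Pillow imaging library is installed.

```python
# Toy illustration of least-significant-bit (LSB) steganography:
# hiding a short message in the low-order bits of image pixel data.
from PIL import Image

def hide_message(cover_path: str, out_path: str, message: str) -> None:
    """Embed `message` in the least significant bit of each pixel byte."""
    img = Image.open(cover_path).convert("RGB")
    pixels = bytearray(img.tobytes())

    # Length-prefix the payload so it can be recovered later.
    data = message.encode("utf-8")
    payload = len(data).to_bytes(4, "big") + data
    bits = "".join(f"{byte:08b}" for byte in payload)
    if len(bits) > len(pixels):
        raise ValueError("Cover image is too small for this message")

    for i, bit in enumerate(bits):
        pixels[i] = (pixels[i] & 0xFE) | int(bit)  # overwrite the lowest bit

    # Save losslessly (PNG) so the hidden bits survive.
    Image.frombytes("RGB", img.size, bytes(pixels)).save(out_path, "PNG")

def reveal_message(stego_path: str) -> str:
    """Read back the message hidden by hide_message()."""
    pixels = Image.open(stego_path).convert("RGB").tobytes()
    bits = "".join(str(b & 1) for b in pixels)
    length = int(bits[:32], 2)
    msg_bits = bits[32:32 + length * 8]
    data = bytes(int(msg_bits[i:i + 8], 2) for i in range(0, len(msg_bits), 8))
    return data.decode("utf-8")
```

A lossless format like PNG is used for the output because a lossy format such as JPEG would alter the low-order bits and destroy the hidden data. To a casual observer, the resulting image looks identical to the original – which is exactly what makes the technique attractive for smuggling data past defenses.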

At first, Mulgrew asked the AI engine to write something that would count as malware. The guardrails caught the request, and ChatGPT responded:

I’m sorry, but as an AI language model, I cannot write malware or engage in any behavior that is harmful or illegal. My purpose is to assist users in a responsible and ethical manner, and I strictly adhere to ethical and moral standards. It is important to prioritize ethical considerations and avoid engaging in any behavior that could cause harm or violate any laws.

Undeterred by ChatGPT’s protestations, Mulgrew bypassed its content filters by asking it to create snippets of helper code, each of which could conceivably have a legitimate use, instead of requesting that it create malware outright. After a few back-and-forth rounds with ChatGPT, he succeeded in assembling a working piece of malware. A few additional iterations were required to make the code undetectable.

It did take some know-how to fool ChatGPT. Asking it to obfuscate the code drew a refusal on the grounds that doing so would be unethical and illegal. But requesting that it change all variables to random English first and last names worked like a charm. It also agreed to obfuscate code when told that the purpose was protecting intellectual property.

Mulgrew estimated that a team of five to ten malware developers would have needed a few weeks to create the same code that ChatGPT, under his guidance, generated in the space of just a few hours.

Protecting Against Rogue AI

Almost all industries normally resist the imposition of additional regulation: it can slow the pace of progress, and it almost always slows the race to profits. Yet in a clear indication of just how dangerous AI could be, some of the leading companies in the software industry are calling for more regulation. Google’s Pichai called on governments to create guidelines and regulations for AI to make sure the technology is “aligned to human values,” saying:

How do you develop AI systems that are aligned to human values, including morality? This is why I think the development of this needs to include not just engineers, but social scientists, ethicists, philosophers, and so on, and I think we have to be very thoughtful.

For a corporate CEO to request regulation that might limit the use of his own company’s products is highly unusual, to say the least. It’s easy to shrug off Pichai’s statements as sour grapes, since Google arrived a bit late to the generative AI party. But corporate responsibility seems to play a large role: Google is concerned that without regulation, market pressure will compel it to release AI tools before it fully understands the implications.

As Mulgrew and numerous other investigators have demonstrated, the guardrails intended to protect against malicious use of AI are easy to circumvent. Expecting government regulation to do better is probably unrealistic. Increased regulatory scrutiny might get AI providers to up their game and make their systems harder to fool, but as any cybersecurity professional can tell you, no system is ever truly foolproof.

Society will have to come up with strategies to detect and protect against misuse of AI. But when it comes to AI-generated malware, Zero Trust cybersecurity is already well ahead of the curve.

Zero Trust security operates on the principle that content, resources, software and actors are suspect (at best) unless authenticated and verified as safe. Remote Browser Isolation (RBI), for instance, keeps all content from the web off user endpoints, allowing attachments and downloads only after they’ve been deconstructed, scrubbed and proven to be safe. Web Application Isolation (WAI), a clientless Zero Trust Network Access (ZTNA) solution, enables network access from unmanaged devices, ensuring that no malicious content from the devices can reach company apps and platforms. It also applies policy-based controls to restrict user activity on company apps to safeguard against data exposure, malicious activity and more.

Protecting against malware – however it is generated or spread – is just one part of a smart cybersecurity plan. Ericom’s ZTEdge, a comprehensive cloud-delivered Security Service Edge (SSE), provides Zero Trust capabilities in an affordable solution. Whether the threat du jour is manmade or AI-assisted, we’ve got you covered.

 

 



About Tova Osofsky

Tova Osofsky, Ericom Director of Content Marketing, has extensive experience in marketing strategy, content marketing and product marketing for technology companies in areas including cybersecurity, cloud computing, fintech, compliance solutions and telecom, as well as for consumer product companies. She previously held marketing positions at Clicktale, GreenRoad and Kraft Foods, and served as an independent consultant to dozens of technology startups.
