Implications of ChatGPT for Cybercrime

Abhishek Rai

October 11, 2024

We have seen and heard a lot recently about how ChatGPT is going to revolutionize the cybercrime landscape, but it can be hard to distinguish the facts from the fiction. In this blog I am going to share some of my thoughts on this hot topic and try to analyze whether these claims are really true.

AI can help anyone develop advanced malware

This is one of the claims that seems to be everywhere. We have all read multiple posts claiming that ChatGPT can help anybody program advanced malware.

The first problem with this claim is that ChatGPT is simply not good at coding. If you ask it to generate a Python script for a web page, it can probably do that. If you ask it to generate a file encryptor, it can probably do that too. But when it comes to building any kind of complex code, it falls short: the more parameters you add, the more confused it gets.

While you can sometimes get ChatGPT to generate a very basic example of an individual malware component, it's far from capable of building a fully functional piece of malware. The second you start trying to assemble multiple components, it loses track of what it's doing and fails. In fact, even if ChatGPT did work well with code, the character/token limit would prevent inputting enough data to generate anything beyond what you could already find on Google.

Something also worth noting is that ChatGPT generates different responses to the same prompts. I think this is due to the fact that Large Language Models are statistical models that work on the probability of one token following the next. So when using ChatGPT to generate code, it will generate different code each time we ask. This makes it nearly impossible to generate, debug, and assemble multiple pieces of code, as the sketch below illustrates.
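To make that concrete, here is a minimal sketch of sampling-based decoding. The token names and probabilities are invented purely for illustration; in a real LLM they come from the model itself.

```python
import random

# Toy next-token distribution for some prompt. These values are
# made up for illustration; a real LLM computes them internally.
next_token_probs = {
    "file": 0.40,
    "data": 0.25,
    "string": 0.20,
    "buffer": 0.15,
}

def sample_next_token(probs):
    """Pick one token according to its probability mass, the way an
    LLM decodes with a non-zero temperature."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Asking "the same question" five times can yield five different
# continuations, which is why identical prompts produce different code.
print([sample_next_token(next_token_probs) for _ in range(5)])
```

Run it a few times and the output changes, even though nothing about the "prompt" did. That is exactly the behavior that makes iterating on generated code so painful.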

Another myth concerns using ChatGPT to bypass AV systems by generating code for polymorphic malware. Modern security products don't rely on code-signature-based detection the way they did back when polymorphism was a real problem. Nowadays, anti-malware systems use multiple technologies such as behavioral detection, emulation, and sandboxing, none of which are vulnerable to polymorphism.
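A toy illustration of why that is. The "payloads" below are harmless print statements used only to show the principle: a signature scanner matches bytes, so any byte-level mutation defeats it, while the observable behavior stays the same.

```python
import hashlib

# Two "polymorphic variants": functionally identical payloads whose
# bytes differ only by meaningless padding.
variant_a = b"print('payload')"
variant_b = b"print('payload')  # junk padding"

# A signature database that only knows variant A's hash...
signature_db = {hashlib.sha256(variant_a).hexdigest()}

for name, variant in (("A", variant_a), ("B", variant_b)):
    digest = hashlib.sha256(variant).hexdigest()
    verdict = "flagged" if digest in signature_db else "missed"
    print(name, digest[:16], verdict)

# ...misses variant B entirely. A behavioral engine, by contrast,
# runs or emulates the sample and observes the same action either
# way, so rewriting the bytes buys the attacker nothing.
```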

Using ChatGPT to create advanced phishing emails

I've seen two main claims about how ChatGPT could enhance phishing. The first is that it would enable non-English-speaking cybercriminals to write phishing emails in perfect English. The second is that it could enable people unfamiliar with phishing to easily write phishing emails.

Back in 2016, Google quietly released a sophisticated AI service that allows cybercriminals to translate phishing emails to and from any language: Google Translate. It was explicitly designed for language translation, and while ChatGPT can sort of do it too, it's not particularly good at it. It's also somewhat unclear why someone would want to ask ChatGPT to, say, write a phishing email in a language they don't speak, having no idea what it says, when they could simply write the exact email they want in their native language and have it translated.

Using ChatGPT simply does not make much sense for translation, which wasn't something that needed solving anyway. ChatGPT itself is actually extremely useful for many tasks, but the headlines have been plagued by cybersecurity marketing teams trying to get in on the hype by inventing problems for the AI to solve.

Evidence of ChatGPT use in cybercrime

In several cases I've seen links to posts on hacking forums cited as proof that the predictions were true and that ChatGPT actually is being used by cybercriminals. This, however, is simply evidence of circular reporting. If I claim that I have hidden 10 lakh rupees in a park, it can be expected that a lot of people will go looking for that money in parks. Nobody is going to find it, because it does not exist, but I could certainly point to forum posts discussing the search as evidence that the money is real. The same is true for ChatGPT.

The cybersecurity industry has spent months portraying ChatGPT as a game-changing tool for hackers, so it's not surprising that cybercriminals have also taken an interest. However, all the examples I've encountered fall into one of three categories: those capitalizing on the hype by selling services that provide access to ChatGPT; experienced coders using ChatGPT to create projects and sharing them online to attract attention; and inexperienced individuals posting non-functional code snippets and seeking help when their attempts fail.

In most cases, the examples are in Python or PHP, languages which are not native to Windows and are therefore rarely used for malware due to impracticality. This is likely because ChatGPT struggles with natively compiled languages such as C and C++, but does slightly better with scripting languages due to the abundance of examples online.

ChatGPT Filtering

Another thing often not mentioned is that ChatGPT attempts to filter out and refuse malicious requests. Whilst you can get around the filters, it's time-consuming. In most cases, I was able to find the same example on Google in less time than it took to get ChatGPT to produce it.

OpenAI is undoubtedly going to keep adding more safeguards to prevent the misuse of ChatGPT for harmful activities. Currently, the platform allows free queries, has minimal filtering, and offers open access to all users. Yet, even with these conditions, ChatGPT remains largely ineffective for individuals who lack the fundamental skills necessary for conducting cybercrime. While many argue that ChatGPT's abilities will continue to improve (a point I agree with), these improvements will be accompanied by stricter filtering. This will raise the difficulty far beyond what would benefit the so-called minimally skilled hackers that some believe it enables.

Conclusion

While many of the widely discussed ways that ChatGPT might be used for cybercrime are far-fetched, there are legitimate risks that could emerge. As a Large Language Model (LLM), ChatGPT might be valuable for streamlining operations within more advanced, large-scale organizations that rely heavily on natural language processing. In theory, LLMs could automate certain aspects of these activities, provided that access to the AI remains more cost-effective than hiring workers in developing countries. Regardless, it will be interesting to observe how the threat intelligence industry evolves to detect and counter potential misuse of AI.