Threat Analysis Unit

ChatGPT – Revolutionary AI or handy tool in the hands of cybercriminals? 

ChatGPT (https://openai.com/blog/chatgpt/) is a variant of the GPT (Generative Pre-trained Transformer) language model that is specifically designed for chatbots. It has been trained on a large dataset of human-human conversations, allowing it to generate human-like responses when given a prompt. 

To generate a response, ChatGPT takes in a prompt (e.g., a message from a user) and processes it through its neural network. The model has been trained to predict the next word in a conversation, so it uses the prompt to predict the most likely response. It then generates a response by sampling from the predicted distribution of words. 
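
To make the sampling step concrete, the following is a minimal, purely illustrative Python sketch of autoregressive generation. The toy vocabulary, the stub "model," and the function names are all invented for this example and bear no relation to ChatGPT's actual implementation:

import numpy as np

# Toy illustration of autoregressive sampling. A real model assigns a
# probability to every token in a vocabulary of tens of thousands of
# entries at each step; here the "model" is a stub that returns random
# probabilities over a seven-word vocabulary.

rng = np.random.default_rng(0)
VOCAB = ["hello", "there", "how", "are", "you", "?", "<eos>"]

def next_token_distribution(context):
    """Stand-in for the language model: maps the tokens so far to a
    probability distribution over the vocabulary."""
    logits = rng.normal(size=len(VOCAB))            # a real model computes these
    return np.exp(logits) / np.exp(logits).sum()    # softmax -> probabilities

def generate(prompt, max_tokens=10):
    tokens = prompt.split()
    for _ in range(max_tokens):
        probs = next_token_distribution(tokens)
        token = rng.choice(VOCAB, p=probs)          # sample, don't just take the max
        if token == "<eos>":                        # end-of-sequence marker
            break
        tokens.append(str(token))
    return " ".join(tokens)

print(generate("hello"))

The key point is the loop: the model is only ever asked for the next token, and a full response emerges from repeating that single step.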

One key aspect of ChatGPT is that it has been trained using a technique called “transformer architecture,” which allows it to effectively process long-term dependencies in language. This means that it can take into account the context of a conversation and generate more coherent responses. 
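
For readers curious about what this means mechanically, below is a minimal NumPy sketch of scaled dot-product self-attention, the building block of the transformer architecture. The dimensions are arbitrary and the random matrices stand in for learned parameters:

import numpy as np

# Minimal sketch of scaled dot-product self-attention. Each position
# computes weights over every other position, which is what lets the
# model use context from anywhere in the input rather than only from
# nearby words.

rng = np.random.default_rng(1)

seq_len, d_model = 5, 8                      # 5 tokens, 8-dimensional embeddings
x = rng.normal(size=(seq_len, d_model))      # token embeddings

W_q = rng.normal(size=(d_model, d_model))    # query projection
W_k = rng.normal(size=(d_model, d_model))    # key projection
W_v = rng.normal(size=(d_model, d_model))    # value projection

Q, K, V = x @ W_q, x @ W_k, x @ W_v

scores = Q @ K.T / np.sqrt(d_model)          # pairwise similarity between positions
weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)  # row softmax
output = weights @ V                         # context-weighted mixture of values

print(weights.round(2))                      # each row sums to 1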

Another important feature of ChatGPT is that it has been pre-trained on a large dataset, which means that it has already learned a lot about the structure and patterns of language. This allows it to generate more realistic and human-like responses, as it has already learned many of the nuances of language use. 

Overall, ChatGPT is a powerful tool for generating human-like responses in a chatbot, and it has the potential to greatly improve the usability and effectiveness of chatbots. 

There has been a substantial amount of attention given to the capabilities of ChatGPT in various contexts, from finding vulnerabilities, to reverse-engineering shellcode, to writing essays. 

However, like all language models, ChatGPT has limitations in what it can do. Some of the things that ChatGPT cannot do include: 

  • Understand the context or meaning of a conversation: ChatGPT is a machine learning model that is trained to predict the next word in a conversation based on the words that come before it. It does not have the ability to understand the context or meaning of a conversation in the same way that a human does. 
  • Respond to out-of-scope prompts: ChatGPT is trained on a specific dataset of human-human conversations, and it may not be able to generate a coherent response to prompts that are outside the scope of this training data. 
  • Generate novel ideas or thoughts: ChatGPT is a machine learning model that is trained to predict the next word in a conversation based on patterns it has learned from the training data. It does not have the ability to generate new ideas or thoughts that are completely unrelated to its training data. 
  • Understand and follow instructions: ChatGPT is not able to understand and follow instructions in the same way that a human can. It is not able to carry out tasks or perform actions based on a prompt. 

Overall, ChatGPT is a powerful tool for generating human-like responses in a chatbot, but it is important to recognize its limitations and the fact that it is not capable of understanding or interacting with the world in the same way that a human can. 

Notwithstanding these limitations, there are a few ways that ChatGPT (or similar language models) could potentially be used by cybercriminals for nefarious purposes: 

  • Phishing scams: ChatGPT could be used to generate convincing and realistic-sounding responses in phishing scams, potentially tricking victims into providing sensitive information or performing actions (e.g., clicking on a malicious link). 
  • Social engineering attacks: ChatGPT could be used to impersonate a trusted individual or organization in social engineering attacks, potentially convincing victims to disclose sensitive information or perform actions that they wouldn’t normally do. 
  • Spamming: ChatGPT could be used to generate large amounts of spam messages or comments, potentially clogging up communication channels or spreading misinformation. 

It’s important to note that ChatGPT (or any other language model) could not be used to directly carry out these types of attacks; it could only be used to generate convincing-sounding responses that could potentially be used in these attacks.  

For example, social engineering attacks are sometimes carried out by cybercriminals who are not fluent in the victim’s language, resulting in easy-to-detect mistakes that might alert the victim. 

ChatGPT (or a similar language model) could potentially be used by a non-native speaker to carry out a social engineering attack by generating realistic and convincing responses in a language that the attacker is not fluent in. 

For example, let’s say that an attacker wants to impersonate a trusted individual or organization and convince a victim to disclose sensitive information or perform an action. The attacker could use ChatGPT to generate responses in the victim’s native language, making it more likely that the victim will trust the attacker and be more willing to comply with their requests. 

As a test, one might ask ChatGPT to “Write message in Italian pretending to be Marco Bartelli, who has been robbed in Paris and needs a cash advance from his friends.” 

This is the resulting text: 

Ciao amici, 

Spero che questo messaggio vi trovi bene. Purtroppo sono stato derubato durante il mio viaggio a Parigi e ora mi trovo in una situazione finanziaria difficile. Sto cercando di fare tutto il possibile per risolvere la situazione, ma ho bisogno di un piccolo anticipo di denaro per poter tornare a casa. 

So che vi chiedo molto, ma vi sarei davvero grato se poteste darmi una mano in questo momento difficile. Vi prometto che vi rimborserò appena possibile. 

Grazie mille per il vostro sostegno. 

Un saluto, 

Marco Bartelli 

(In English: “Hi friends, I hope this message finds you well. Unfortunately I was robbed during my trip to Paris and I now find myself in a difficult financial situation. I am doing everything I can to resolve it, but I need a small cash advance to be able to get back home. I know I am asking a lot, but I would be truly grateful if you could give me a hand in this difficult moment. I promise I will pay you back as soon as possible. Thanks a lot for your support. Best, Marco Bartelli”) 

The message above is in perfect Italian (my native language) and could be used as the basis for a common social-media scam. 

Overcoming language barriers is not the only way cybercriminals could misuse these models: in some cases, it is also possible to mimic a specific writing style. 

For example, one might collect a corpus of messages and ask ChatGPT to write text in a style similar to that of the messages’ author, or to mimic the style of a well-known Internet persona. 

This is what ChatGPT produced when I asked it to write a message about a critical vulnerability in OpenSSL in the style of Brian Krebs, using references to some of his previous articles to highlight the danger posed by this vulnerability: 

Attention all cybersecurity professionals: 

I have received word of a critical vulnerability in OpenSSL, a widely-used cryptographic library that is included in many operating systems and applications. This vulnerability, which has been given the identifier CVE-2022-4979, could allow attackers to perform man-in-the-middle attacks and potentially decrypt sensitive communications. 

As I highlighted in my article ‘The Importance of Two-Factor Authentication,’ the use of strong, multi-factor authentication is essential in today’s cybersecurity landscape. This OpenSSL vulnerability only underscores the importance of taking extra precautions to protect your sensitive information. 

Furthermore, as I explored in my article ‘The Dark Web: A Haven for Cybercriminals,’ the dark web is a breeding ground for cybercriminals and a source of many of the threats that we face online. It’s possible that this OpenSSL vulnerability may have been discovered and exploited by hackers operating on the dark web. 

It is imperative that all organizations and individuals using OpenSSL take immediate action to patch their systems and protect themselves from this vulnerability. The OpenSSL Project has released a patch that addresses the issue, and it is recommended that users update to the latest version as soon as possible. 

This is a serious and potentially devastating vulnerability, and it is critical that everyone takes steps to protect themselves. Stay vigilant and stay safe. 

-Brian Krebs 

Of course, Brian Krebs writes better than that, but this approach could be used as the basis for various types of targeted attacks. 

Finally, ChatGPT could be used to generate malicious code. There are already documented examples of ChatGPT performing code translation (e.g., translating a malware sample written in C into Go for cross-platform deployment) and even code generation (e.g., producing code for encrypting and decrypting files – https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/), in a way similar to GitHub’s Copilot. 
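
To give a sense of how low the bar is, here is a benign Python sketch of the kind of file encryption/decryption routine described in the Check Point article, built on the well-known cryptography package. The file names are purely illustrative:

from pathlib import Path
from cryptography.fernet import Fernet   # pip install cryptography

# Benign sketch of a file encrypt/decrypt routine of the sort
# discussed in the Check Point article. Fernet is an authenticated
# symmetric cipher; the file names below are illustrative.

def encrypt_file(path: Path, key: bytes) -> Path:
    out = path.with_name(path.name + ".enc")
    out.write_bytes(Fernet(key).encrypt(path.read_bytes()))
    return out

def decrypt_file(path: Path, key: bytes) -> Path:
    out = path.with_name(path.name.removesuffix(".enc"))
    out.write_bytes(Fernet(key).decrypt(path.read_bytes()))
    return out

if __name__ == "__main__":
    key = Fernet.generate_key()          # whoever holds the key controls the data
    encrypted = encrypt_file(Path("report.txt"), key)
    decrypt_file(encrypted, key)

Nothing here is sophisticated, and that is exactly the point: the model can hand this kind of boilerplate to attackers who could not write it themselves.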

In conclusion, while ChatGPT is a powerful tool for generating human-like responses in chatbots, it is important to recognize that it can be abused for nefarious purposes, such as phishing scams, social engineering attacks, and spamming. Individuals and organizations should be aware of these risks and take appropriate precautions. 

As a final note, as the astute reader might already have guessed, everything that is written in italic in this document has been generated by ChatGPT.