How ChatGPT, an AI Chatbot, Can Be Used to Compromise Cybersecurity

Published
Apr 3, 2023

By Michael Francis and Salman Shaikh 

Artificial intelligence (AI) is becoming increasingly prevalent in today’s IT landscape and will continue to gain traction in the coming years. In November 2022, OpenAI released ChatGPT (Chat Generative Pre-trained Transformer), an AI chatbot that can provide instant answers to both basic and complex questions, and it has revolutionized the way tasks are completed in various industries. Despite its benefits, it is also used by bad actors as a method for creating and delivering viruses, concealing them from security tools and making them harder to detect.

How It’s Commonly Used

The use cases for the software are impressive and continue to evolve as more people interact with it, since it learns from prior queries and conversations. The chatbot’s capabilities include parsing text, translating languages and even generating code in various programming languages. ChatGPT has been used to respond to emails, solve math equations and much more. The application has exploded in popularity given its ability to handle a wide variety of instructions, and the basic version of the app is free to use, which adds to its mass appeal. However, despite all these benefits, users need to be aware of the cybersecurity risks surrounding it.

How Hackers Use It to Avoid Security Measures

First and foremost, AI has no moral or ethical code to follow; it is not sentient and cannot distinguish between right and wrong. Therefore, it can be used to accomplish both good and nefarious tasks. Although OpenAI put safeguards in place on their bot to mitigate misuse, according to media outlets such as Digital Trends and Ars Technica, hackers have already discovered ways to circumvent those safeguards and use the tool for criminal mischief.

Hackers have already created a “dark” version of ChatGPT built for the express purpose of generating malicious code and delivering viruses. This version of the app can also be used to improve bad actors’ social engineering techniques by letting them expertly imitate the writing styles of specific people and businesses when creating phishing emails, and it will even provide tips on how to make those emails more appealing and convincing. This version of the bot carries a very low monthly fee ($6.00), and the subscription includes 100 queries.

How You Can Protect Yourself

Although AI continues to grow in popularity and brings new threats with it every day, we are not defenseless. There are simple steps people can take to protect themselves and their devices. By following the steps below, you can better defend yourself against AI-based cyberattacks and reduce the risk of becoming a victim of identity theft, data breaches or other types of cybercrime.

  • Always use strong and unique passwords on your accounts;
  • Use multi-factor authentication for all your accounts;
  • Keep your software up-to-date;
  • Be wary of suspicious emails and messages;
  • Use a VPN (virtual private network) when connecting to corporate resources;
  • Back up your data frequently; and
  • Be cautious of using public Wi-Fi.
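On the first recommendation above, a password manager is the practical way to keep passwords strong and unique, but as a simple illustration, the Python standard library’s `secrets` module can generate a cryptographically strong random password (the function name and 16-character default below are illustrative choices, not a prescribed standard):

```python
import secrets
import string

def generate_password(length: int = 16) -> str:
    """Generate a random password drawn from letters, digits and punctuation.

    Uses the secrets module, which is designed for security-sensitive
    randomness, rather than the predictable random module.
    """
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(generate_password())
```

Each call produces a different password, so reusing one password across accounts becomes unnecessary.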

In the short time since ChatGPT’s introduction, AI applications have multiplied. AI isn’t going away anytime soon, and the benefits of these tools will be rivalled by the harm done by bad actors who misuse them. Therefore, it is important to be aware of these sophisticated techniques and their growing use among hackers. Practicing the preventative measures mentioned above is an excellent place to start in protecting yourself and your computers.

Contact EisnerAmper

If you have any questions, we'd like to hear from you.
