It’s easy to see why there has always been some skepticism and uncertainty about the emergence of AI technology. But when we are faced with an advanced technology capable of doing its own thinking, it is worth taking a step back before diving right in.
The AI technology we possess today makes our lives easier in many ways, but it keeps improving, and that improvement may carry dire consequences for the future of cybersecurity – hence the emergence of ChatGPT malware.
In 2022, the American company OpenAI released its AI chatbot, ChatGPT, which took the world by storm within weeks. In minutes, the program could mimic human conversation and draft plausible pieces of literature, music, and code.
A few basic prompts were enough to make ChatGPT a worldwide sensation – but does such a sophisticated program carry more risks than benefits?
Here are some of the ways attackers exploit its capabilities, along with some precautions to take when using ChatGPT:
- Finding Vulnerabilities
With ChatGPT, programmers can debug code in a new way: a simple request to debug the code, followed by the code in question, yields a surprisingly accurate account of the bugs or problems in the provided source. Unfortunately, attackers can use the same capability to identify security vulnerabilities.
When Brendan Dolan-Gavitt gave the chatbot the source code of a capture-the-flag challenge and asked it to find a vulnerability, several follow-up questions were enough for the bot to identify the buffer overflow vulnerability with incredible accuracy. Besides giving the solution, ChatGPT also explained its reasoning for educational purposes.
*Image: ChatGPT exploits a buffer overflow*