OpenAI has addressed multiple severe vulnerabilities in the popular chatbot ChatGPT that could have allowed attackers to take over user accounts and view chat histories.
One of the issues was a “Web Cache Deception” vulnerability, reported by bug bounty hunter and Shockwave founder Gal Nagli, that could lead to an account takeover.
The expert discovered the vulnerability while analyzing the requests that handle ChatGPT’s authentication flow. The following GET request caught his attention:
https://chat.openai[.]com/api/auth/session
“Basically, whenever we login to our ChatGPT instance, the application will fetch our account context, as in our Email, Name, Image and accessToken from the server, it looks like the attached image below,” Nagli wrote on Twitter, detailing the bug.
The expert explained that to exploit the flaw, a threat actor can craft a dedicated .css path to the session endpoint (/api/auth/session) and send the link to the victim. Upon visiting the link, the response is cached and the attacker can harvest the victim’s JWT credentials and take full control of their account.
Attack Flow:
1. Attacker crafts a dedicated .css path of the /api/auth/session endpoint.
2. Attacker distributes the link (either directly to a victim or publicly).
3. Victims visit the legitimate link.
4. Response is cached.
5. Attacker harvests the victim’s JWT credentials. Access granted.
— Nagli (@naglinagli) March 24, 2023
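The attack flow above hinges on a mismatch between how the origin routes requests and how the cache decides what to store. A minimal sketch of that primitive (the endpoint name comes from the report; the routing and caching logic here are assumptions for illustration):

```python
# Sketch of the Web Cache Deception primitive. The endpoint path is from
# Nagli's report; the origin and cache behavior below are illustrative
# assumptions, not OpenAI's actual implementation.

def origin_route(path: str) -> str:
    # Origins often route on a path prefix and ignore trailing segments,
    # so /api/auth/session/x.css can still return the session JSON.
    if path.startswith("/api/auth/session"):
        return '{"email": "victim@example.com", "accessToken": "eyJ..."}'
    return "404"

def cache_should_store(path: str) -> bool:
    # A CDN that caches purely by file extension treats the request as a
    # static asset and stores the response for anyone to fetch.
    return path.endswith((".css", ".js", ".png"))

crafted = "/api/auth/session/attacker.css"
response = origin_route(crafted)      # sensitive JSON, not CSS
assert cache_should_store(crafted)    # ...yet the cache stores it
```

Once the victim's browser has populated the cache entry, the attacker simply requests the same crafted URL and reads the cached session data.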
Nagli praised the OpenAI security team, which quickly addressed the issue by instructing the caching server, via a regex, not to cache the endpoint.
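The actual rule OpenAI deployed was not published; the fix described amounts to a cache-exclusion pattern layered on top of the extension-based logic, along these hypothetical lines:

```python
import re

# Hypothetical regex exclusion sketching the kind of fix described in the
# post (the real rule OpenAI deployed was not disclosed).
NO_CACHE = re.compile(r"^/api/auth/session")

def cache_should_store(path: str) -> bool:
    if NO_CACHE.match(path):
        return False  # session endpoint is never cached, .css suffix or not
    return path.endswith((".css", ".js", ".png"))

assert not cache_should_store("/api/auth/session/attacker.css")
```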
Vulnerability Disclosure Process from @OpenAI:
1. Email sent at 19:54 to disclosure@openai.com
2. First response 20:02
3. First fix attempt 20:40
4. Production fix 21:31
— Nagli (@naglinagli) March 24, 2023
The bad news is that the mitigation implemented by the company only partially addressed the issue. The researcher Ayoub Fathi discovered that it is possible to bypass authentication by targeting another ChatGPT API. An attacker can exploit this bypass technique to access a user’s conversation titles.
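The concrete bypasses were not disclosed, but a classic weakness of regex-based exclusions is that they deny-list one path while other endpoints returning per-user data stay cacheable. Continuing the hypothetical rule from the report's description (the alternate endpoint name below is an assumption, not the one Fathi targeted):

```python
import re

# Same hypothetical exclusion as before: only the session endpoint is
# deny-listed from caching.
NO_CACHE = re.compile(r"^/api/auth/session")

def cache_should_store(path: str) -> bool:
    if NO_CACHE.match(path):
        return False
    return path.endswith((".css", ".js", ".png"))

# The session endpoint is excluded...
assert not cache_should_store("/api/auth/session/x.css")
# ...but a different API that also returns per-user data (hypothetical
# path) is still cacheable via the same .css trick.
assert cache_should_store("/backend-api/conversations/x.css")
# Case tricks can also slip past a case-sensitive rule if the origin
# routes paths case-insensitively.
assert cache_should_store("/API/auth/session/x.css")
```

This is why deny-list fixes for cache deception tend to need repeated patching, whereas allow-listing what may be cached closes the whole class.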
Update:
Couple of hours after my tweet I was made aware by a fellow researcher @_ayoubfathi_ and others that there were a number of bypasses to the regex based fix implemented by @OpenAI (which didn't surprise me).
I notified the team ASAP once again and […]
— Nagli (@naglinagli) March 25, 2023
How could I have Hacked into any #ChatGPT account, including saved conversations, account status, chat history and more!
A tale of 4 ChatGPT vulnerabilities 👇
Source: securityaffairs.com/144184/hacking/chatgpt-account-takeover-bugs.html