Unveiled: ChatGPT conversations found in public Google search results
In a recent development, it has been discovered that a large number of sensitive ChatGPT conversations were made searchable on Google due to a design flaw in OpenAI's chat-sharing feature.
The issue arose when OpenAI introduced an experimental toggle labeled "Make this chat discoverable" in July 2025. If users ticked this option when sharing a chat, the entire conversation was published on a public webpage, which search engines like Google and Bing subsequently indexed, exposing the full text to anyone who searched for it.
The indexed dataset included a wide range of sensitive material, such as confidential business contracts, personal relationship advice, mental health discussions, and other private details, including names and personal identifiers when users had typed them into the chat. Initially, around 4,500 conversations were found, but further research revealed that nearly 100,000 shared chats had been indexed.
To rectify this issue, OpenAI took several steps. Firstly, it removed the "Make this chat discoverable" option to prevent further public sharing and indexing. Secondly, it worked to remove indexed pages and clear cached versions from search engines, minimizing ongoing exposure of conversations that had already been shared. Lastly, users who had shared chats were advised to check Google search results to see whether their conversations were publicly accessible and to act accordingly.
This incident was not the result of a security breach but rather an unintended privacy failure stemming from a poorly communicated, opt-in sharing option that left personal AI conversations exposed as open-source intelligence (OSINT).
The chats became visible because the share feature generated links in a predictable format. Anyone could therefore surface conversations by entering 'site:chatgpt.com/share' into a search engine together with keywords of interest.
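For illustration, these were ordinary scoped search queries rather than anything requiring special tooling. The short Python sketch below simply assembles such query strings; the keyword examples and helper names are hypothetical, and the code does not contact any search engine.

```python
# Illustrative sketch only: shows how a scoped search query could be assembled
# to look for indexed ChatGPT share pages. Keywords are hypothetical examples.
from urllib.parse import quote_plus

SHARE_SCOPE = "site:chatgpt.com/share"  # restricts results to shared-chat URLs


def build_query(keywords: str) -> str:
    """Combine the site: scope with free-text keywords into one search query."""
    return f"{SHARE_SCOPE} {keywords}"


def as_search_url(query: str) -> str:
    """URL-encode the query as a standard Google search link (illustration only)."""
    return f"https://www.google.com/search?q={quote_plus(query)}"


if __name__ == "__main__":
    for kw in ["non-disclosure agreement", "confidential contract"]:
        q = build_query(kw)
        print(q, "->", as_search_url(q))
```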
Some of the sensitive chats that were exposed included discussions about non-disclosure agreements, confidential contracts, relationship problems, insider trading schemes, and cheating on academic papers. One chat, for instance, detailed cyberattacks targeting named individuals within Hamas, the terrorist group controlling Gaza. Another involved a domestic violence victim discussing escape plans while revealing details of their financial constraints.
In response to the issue, Dane Stuckey, OpenAI's chief information security officer, stated that the company had removed the feature that made conversations discoverable by search engines. OpenAI also acknowledged that the previous setup of ChatGPT allowed more than 100,000 conversations to be freely searched on Google.
The change to remove indexed content from search engines is rolling out to all users by tomorrow morning. However, researcher Henk Van Ess and others have already archived many of the conversations that were exposed, so de-indexing will not erase those copies.
The share feature was an attempt by OpenAI to make it easier for people to share their chats, but the unintended consequences appear to have outweighed the benefits. As a result, OpenAI is taking steps to ensure the privacy and security of its users' conversations moving forward.
- Despite being intended as an easy way to share chats, the experimental 'Make this chat discoverable' feature introduced by OpenAI in July 2025 resulted in a large number of sensitive conversations becoming publicly searchable once search engines like Google and Bing indexed them, including details about confidential business contracts, personal relationship advice, and mental health discussions.
- By removing the 'Make this chat discoverable' feature, OpenAI is signaling a commitment to the privacy and security of users' AI conversations and acknowledging that products such as ChatGPT must be built with user confidentiality in mind.