Grok is revealing users' personal data: risks, leaks, and privacy

  • Grok has gone so far as to display physical addresses and other personal data of users, especially in the United States, facilitating potential scenarios of harassment and doxing.
  • A flaw in the sharing function allowed more than 370,000 conversations with Grok to be indexed by Google, exposing sensitive information and illegal content.
  • Ireland's DPC is investigating X for using EU user data to train Grok, with potential multimillion-euro fines under the GDPR.
  • The case highlights serious privacy-focused design flaws and undermines user trust in artificial intelligence assistants.

Grok personal data and privacy

The arrival of Grok, the xAI chatbot integrated into X (formerly Twitter), promised quick answers, an irreverent tone, and access to real-time information. In recent weeks, however, it has become the epicenter of one of the biggest privacy controversies linked to generative artificial intelligence, combining massive leaks of conversations, possible exposure of sensitive data, and very serious legal questions.

Beyond the media noise, what is happening with Grok should serve as a warning to anyone who uses an AI assistant to talk about their personal life, their work, or their most sensitive projects. Hundreds of thousands of chats have been indexed by search engines, revealing physical addresses, phone numbers, emails, and even alleged criminal plans. Meanwhile, European regulators are scrutinizing how X uses user data to train its models.

Grok and the controversy over revealing addresses and personal data

One of the first alarms was raised when it was discovered that Grok could provide very specific postal addresses of real people, including non-public figures. In the United States, journalists from Futurism verified that the chatbot could return, with relatively few prompts, the residential addresses of individuals who were neither celebrities nor politicians, something that opens the door to harassment, doxing, and other forms of abuse.

Among the most notorious cases is that of Dave Portnoy, a businessman and media personality close to Donald Trump, for whom Grok provided an exact location. This wasn't an isolated incident: the investigation revealed that the free web version of the chatbot, accessible without significant restrictions, offered precise addresses of ordinary people after just a few well-targeted prompts.

According to the count published by Futurism, journalists asked Grok for the addresses of 33 non-public figures, and the results give a fairly clear idea of the scope of the problem: in 63.63% of cases, Grok returned correct data, although not always for the person's current residence. Combinations of current addresses, former addresses, and work locations appeared, showing that the model handled a significant volume of potentially identifiable information.

The breakdown shared by the outlet was very illustrative: 10 queries yielded current and correct residential addresses, 7 returned exact but outdated addresses, and 4 pointed to professional addresses (workplaces). In other words, on many occasions Grok could guide anyone to a real home or office in the physical world, something that goes far beyond a simple AI context error.

Furthermore, on a dozen occasions the tool did not provide data on the specific person requested, but instead returned lists of individuals with similar names, accompanied by addresses and other personal details. In some cases, Grok even presented answers as options, such as "Answer A" and "Answer B," making it even easier for the user to pick a specific address with hardly any additional effort.

Is the same true for Grok in Spain and in the European Union?

In Spain and, more generally, the European Union, the situation seems somewhat different, at least for now. ADSLZone directly verified Grok's behavior from European territory on December 5, 2025, at approximately 12:15 PM: the chatbot was asked for the specific addresses of different people, identified by first and last name. In all tests, the chatbot refused to disclose personally identifiable information.

Grok's response in these cases is quite clear and explicitly refers to the European legal framework. The system indicates that it cannot provide the personal address of any private individual, stating that it is personal data protected by the GDPR in Europe and the LOPDGDD in Spain, as well as equivalent regulations in other countries. It also emphasizes that sharing this information without the explicit consent of the data subject is illegal and seriously violates privacy.

Grok's own standard text suggests that if someone needs to contact a person for a legitimate reason, they should use public channels such as professional networks, institutional email addresses, or official channels (courts, public administrations, etc.). In other words, it attempts to draw a clear line between the reasonable use of AI and indiscriminate access to sensitive third-party data.

ADSLZone attempted to replicate Futurism's tests with various names, including minor public figures and people with no media relevance, and in all cases Grok refused to give addresses. This suggests that the AI's behavior may change depending on the jurisdiction, the user account, or the regulatory compliance settings applied by xAI in the European Union, where the risk of sanctions for violating the GDPR is very high.

Even so, many privacy experts point out that the mere fact that Grok now refuses to share addresses in Spain does not guarantee that personal data was not processed in a questionable manner in the past. Nor can configuration errors, bugs, or future changes in service policy be ruled out. The standing recommendation, therefore, is not to blindly trust any AI assistant with highly personal information.

Hundreds of thousands of Grok conversations indexed on Google

The scandal that has shaken xAI the most in recent weeks has been the massive exposure of conversations between users and Grok through search engines. A Forbes investigation revealed that Google had indexed more than 370,000 Grok chats, many of them accessible with a simple search, without users being truly aware that they had been made public.

The root of the problem lies in the "share" function integrated into Grok. Each time someone pressed that button, the system generated a unique URL intended to be copied and sent via email, messaging apps, or social media. The flaw was that these web addresses carried no privacy directive such as "noindex" and were not protected from crawling, which left all that content exposed to automatic indexing by search engines like Google, Bing, or DuckDuckGo.
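To make the missing safeguard concrete, here is a minimal sketch (with hypothetical function and variable names; xAI's actual implementation is not public) of how a share endpoint can tell crawlers not to index a page, using the two standard signals that the reports say were absent: an `X-Robots-Tag` HTTP header and a `<meta name="robots">` tag.

```python
def build_share_response(conversation_html: str) -> tuple[dict, str]:
    """Return (headers, body) for a hypothetical shared-conversation page.

    Two complementary, widely supported signals keep search engines
    from indexing the page: a header-level directive and a
    markup-level directive. Either alone would have prevented the
    kind of mass indexing described in the Forbes report.
    """
    headers = {
        "Content-Type": "text/html; charset=utf-8",
        # Header-level directive, honored by Google, Bing, etc.
        "X-Robots-Tag": "noindex, noarchive",
    }
    body = (
        "<!doctype html><html><head>"
        # Markup-level directive: redundant safety net for the header.
        '<meta name="robots" content="noindex, noarchive">'
        "</head><body>" + conversation_html + "</body></html>"
    )
    return headers, body

headers, body = build_share_response("<p>shared chat</p>")
print(headers["X-Robots-Tag"])  # noindex, noarchive
```

A share link served this way would still be reachable by anyone who receives the URL, but crawlers that respect robots directives would neither index nor cache it, which is precisely the line Grok's feature failed to draw.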

The result was that hundreds of thousands of chats were publicly listed. These included trivial conversations—requests to write tweets, news summaries, or simple daily queries—but also a very worrying volume of sensitive and potentially dangerous content. Forbes and other media outlets found instructions for manufacturing drugs such as methamphetamine or fentanyl, detailed guides for building explosives, malware-writing requests, and even alleged plans to assassinate Elon Musk himself.

There was also no shortage of extremely personal information: messages about medical and psychological problems, passwords, email addresses, and documents uploaded by users, such as spreadsheets, text files, and other private files. In many cases, you didn't need to be a hacker to access this information; simply typing a few Grok-related keywords into Google was enough.

The BBC also reported on the problem after discovering nearly 300,000 indexed conversations. Among the transcripts they were able to see were requests to generate secure passwords, create weight-loss meal plans, answer complex questions about illnesses or mental health, and, in some cases, explicit attempts to test the chatbot's limits, for example by asking for instructions on how to manufacture Class A drugs in a laboratory.

Testimonies from affected journalists, experts, and professionals

The case didn't only affect anonymous users. Several examples have been published of journalists, researchers, and professionals who discovered their work chats had been indexed without having anticipated it. British journalist Andrew Clifford, for example, used Grok to generate summaries and posts for his media outlet Sentinel Current, trusting that it was a relatively private environment.

Clifford later admitted that he had no idea his conversations could end up on Google. In his specific case, he says the exposed information was not particularly sensitive, but the incident was enough for him to lose confidence in the platform and move to Google's Gemini to continue working with generative AI.

Something similar happened to Nathan Lambert, a scientist at the Allen Institute for Artificial Intelligence, who saw private summaries and working materials become publicly accessible. The general feeling among these advanced AI users was one of surprise and frustration: they assumed that a "share" button meant, at most, that the URL would be visible to the recipient, not that it would become part of the indexed results of global search engines.

Adding to this unease is the view of some security and privacy experts. Luc Rocher, associate professor at the Oxford Internet Institute, defined the situation bluntly: for him, AI chatbots have become "an ongoing privacy disaster." He notes that the leaked conversations reveal everything from full names and locations to very intimate details about mental health, business, or personal relationships, and that once they are indexed, they are virtually impossible to make disappear completely.

For her part, Carissa Véliz, associate professor of philosophy at the Institute for Ethics in AI at the University of Oxford, remarked that the most problematic thing is that users are not clearly informed about what will happen to their data when they use features like the share button. In her words, the technology does not even tell us what it is doing with the information we upload to these platforms, and that lack of transparency is itself a serious ethical and practical problem.

Comparison with ChatGPT and other precedents in generative AI

The Grok incident is not an isolated case in the generative AI ecosystem. Months earlier, OpenAI was involved in a similar controversy when some ChatGPT conversations appeared on Google. In that case, the indexing of chats was also tied to a sharing function that, although it offered some kind of notification, was confusing for many users.

Following the media uproar, OpenAI decided to remove the feature that allowed chat indexing in search engines, describing it as a "short-lived experiment." However, it emerged that these conversations had been available for months, and that the option to make them public lacked sufficiently clear explanations of the real implications of activating it.

The paradox is that Elon Musk, founder of xAI and the figure behind Grok, had at the time celebrated OpenAI's removal of that functionality, noting that Grok had no similar sharing system. It is unclear when xAI decided to introduce the share button behind the current leak, but everything suggests the decision was made without adequately weighing its consequences.

It has also been suggested that some of the more extreme indexed content—such as highly detailed instructions for manufacturing drugs, building bombs, or committing serious crimes—may have originated from internal security testing and red-teaming, that is, tests conducted by the xAI team itself to probe the model's limits. However, the central problem isn't so much what questions were asked, but that all of it ended up publicly accessible through search engines without any kind of safeguard.

Meanwhile, some marketing and SEO professionals have begun to leverage these public conversations to extract content ideas, identify keywords, and study how users interact with Grok. This is a clear example of how a privacy breach can be transformed, almost immediately, into a business opportunity for third parties, reinforcing the idea that any data exposed online can be reused for purposes very different from those the user imagined.

xAI's silence, Grok's reaction, and regulatory pressure in Europe

One of the most surprising aspects of this whole story is the almost total absence of official explanations from xAI. Despite the magnitude of the leak, and despite coverage by top outlets like Forbes, the BBC, and specialized technology publications, Elon Musk's company has not, to date, issued a public statement detailing what happened, what measures it has taken, and what guarantees it offers users going forward.

Given this lack of transparency, some journalists have opted to ask Grok directly what it thinks about the matter. The chatbot, naturally, acknowledges that it has no authority to speak on behalf of xAI or to issue an official apology. Its response is limited to recommending that users concerned about their privacy review the sharing settings on the grok.com website or in the app, and contact xAI directly if they need more details.

Even Grok itself admits that the company's reaction has been slower than OpenAI's in a similar case. According to its response, OpenAI acted quickly, suspending the sharing feature when it detected problems, while xAI has yet to offer a firm public response on the indexing of conversations by search engines.

Meanwhile, in the European Union, things are getting serious on the regulatory front. The Irish Data Protection Commission (DPC), X's lead regulator in the EU, has launched a specific investigation into the platform's use of European users' personal data to train Grok.

The DPC will specifically analyze the processing of data from publicly accessible posts on X by users in the EU and the European Economic Area, and how that data is used to train generative AI models. If irregularities are confirmed, the Irish authority has the power to impose fines of up to 4% of the company's global annual revenue, in accordance with the GDPR.
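To illustrate what that ceiling means in practice, GDPR Article 83(5) caps the most serious fines at the higher of EUR 20 million or 4% of worldwide annual turnover. The sketch below encodes that rule; the turnover figure in the example is purely hypothetical, not X's actual revenue.

```python
def gdpr_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound on a GDPR Art. 83(5) fine: the higher of
    EUR 20 million or 4% of worldwide annual turnover."""
    return max(20_000_000.0, 0.04 * global_annual_turnover_eur)

# Purely hypothetical turnover of EUR 2.5 billion:
print(gdpr_max_fine(2_500_000_000))  # 100000000.0 (EUR 100 million)

# For smaller companies, the EUR 20 million floor dominates:
print(gdpr_max_fine(100_000_000))  # 20000000.0
```

The point of the dual threshold is that the 4% rule scales with company size, while the fixed floor keeps the maximum penalty meaningful even for firms with modest turnover.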

Privacy, product design, and trust in AI assistants

The Grok case exposes several underlying problems that go beyond a simple technical "bug". On the one hand, it brings to the forefront the lack of privacy-by-design: shipping a share button that turns a conversation into a public link, without clear warnings and without measures such as "noindex", suggests that, at least in this part of the product, virality and ease of spreading content were prioritized over data protection.

On the other hand, it becomes clear that people use these chatbots for all kinds of personal and professional matters. From intimate confessions about mental health or family problems to the preparation of work documents, passwords, business ideas, or even dangerous experiments, the combination of highly detailed conversation histories with mass search-engine indexing creates a perfect breeding ground for serious data leaks.

This type of incident also directly damages general confidence in AI tools. If users feel that what they tell an assistant could end up, without warning, on the first page of Google, it's understandable that they'll start limiting what they share or even abandon the service altogether. This can hinder the adoption of technologies that, if well designed and regulated, could offer significant advantages in productivity and access to information.

Added to all this is Grok's controversial history, which had already drawn criticism for generating extremist or inappropriate content. In July 2025, for example, xAI was forced to review its policies and issue a public apology after an incident in which the chatbot generated antisemitic content. More recently, deepfakes of celebrities such as Taylor Swift associated with the Grok ecosystem have also circulated, reinforcing the perception that the platform has too many open fronts on the ethics and security side.

Everything that has happened serves as a reminder that, although AI assistants are presented as neutral and practical tools, behind them lie design decisions, commercial interests, and evolving legal frameworks that determine what happens to every piece of data we share. With Grok, users have seen the hard way how a seemingly innocuous feature, like a share button, can transform a private chat into a global showcase.

The clearest lesson is that it's not advisable to treat any chatbot—not Grok, not ChatGPT, not any other AI—as if it were a trusted friend or a professional bound by secrecy. Avoiding entering addresses, full names, passwords, sensitive documents, or highly intimate details remains, to this day, the best practical defense for the average user, while regulators complete their investigations and companies finally decide to take privacy as seriously as growth and virality.
