A courtroom fight between Elon Musk and OpenAI has produced an unusual piece of evidence: notes about Musk that OpenAI president Greg Brockman reportedly wrote to an AI chatbot. Those private-sounding reflections are now being examined in a legal setting, underscoring a simple but often-missed point made in the Guardian's reporting: AI chatbots are not therapists, and they are not confidential diaries.
The episode is a concrete reminder that conversations with tools from OpenAI, Google and other companies can be stored, reviewed, and potentially disclosed — including in court.
What happened in the Musk–OpenAI dispute
The Guardian reported that, as part of ongoing legal proceedings involving Elon Musk and OpenAI, OpenAI president Greg Brockman has had to confront text he once shared with an AI chatbot. Those messages reportedly included his thoughts about Musk, who was an early backer of OpenAI before their relationship deteriorated.
In court, these chatbot exchanges have been treated as evidence, much like emails or internal messages might be. The Guardian’s account frames this as a kind of “oh dear diary” moment: words that may have felt like private reflections at the time are now being read and scrutinized in a formal legal process.
The reporting does not suggest that the chatbot itself malfunctioned or did anything unusual. Instead, the incident highlights how human expectations — that a chatbot might function like a confidential journal or even a therapist — can collide with how these systems and their data are actually handled.
Why chatbots feel private — but aren’t
According to the Guardian’s coverage, the core risk is psychological as much as technical. People often interact with chatbots in an intimate, confessional way. The interface resembles a one‑to‑one conversation, and the system responds in fluent, sometimes empathetic language. That can make it feel similar to talking to a counselor or writing in a locked diary.
But the underlying reality is different:
- Conversations are data. Messages sent to tools from OpenAI, Google and others can be logged and stored on company servers. They may be used to improve models, troubleshoot problems, or comply with legal requests, depending on the provider’s policies.
- Humans may review snippets. The Guardian notes that companies can allow human reviewers — often contractors — to examine small slices of user conversations to evaluate system quality or investigate misuse.
- Data can surface in disputes. As the Musk–OpenAI case illustrates, records of what was said to a chatbot can be pulled into legal or regulatory processes, just as other digital communications can.
The Guardian’s reporting emphasizes that none of this is hidden in principle; companies publish privacy policies and usage terms. But users often do not read those documents closely, and the natural language style of chatbots can encourage people to overshare.
Not a therapist, and not bound by therapist rules
The Guardian’s article stresses that AI chatbots are not mental health professionals, even when they are used to talk about emotions, stress or personal crises.
In most jurisdictions, licensed therapists are bound by strict confidentiality rules and professional ethics. They are trained to handle sensitive disclosures, and there are clear legal frameworks around what can and cannot be shared, and under what circumstances.
By contrast:
- Chatbots are software services, not licensed clinicians. They do not hold professional licenses, cannot form a therapeutic relationship in the legal sense, and are not subject to the same confidentiality obligations.
- Provider policies, not clinical ethics, set the rules. What happens to your data is governed by the company’s terms of service and privacy policy, which can change over time.
- Crisis responses are limited. Many general‑purpose chatbots include warnings that they are not suitable for emergency help and may direct users to hotlines or local services instead.
The Guardian’s reporting frames the Brockman example as a cautionary tale: treating a chatbot like a therapist can create misplaced expectations of privacy and protection.
What OpenAI and Google users should keep in mind
The Guardian’s account focuses on OpenAI, but it notes that similar concerns apply to other major providers, including Google. While specific policies differ by company and product, the broad patterns are similar enough that the same caution applies across providers.
Based on the Guardian’s reporting, several practical points emerge:
- Assume anything typed into a chatbot could be seen again. If you would not be comfortable seeing a message quoted in a workplace dispute or legal filing, think twice before sending it to a general‑purpose AI assistant.
- Separate experimentation from sensitive use. Using a chatbot to draft emails or summarize articles is different from using it as a place to process trauma, relationship conflicts or workplace grievances.
- Check the product’s data settings. Some services offer options to limit how conversations are stored or reused, but those settings are not always obvious, and the Guardian’s reporting indicates that many users do not adjust them.
The Guardian article does not claim that OpenAI or Google have secretly violated their own policies in this case. Instead, it highlights a gap between what the policies allow and what users assume when they pour out their thoughts to an apparently sympathetic machine.
Why this matters for public services and policy
The Guardian’s coverage suggests that incidents like the Brockman messages could influence how public institutions think about AI tools.
If senior figures at major technology companies can see their chatbot conversations surface in court, public‑sector officials, clinicians, and educators may be especially cautious about using such tools for sensitive discussions. The Guardian points out that expectations of privacy around AI assistants are still forming, and episodes like this one could shape future rules.
For policymakers, the case raises concrete questions:
- Should there be clearer, standardized disclosures about how chatbot conversations are stored and when they can be accessed?
- How should public agencies treat AI chat logs that might contain personal or confidential information?
- Do existing privacy and data‑protection laws adequately cover these new forms of communication?
The Guardian’s reporting does not offer firm answers, and independent corroboration of broader implications is still limited. But the Musk–OpenAI dispute has already supplied one vivid example of how AI conversations can leave a durable, discoverable trail.
The takeaway for everyday users
The courtroom spotlight on Greg Brockman’s chatbot musings about Elon Musk has turned an abstract warning into a concrete scenario. As reported by the Guardian, words that may have felt like private reflections when they were typed into an AI assistant are now part of a legal record.
For people using tools from OpenAI, Google and others, the practical lesson is straightforward: treat chatbots more like email than like a locked diary or a therapist’s office. They can be useful, responsive and even comforting, but they are ultimately software services governed by company policies and legal obligations.
As legal cases and regulatory debates continue, those policies may evolve. For now, the safest assumption is that what you tell your AI chatbot might not stay between you and the machine.