AI Data Privacy on Trial: The Precedent of the OpenAI vs. New York Times Legal Battle

A legal demand for 20 million private chats raises new questions about AI data privacy, challenging the trust required for sensitive applications like healthcare.

Peter Bencsik on November 16, 2025

Key Developments

OpenAI is actively opposing a legal demand from The New York Times (NYT) requesting 20 million private ChatGPT conversations. The NYT claims this access is necessary to find evidence of users attempting to bypass its paywall. OpenAI has described the demand as an “overreach” that disregards long-standing privacy protections.

This is not the first attempt; the NYT previously sought 1.4 billion conversations, which OpenAI successfully opposed. The current dispute involves a random sample of consumer chats from December 2022 to November 2024. OpenAI has confirmed that chats from Enterprise, Edu, Business, and API customers are not impacted by this specific demand.

Why This Matters

Building trust in AI systems starts with protecting the data behind them, and this legal proceeding exposes an unsettling vulnerability. Access by state actors and the use of chats for internal training under terms of service are known risks; the prospect of millions of private conversations being handed to a news outlet and its consultants through legal discovery is a newer one. The case highlights a threat to data privacy that extends beyond typical security breaches.

The Broader Context

The implications are significant given the expanding application of AI. Artificial intelligence tools are increasingly used in sensitive domains, including healthcare, for tasks like detecting mental health conditions, predicting treatment responses, and monitoring prognosis. The security of the data underpinning these tools is fundamental. Users must trust that their data will not be exposed, whether by tech giants, state actors, or through litigation.

Looking Ahead

In response, OpenAI states it is accelerating its security roadmap, including developing client-side encryption to make chats inaccessible even to OpenAI itself. The company is also de-identifying the data sample in question. As this legal battle continues, it forces a critical question: If private chats can be drawn into legal discovery, will users continue to trust AI with their most sensitive information?
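OpenAI has not published implementation details, but the idea behind client-side encryption can be illustrated with a minimal sketch: the chat text is encrypted on the user's device with a key that never leaves it, so the service only ever stores ciphertext it cannot read. The Python example below uses the third-party cryptography package purely for illustration; the library choice, key handling, and sample data are assumptions, not OpenAI's actual design.

```python
# Minimal sketch of client-side encryption (illustrative only; not OpenAI's design).
# The symmetric key is generated and kept on the user's device, so the service
# that stores the conversation only ever holds ciphertext it cannot decrypt.

from cryptography.fernet import Fernet

# 1. Key is created locally and, by assumption, never uploaded anywhere.
local_key = Fernet.generate_key()
cipher = Fernet(local_key)

# 2. The chat message is encrypted before it leaves the device.
plaintext = b"Notes from a private conversation about a health concern..."
ciphertext = cipher.encrypt(plaintext)

# 3. Only the ciphertext would be transmitted and stored server-side;
#    discovery of the stored data would yield unreadable bytes.
print(ciphertext[:32], b"...")

# 4. Decryption is possible only on a device that holds local_key.
assert cipher.decrypt(ciphertext) == plaintext
```

Under this kind of design, even a court order compelling the provider to hand over stored conversations would surface only encrypted blobs, which is presumably why OpenAI frames client-side encryption as a response to discovery demands like this one.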

https://openai.com/index/fighting-nyt-user-privacy-invasion/

Photo by Noelle Otto: https://www.pexels.com/photo/photography-of-person-peeking-906018/