November 17, 2025
OpenAI wants to transform business. Many of its users just want life hacks

Welcome to AI Decoded, Fast Company’s weekly newsletter that breaks down the most important news in the world of AI. I’m Mark Sullivan, a senior writer at Fast Company, covering emerging tech, AI, and tech policy.

This week, I’m focusing on how OpenAI’s recent study of user behavior makes the company look more like a consumer tech business. I also look at how recent decisions in lawsuits against Anthropic and Meta could shape precedent for future AI training cases.

Sign up to receive this newsletter every week via email here. And if you have comments on this issue and/or ideas for future ones, drop me a line at [email protected], and follow me on X (formerly Twitter) @thesullivan.


New data makes OpenAI look more like a consumer tech company

During its early years, OpenAI looked like it might build a business selling access to its increasingly powerful AI models to Fortune 500 companies. But when ChatGPT launched (almost by surprise) in late 2022, the startup suddenly had a breakout consumer product—one that raced to 100 million users in just a few months, faster than any app in history. Overnight, OpenAI became a consumer tech brand and, most importantly, the poster child for generative AI in the minds of everyday users.

Today, ChatGPT has more than 700 million weekly active users worldwide, according to OpenAI. And the way those people use the chatbot suggests the company may be drifting further toward the consumer market. This week, OpenAI released a study of 1.5 million user chat logs between May 2024 and June 2025, revealing that nearly three-quarters (73%) of chats were personal rather than work-related. Just a year earlier, in June 2024, personal and work prompts had been roughly equal. (That data excludes OpenAI’s API customers, who are largely developers and enterprises.)

The report comes at a time when, across industries, many enterprises are growing skeptical about how—and when—AI tools might deliver the efficiencies they were promised, the kind executives can tout on earnings calls. Despite the hype, by most objective accounts, the AI transformation hasn’t yet materialized. An August MIT report, for example, found that 95% of enterprise AI pilot projects have stalled. Meanwhile, talk of an AI bubble continues, with critics raising an eyebrow at bullish startup valuations and tech stock prices.

Click here for more on OpenAI’s projected sustainability.

Are the legal tides turning in AI’s favor when it comes to data copyright?

The biggest potential roadblock to the AI boom so far has been the wave of lawsuits over AI training data. The major labs have routinely scraped vast amounts of online content to train their models, operating under the assumption that the practice falls under the “fair use” clause of the Copyright Act. That assumption is now being tested in lawsuits from publishers and creators, many still moving through the courts. Some key cases, however, have already been decided, and on the core question of whether scraping copyrighted data for training counts as fair use, the momentum appears to favor the AI companies.

The most consequential decision to date came this summer in Bartz v. Anthropic, a case Anthropic now plans to settle. Judge William Alsup ruled that Anthropic’s use of digitized books as training data qualifies as “fair use” under the Copyright Act. Crucially, he determined that Anthropic’s use was “transformative”—the models weren’t simply regurgitating the books’ content and format, but instead using the text to learn how to predict the next most likely word in a sequence. That’s the basic mechanism by which LLMs generate language.

Judge Vince Chhabria reached a similar conclusion in Kadrey v. Meta (a class action in which Sarah Silverman and two other authors sued for copyright infringement), finding that Meta’s use of the books was transformative—the fair use clause’s primary test. But Chhabria also cautioned that transformative use alone may not always be sufficient to secure fair use protection. The effect on a work’s market value could also factor in. His ruling suggested some reluctance to set a broad precedent for future AI training cases.

Click here for more on how legal cases are shaping industry behavior.


More AI coverage from Fast Company: 

Want exclusive reporting and trend analysis on technology, business innovation, future of work, and design? Sign up for Fast Company Premium.
