Anthropic Revokes OpenAI’s Access to Claude Models Amid Competitive Tensions

Published On: Aug 03, 2025

SAN FRANCISCO, CA - Aug 03, 2025 - In a significant escalation of competition within the artificial intelligence (AI) sector, Anthropic has revoked OpenAI’s access to its Claude family of AI models, citing a violation of its commercial terms of service. The decision, reported by Wired on August 2, 2025, comes as OpenAI prepares to launch its next-generation model, GPT-5, highlighting the growing tensions between two of the industry’s leading players.

Anthropic, founded in 2021 by former OpenAI researchers, including Dario Amodei and Daniela Amodei, has rapidly emerged as a key competitor in the AI landscape. Its Claude models, particularly Claude Code, are renowned for their advanced capabilities in natural language processing and coding assistance, positioning them as a preferred tool for developers worldwide. OpenAI, the creator of ChatGPT and earlier GPT models, remains a dominant force, with its upcoming GPT-5 expected to feature enhanced coding and reasoning capabilities.

Both companies have historically collaborated to some extent, with mutual API access allowing for benchmarking and safety evaluations. However, Anthropic’s recent move signals a shift toward protecting its proprietary technology in a highly competitive market.

Details of the Access Revocation

Anthropic cut off OpenAI’s access to the Claude API on Tuesday, July 29, 2025, after discovering that OpenAI’s technical staff were using Claude’s coding tools in ways that violated Anthropic’s terms of service. According to Wired, OpenAI integrated Claude into internal tools via special developer APIs, enabling systematic comparisons of Claude’s performance against its own models in areas such as coding, creative writing, and safety-related prompts, including sensitive topics like child sexual abuse material (CSAM), self-harm, and defamation.

Anthropic’s commercial terms explicitly prohibit customers from using its services to “build a competing product or service,” train rival AI models, or reverse-engineer its technology. Anthropic spokesperson Christopher Nulty stated, “Claude Code has become the go-to choice for coders everywhere, and so it was no surprise to learn OpenAI’s own technical staff were also using our coding tools ahead of the launch of GPT-5. Unfortunately, this is a direct violation of our terms of service.”

In response, OpenAI’s Chief Communications Officer, Hannah Wong, defended the company’s actions, saying, “It’s industry standard to evaluate other AI systems to benchmark progress and improve safety. While we respect Anthropic’s decision to cut off our API access, it’s disappointing considering our API remains available to them.” Despite the revocation, Anthropic has agreed to maintain OpenAI’s access for benchmarking and safety evaluations, a standard practice in the industry, though it remains unclear how this will be implemented.

Broader Industry Context

This incident is not an isolated event but part of a broader trend in the tech industry, where companies are increasingly protective of their intellectual property. Anthropic’s decision mirrors actions by other tech giants, such as Facebook’s restriction of Vine’s API access and Salesforce’s recent limitations on competitors’ use of Slack data. Earlier in 2025, Anthropic restricted access to its models for Windsurf, an AI coding startup rumored to be an OpenAI acquisition target, which was later acquired by Cognition after its leadership joined Google. Anthropic’s Chief Science Officer, Jared Kaplan, remarked at the time, “I think it would be odd for us to be selling Claude to OpenAI,” underscoring the company’s cautious stance toward competitors.

The timing of the revocation is notable, coinciding with OpenAI’s preparations for GPT-5, expected to launch in August 2025 with advanced coding and reasoning features. Anthropic’s move may hinder OpenAI’s ability to refine its models using Claude’s outputs, potentially giving Anthropic a competitive edge. Additionally, on July 28, 2025, Anthropic announced new weekly rate limits for Claude Code due to “explosive usage” and terms violations, citing seven major or partial outages in the prior month caused by heavy use from a small group of users. This suggests Anthropic is proactively managing access to its services to maintain performance and protect its technology.

Industry Implications

The revocation highlights the delicate balance between collaboration and competition in the AI industry. Benchmarking competitors’ models is a common practice to assess progress and ensure safety, as OpenAI noted. However, Anthropic’s strict enforcement of its terms reflects the growing value of proprietary AI models as companies vie for dominance in a market projected to reach $1 trillion by 2030. This incident may prompt other AI firms to tighten their API access policies, potentially reducing the open collaboration that has historically driven innovation in the field.

Analysts see parallels with past tech industry disputes, such as Facebook’s API restrictions and Salesforce’s data access controls, indicating a shift toward more proprietary development. As Wccftech noted, Anthropic’s action could prevent OpenAI from refining more capable large language models (LLMs) using Claude’s outputs, affecting the competitive landscape.

The incident also raises ethical questions about how AI companies manage access to their technologies. The use of Claude for safety-related prompts involving sensitive content underscores the importance of responsible AI development. As the industry evolves, clearer guidelines and ethical standards may be needed to navigate the complex dynamics of competition and collaboration.

Looking Ahead

Anthropic’s decision marks a pivotal moment for the AI industry, potentially setting a precedent for how companies manage access to their models. As OpenAI prepares to launch GPT-5, the loss of Claude access could impact its development timeline or capabilities, though the extent remains unclear. The move may also influence broader industry practices, encouraging stricter access controls and prompting discussions about the need for standardized API agreements.

For now, Anthropic’s limited allowance for benchmarking and safety evaluations suggests a willingness to maintain some level of industry cooperation. However, as competition intensifies, the AI sector may see a shift toward more closed ecosystems, where proprietary technologies are closely guarded. This could slow the pace of collaborative innovation but may also drive companies to invest more heavily in their own advancements.

Source: Wired