Anthropic’s Claude pricing shift triggered a short suspension of OpenClaw’s creator—highlighting friction between agent tooling and model billing

This article was generated by AI and cites original sources.

Anthropic temporarily suspended the account of OpenClaw creator Peter Steinberger from accessing Claude, then reinstated it within a few hours after a post about the suspension went viral on X, according to a TechCrunch report published April 10, 2026. The suspension followed a pricing policy change announced the previous week: Claude subscriptions would no longer cover “third-party harnesses including OpenClaw,” pushing OpenClaw users to pay separately via Claude’s API based on consumption.

The episode points to a practical technical issue that is increasingly central to AI platforms: how vendors meter and enforce usage for agent-style software that can run continuous reasoning loops, retry tasks, and integrate with other tools—patterns that differ from one-off prompts. It also illustrates how quickly agent developers can run into account-level enforcement when billing and “usage patterns” don’t align.

Pricing policy change: from subscription coverage to API metering

TechCrunch’s account of the timeline ties the suspension to Anthropic’s late-stage shift in how it treats third-party “harnesses including OpenClaw.” As described in the report, Anthropic said subscriptions to Claude would no longer cover those harnesses, and that OpenClaw users would instead need to pay for the usage separately, based on consumption, through Claude’s API.

In effect, the report frames this as a new billing path for agent orchestration software. OpenClaw, which Steinberger created, is positioned as a way to run Claude through a third-party wrapper or harness. Anthropic also offers its own agent product, Cowork, and the report notes that Steinberger characterized the charges OpenClaw users would now pay separately as a “claw tax.”

Anthropic’s justification, according to the source, was that subscriptions “weren’t built to handle the ‘usage patterns’ of claws.” The report specifies why those patterns are different: claws can be more compute-intensive than prompts or simple scripts because they may run continuous reasoning loops, automatically repeat or retry tasks, and tie into a lot of other third-party tools.

For developers, the key technical takeaway is that agent runtime behavior can look materially different to the model vendor’s billing and safety systems than standard request/response usage. If a system is designed around metering prompt-like workloads, an agent that repeatedly calls into a model while coordinating external actions can stress those assumptions.
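To make the contrast concrete, here is a minimal, purely illustrative sketch of why an agent harness generates more metered traffic than a one-off prompt. The `call_model` function, the task names, and the failure pattern are all hypothetical stand-ins, not any vendor's real API:

```python
# Hypothetical sketch: an agent loop that retries tasks produces many
# more metered model calls than a single prompt. `call_model` is a
# stand-in for any model API call; no real SDK is used here.

def call_model(prompt, counter):
    counter["calls"] += 1
    # Pretend the model fails to produce a usable answer on some calls.
    return "ok" if counter["calls"] % 5 not in (2, 4) else None

def one_off_prompt(counter):
    # A classic request/response interaction: exactly one metered call.
    return call_model("summarize this file", counter)

def agent_loop(tasks, max_retries, counter):
    # An agent-style loop: each task may consume several calls before
    # it succeeds, so billed usage scales with retries, not requests.
    results = []
    for task in tasks:
        for _attempt in range(max_retries):
            out = call_model(task, counter)
            if out is not None:
                results.append(out)
                break
    return results

meter = {"calls": 0}
one_off_prompt(meter)                            # 1 metered call
agent_loop(["plan", "edit", "test"], 3, meter)   # several calls per task
print(meter["calls"])                            # prints 6
```

The point is only the shape of the traffic: a billing system that assumes one call per user action will undercount what a looping, retrying harness actually consumes.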

The suspension: “suspicious” activity after Steinberger followed the new rule

On X, Steinberger posted early Friday morning that it would be “harder in the future to ensure OpenClaw still works with Anthropic models.” In the same post, he shared a photo of a message from Anthropic indicating his account had been suspended over “suspicious” activity, as quoted in the TechCrunch report.

TechCrunch reports that the ban was short-lived: a few hours later, after the post went viral, Steinberger said his account had been reinstated. The report also notes that, amid broader speculation in the replies, an Anthropic engineer responded directly, telling Steinberger that Anthropic has never banned anyone for using OpenClaw and offering to help.

However, the source emphasizes uncertainty: “It’s not clear if that was the key that restored the account,” and TechCrunch says it had asked Anthropic about the situation.

Technically, the mismatch matters. Steinberger said he was following the new pricing rule and using the API, yet he was still banned. The source reports that Anthropic attributed the policy change to subscription limitations and usage patterns, but the suspension suggests that enforcement may also depend on additional signals beyond billing category—such as what Anthropic’s systems interpret as “suspicious” behavior during agent-like activity.

Agent features and “usage patterns”: why claws can look different than prompts

Anthropic’s stated reason for the pricing change was grounded in how agent harnesses behave. TechCrunch reports that claws can:

Run continuous reasoning loops, rather than issuing a single prompt.

Automatically repeat or retry tasks, increasing model call volume relative to a simple script.

Tie into many other third-party tools, expanding the surface area of system interactions around the model.

These characteristics are not just product descriptions; they translate to different runtime profiles—more frequent calls, longer-running sessions, and potentially more complex patterns of requests. The report frames Anthropic’s argument as: subscriptions were not designed to handle these patterns, so consumption-based API billing is required.
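The billing arithmetic behind that argument can be sketched directly. The per-token rates below are placeholders chosen for illustration only (the source does not state Anthropic's pricing), and the token counts are invented; the sketch just shows how consumption-based metering makes a long agent session cost roughly N times a single prompt:

```python
# Hypothetical consumption-based metering. Rates and token counts are
# placeholders, not real pricing: the point is that cost scales with
# total tokens across all calls, not with the number of user requests.

INPUT_RATE = 3.00 / 1_000_000    # $ per input token (hypothetical)
OUTPUT_RATE = 15.00 / 1_000_000  # $ per output token (hypothetical)

def call_cost(input_tokens, output_tokens):
    return input_tokens * INPUT_RATE + output_tokens * OUTPUT_RATE

# One prompt vs. an agent session of 40 calls whose context grows as
# the loop accumulates history and tool output.
single_prompt = call_cost(2_000, 800)
agent_session = sum(call_cost(2_000 + 500 * i, 800) for i in range(40))

print(round(single_prompt, 4))   # 0.018
print(round(agent_session, 2))   # 1.89
```

Growing context is the multiplier that flat subscriptions struggle with: each loop iteration re-sends an ever-larger prompt, so metered cost grows faster than the call count alone suggests.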

Steinberger, though, expressed skepticism. TechCrunch reports that he posted, “Funny how timings match up, first they copy some popular features into their closed harness, then they lock out open source.” The report does not claim the underlying accusation is correct; it reports Steinberger’s framing and indicates he “may have been referring to features added to Claude’s Cowork agent.”

Specifically, the source mentions that Claude Dispatch rolled out a couple of weeks before Anthropic changed OpenClaw’s pricing policy. Dispatch is described in the report as allowing users to remotely control agents and assign tasks. If Steinberger’s comment connects those events, it suggests an ecosystem shift: closed agent tooling may be receiving more attention while open harnesses are being moved to consumption-based pathways.

Even if Anthropic’s technical billing rationale is accepted, observers may watch for how enforcement systems classify agent traffic. The short suspension after Steinberger said he used the API implies that “usage patterns” and “suspicious activity” signals may not map cleanly onto developer expectations.

OpenClaw testing, competing AI ecosystems, and what the episode signals

Beyond billing and enforcement, the report includes an operational detail: when multiple people asked why Steinberger uses Claude rather than his employer’s models, he explained that he uses Claude only for testing. TechCrunch says he stated he wants to ensure OpenClaw updates won’t break things for Claude users.

Steinberger’s explanation also separates roles. The report quotes him saying: “You need to separate two things.” He described his work at the OpenClaw Foundation, where he wants OpenClaw to work with “any model provider,” and his job at OpenAI, where he helps with future product strategy.

The report also notes that some commenters connected the situation to Steinberger’s move to OpenAI. One person implied that he chose the wrong employer, prompting Steinberger to reply that “One welcomed me, one sent legal threats.” The source does not provide further detail on those claims, but it frames the exchange as part of the broader controversy around OpenClaw’s access to Anthropic’s models.

From a technology-industry perspective, the incident underscores the growing complexity of integrating agent frameworks with model vendors’ commercial and security systems. If subscriptions exclude third-party harnesses, developers must route through APIs and accept consumption-based metering. If enforcement then triggers “suspicious” flags, agent developers may need clearer documentation of what traffic patterns are acceptable.
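One general mitigation agent developers use when routing through metered APIs (a common industry practice, not something the source attributes to Anthropic or OpenClaw) is client-side backoff, so that retry bursts look less like abusive traffic to a vendor's rate and abuse systems. A minimal sketch, with hypothetical names throughout:

```python
# General-practice sketch: exponential backoff with jitter around any
# model call. `RuntimeError` stands in for a rate-limit error; a real
# client would catch the vendor SDK's specific exception instead.
import random
import time

def call_with_backoff(fn, max_attempts=5, base_delay=0.5):
    """Call fn(), retrying on RuntimeError with growing, jittered delays."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; surface the error
            # Delays grow 0.5s, 1s, 2s, ... plus random jitter so that
            # many clients do not retry in synchronized bursts.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))
```

Spacing retries out does not change what the traffic is, but it keeps an agent's call pattern closer to what a vendor's enforcement systems are tuned to expect.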

At minimum, the episode suggests that agent tooling—especially harnesses that run loops, retries, and tool integrations—can fall into edge cases when vendors adjust pricing and enforcement. Industry watchers may look for follow-up from Anthropic on how it distinguishes between legitimate harness usage and activity it considers suspicious, and whether account-level enforcement will be better aligned with the newly stated billing model.

Source: TechCrunch