Illinois SB 3444 Puts AI Liability Frameworks in the Spotlight—Anthropic and OpenAI Take Opposing Stances

This article was generated by AI and cites original sources.

Illinois lawmakers are considering SB 3444, a proposed AI liability bill backed by OpenAI that would shield AI labs from liability in certain large-scale harm scenarios. Anthropic has publicly opposed the bill, arguing that transparency should be paired with accountability rather than liability protections. The dispute, reported by WIRED, is drawing regulatory battle lines between two leading US AI labs, even as policy experts suggest the bill has only a remote chance of becoming law.

What SB 3444 would change in AI accountability

At the center of the controversy is a core question: who should be legally responsible when an AI-enabled system contributes to catastrophic outcomes? According to WIRED, SB 3444 covers scenarios in which an AI system is used to cause large-scale harm, such as mass casualties or more than $1 billion in property damage.

As WIRED describes the bill's structure, an AI lab would be shielded from liability in those scenarios if it had drafted and published a safety framework on its website. Under this approach, a lab would not be responsible when a "bad actor" uses its AI model to cause harm, provided the lab completed these documentation steps.

The competing regulatory approaches: liability versus documentation

The disagreement between OpenAI and Anthropic centers on the mechanism used to enforce AI safety. WIRED frames the dispute as a clash over who should be liable when frontier AI systems are involved in widespread societal harm.

OpenAI supports SB 3444, arguing that it reduces the risk of serious harm from frontier AI systems while still enabling deployment. According to WIRED, OpenAI says the bill would allow this technology to reach the hands of people and businesses in Illinois. OpenAI spokesperson Liz Bourgeois, as quoted by WIRED, states: “In the absence of federal action, we will continue to work with states—including Illinois—to work toward a consistent safety framework. We hope these state laws will inform a national framework that will help ensure the US continues to lead.”

Anthropic’s position differs. WIRED reports that Anthropic argues companies developing frontier AI models should be held at least partially responsible if their technology is used for widespread societal harm. Anthropic confirmed its opposition to SB 3444 and said it has held conversations with the bill’s sponsor, state senator Bill Cunningham, about using the bill as a starting point for future AI legislation.

Anthropic’s statement, quoted in WIRED, reads: “We are opposed to this bill. Good transparency legislation needs to ensure public safety and accountability for the companies developing this powerful technology, not provide a get-out-of-jail-free card against all liability.” Anthropic’s Cesar Fernandez adds that the company expects to work with Cunningham on changes that would “pair transparency with real accountability for mitigating the most serious harms frontier AI systems could cause.”

Lobbying, legislative odds, and what comes next

Beyond the policy disagreement, WIRED reports that the bill is becoming a focal point for regulatory strategy between OpenAI and Anthropic. According to people familiar with the matter, Anthropic has been lobbying state senator Bill Cunningham and other Illinois lawmakers to either make major changes to SB 3444 or oppose it as currently written.

OpenAI’s backing of the bill reflects the growing competition over AI regulation. WIRED notes that policy experts say SB 3444 has only a remote chance of becoming law, but the dispute is exposing political divisions between the two labs that could become increasingly significant as the companies expand their lobbying activity across the country.

Governor JB Pritzker’s office also weighed in. A spokesperson, as quoted by WIRED, said: “While the Governor’s Office will monitor and review the many AI bills moving through the General Assembly, governor Pritzker does not believe big tech companies should ever be given a full shield that evades responsibilities they should have to protect the public interest.”

Source: WIRED