FTC Settlement Targets Ad Agency Coordination on Brand Safety Filters

This article was generated by AI and cites original sources.

The Federal Trade Commission (FTC) and eight states have announced a proposed settlement with major ad agencies that would block them from coordinating with each other to steer ad buying away from certain platforms based on political viewpoints. The case centers on the technology and workflow of brand safety—the rules and third-party signals used to decide where ads can appear—and on whether shared brand-safety standards can function as anticompetitive steering in digital advertising.

The FTC’s Antitrust Claims

According to the FTC complaint summarized by The Verge, the agencies violated antitrust rules by agreeing to a common set of brand safety rules that would disfavor services deemed to contain content such as misinformation. The proposed settlement would, if approved by a federal judge, require changes to how ad buyers collaborate, impose reporting obligations, and require independent monitoring for several years. The Verge also connects the dispute to earlier litigation involving X and brand-safety coordination efforts.

Brand Safety as an Ad-Tech Control Layer

In modern ad tech, “brand safety” operates as a control layer between advertisers, agencies, and publishers. The FTC’s theory focuses on agencies agreeing to a shared set of rules that would disfavor sites and services based on content categories—specifically including content “like misinformation.” The complaint also points to industry coordination mechanisms such as groups used to align brand-safety decisions.

The FTC’s complaint mentions the World Federation of Advertisers’ now-defunct Global Alliance for Responsible Media (GARM) as an example of an organization established to coordinate collective brand safety efforts. Brand-safety coordination often translates into repeatable decision logic: publishers and content are assessed, then ad buying is constrained accordingly. When multiple agencies use the same rule set, it can standardize the ad distribution outcome across the market.
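The repeatable decision logic described above can be sketched in a few lines. This is a purely illustrative example, not any agency's actual system; the category names, publisher labels, and rule set are all hypothetical:

```python
# Hypothetical sketch of a shared brand-safety rule set applied to ad inventory.
# All categories, publisher names, and labels are invented for illustration.

BLOCKED_CATEGORIES = {"misinformation", "hate_speech"}  # the shared rule set

def eligible(publisher_labels: set[str]) -> bool:
    """A publisher is eligible only if none of its content labels are blocked."""
    return publisher_labels.isdisjoint(BLOCKED_CATEGORIES)

inventory = {
    "news-site-a": {"news", "politics"},
    "news-site-b": {"news", "misinformation"},  # label from a third-party classifier
}

# When multiple agencies apply the same rule set, each reaches the same outcome,
# standardizing which inventory is eligible across the market.
eligible_publishers = [name for name, labels in inventory.items() if eligible(labels)]
```

The point of the sketch is the market-wide effect: once the rule set is shared, the filtering decision stops varying by buyer.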

The settlement’s proposed restrictions target agreements that would constrain ad buying based on publisher content tied to “news and political or social commentary content.” If approved, the order would prevent WPP, Publicis, and Dentsu from agreeing with other ad agencies to limit ad buying on publishers with respect to that content category. In practical terms, it targets the coordination layer that determines which inventory is eligible for ads.

Proposed Compliance Requirements

If a federal judge approves the order, ad agencies WPP, Publicis, and Dentsu would be barred from making agreements with other ad agencies that limit ad purchases on any publisher “with respect to its news and political or social commentary content.” The agencies would also need to file annual compliance reports for five years after the order is finalized and appoint a compliance monitor who would serve for the same five-year period.

This combination of restrictions and oversight represents a direct intervention in how advertising systems manage eligibility rules. The compliance reporting and long-duration monitoring indicate the FTC seeks verifiable process controls over how agencies coordinate, particularly where coordination could influence ad placement decisions.

Dentsu spokesperson Jeremy Miller stated: “Dentsu remains fully committed to operating transparently, with integrity, and in strict compliance with all applicable laws.” A WPP spokesperson declined to provide an attributed statement to The Verge.

Third-Party Content Classifiers and Disputes

The complaint, as described by The Verge, also addresses the role of organizations that classify content. The FTC alleges that groups such as NewsGuard, the Global Disinformation Index (GDI), Check My Ads, and Media Matters for America classified “disfavored opinions as ‘misinformation’” and lobbied ad buyers to “demonetize sites that hosted or shared such content.”

From a technology standpoint, brand safety often relies on third-party signals—ratings, categorizations, and metadata—that ad platforms and buyers use to filter inventory. The dispute is tied to how those signals map to ad eligibility, particularly when classification decisions overlap with political or social viewpoints.
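The mapping from third-party signals to eligibility can also be sketched. Again, this is an illustrative example under invented assumptions; the rating scale, threshold, and publisher names are hypothetical and do not represent any real classifier's output:

```python
# Hypothetical sketch: mapping third-party ratings and category metadata
# to an ad-eligibility decision. Scale, threshold, and names are invented.

from dataclasses import dataclass

@dataclass
class Signal:
    publisher: str
    trust_score: float       # e.g. a 0-100 rating from an external classifier
    categories: list[str]    # content categories assigned by the classifier

MIN_TRUST = 60.0  # buyer-configured threshold

def placement_allowed(signal: Signal, blocked: set[str]) -> bool:
    # Eligibility combines a numeric rating with categorical exclusions,
    # so a classification decision flows directly into the buying decision.
    return signal.trust_score >= MIN_TRUST and not set(signal.categories) & blocked

sig = Signal("example-outlet", trust_score=72.5, categories=["politics"])
```

The dispute described in the article turns on exactly this junction: who assigns the categories and scores, and how mechanically buyers act on them.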

The Verge reports that Media Matters was sued by X for reporting that it had found ads for major brands alongside pro-Nazi content. This reflects the broader ecosystem where brand-safety claims, measurement, and enforcement can intersect—particularly when different parties interpret the same ad placement outcomes differently.

NewsGuard co-CEO Gordon Crovitz said: “This is supposed to be a collusion case, but we don’t work with the ad agencies signing the consent order and our ratings are based on non-partisan journalistic standards, not political orientation.” A GDI spokesperson pointed The Verge to a Financial Times opinion article from January by GDI co-founder and CEO Clare Melford, in which Melford stated: “A fully informed transaction between buyers and sellers is a key tenet of the free market. Far from coercion or censorship, GDI is simply one member of a community that seeks to make the internet a safer place for businesses and citizens alike.”

Regulatory Context and Precedent

FTC Chair Andrew Ferguson stated: “This unlawful collusion not only damaged our marketplace, but also distorted the marketplace of ideas by discriminating against speech and ideas that fell below the unlawfully agreed-upon floor. The proposed order remedies the dangers inherent to collusive practices and restores competition to the digital news ecosystem.”

While the settlement is framed as an antitrust remedy, its implications for ad tech are likely to be process-oriented. The proposed five-year compliance reporting and monitoring regime suggests that regulators expect ongoing scrutiny of how agencies manage shared rules and how those rules affect ad eligibility for publishers in news and political or social commentary categories.

Last year, the FTC conditioned its approval of the merger between Omnicom and IPG on a similar ban on steering ad buys based on “political or ideological viewpoints.” That earlier condition, combined with the current proposed settlement, suggests a pattern in enforcement priorities: ad-tech steering decisions tied to political or ideological content categories may be treated as especially sensitive under competition law.

Connection to Prior Litigation

The settlement is connected to prior litigation involving X. GARM was named among the defendants in a 2024 lawsuit in which X alleged an “illegal boycott” in violation of antitrust law. The organization disbanded shortly after the suit was filed. A judge dismissed the lawsuit with prejudice, stating that “the very nature of the alleged conspiracy does not state an antitrust claim.” The existence of both the dismissed lawsuit and the new proposed FTC settlement suggests regulators and courts may be evaluating similar coordination claims through different legal lenses.

Source: The Verge