Meta is facing a coordinated challenge over a proposed face-recognition capability for its Ray-Ban and Oakley smart glasses. According to WIRED, more than 70 civil liberties, domestic violence, reproductive rights, LGBTQ+, labor, and immigrant advocacy organizations are urging Meta to abandon the feature—reportedly known inside the company as “Name Tag”—arguing it would enable silent identification of strangers in public and increase risks for people who are targeted for stalking, harassment, or abuse. The organizations also say internal documents suggest Meta considered using a “dynamic political environment” to reduce scrutiny during rollout.
A feature built into consumer eyewear
The core technology at issue is an AI-assisted face recognition workflow integrated into Meta’s smart glasses. WIRED reports that “Name Tag” would work through the artificial intelligence assistant built into the Ray-Ban and Oakley devices, allowing wearers to pull up information about people in their field of view. The feature was revealed in February by The New York Times, and the advocacy coalition’s letter to CEO Mark Zuckerberg argues that the risks of face recognition in consumer eyewear “cannot be resolved through product design changes, opt-out mechanisms, or incremental safeguards.”
WIRED also reports that internal engineering discussions considered two versions of the capability. One version would identify only people the wearer is already connected to on a Meta platform. A broader version, the report says, would recognize anyone with a public account on a Meta service such as Instagram. The distinction matters technically because it changes the scope of matching and the potential for identification beyond a known network—though WIRED’s account centers on the broader privacy and consent concerns raised by the coalition.
In the coalition’s view, bystanders in public would have “no meaningful way to consent to being identified.” That consent premise is central to how the groups frame the technology’s safety and legitimacy, even if opt-out mechanisms exist for the wearer’s own usage. The complaint is not just about whether the wearer can choose to use the feature, but whether the people being scanned can meaningfully control whether they are recognized.
Why the coalition says the risks can’t be designed away
WIRED says the coalition wants Meta to scrap the feature entirely and sent its request in a letter to Zuckerberg on Monday. The groups argue that the problem is structural: face recognition built into inconspicuous consumer eyewear, in their framing, cannot be mitigated through interface tweaks or incremental safeguards, and bystanders in public have no actionable path to consent.
The organizations further ask Meta to provide specific disclosures and commitments. WIRED reports the letter urges Meta to disclose any known instances of its wearables being used in stalking, harassment, or domestic violence cases. It also calls for disclosure of past or ongoing discussions with federal law enforcement agencies, including Immigration and Customs Enforcement and Customs and Border Protection, about use of Meta wearables or data from them.
On process, the letter requests that Meta commit to consulting civil society and independent privacy experts before integrating biometric identification into any consumer device. The coalition’s argument implies that the decision involves more than model accuracy or matching logic; it involves governance and accountability for biometric systems once they are deployed in everyday contexts.
WIRED quotes the groups warning that people should be able to move through daily life without fear that stalkers, scammers, abusers, federal agents, and activists across the political spectrum are “silently and invisibly verifying their identities” and potentially linking names to information about “habits, hobbies, relationships, health, and behaviors.” The technology claim here is about the pairing of recognition with information retrieval—the glasses would not only identify, but also “pull up information” via the AI assistant.
“Dynamic political environment” and the rollout strategy question
Beyond safety concerns, the coalition’s pressure campaign includes scrutiny of rollout strategy. WIRED reports that internal documents surfaced showing Meta hoped to use the current “dynamic political environment” as cover for the rollout. The documents reportedly show Meta betting that civil society groups would have their resources “focused on other concerns.”
A key reference is the May 2025 memo from Meta’s Reality Labs that The New York Times obtained. WIRED says the memo stated Meta would launch “during a dynamic political environment where many civil society groups that we would expect to attack us would have their resources focused on other concerns.”
For the industry, this is a reminder that biometric features in consumer devices are as much about timing, oversight, and stakeholder response as about the underlying AI pipeline. If the memo accurately reflects internal planning, it shows a company treating the likely distraction of its critics as a factor in launch strategy for a privacy-sensitive technology, a precedent other firms deploying identification features will likely be watched against.
Regulatory attention and the privacy baseline for existing glasses
WIRED adds that the Electronic Privacy Information Center (EPIC) sent letters to the Federal Trade Commission (FTC) and state enforcers in February urging them to investigate and block Name Tag’s rollout. EPIC’s warning, as summarized by WIRED, is that real-time face recognition would compound the privacy risks of the existing Ray-Ban Meta glasses.
Specifically, WIRED says EPIC describes the existing glasses as able to covertly record bystanders with no warning beyond a small light that it says is easily hidden. EPIC’s argument, per WIRED, is that real-time face recognition would “destroy[] the concept of privacy or anonymity in public spaces.” The report cites EPIC’s concern that people could be identified at protests, places of worship, support groups, and medical clinics.
WIRED also notes that Meta did not immediately respond to its request for comment, and that EssilorLuxottica, which owns Ray-Ban and Oakley and manufactures the smart glasses with Meta, also did not immediately respond. With neither company commenting, key technical questions remain open: whether the feature would be limited by connectivity constraints, how recognition accuracy would be handled in varied environments, and what governance controls would be effective once a biometric system is integrated into consumer eyewear.
What this could mean for wearable AI
As framed by the coalition and EPIC, the central technology question is how quickly identity-matching capabilities can move from controlled settings into everyday consumer hardware. If “Name Tag” functions as described—using an AI assistant to recognize people in view and retrieve information—then the wearable would effectively turn public sightlines into a searchable identity surface. That suggests a potential shift in how privacy expectations are operationalized for bystanders, not just users.
More broadly, the dispute highlights how biometric identification in wearable form factors raises questions that design controls alone may not address, at least according to the groups involved. The coalition’s insistence that the issue “cannot be resolved” via product design changes, opt-out mechanisms, or incremental safeguards suggests they believe the consent and power imbalance inherent in ambient scanning would remain even if Meta adjusted user settings.
In the near term, the immediate impact is political and legal pressure: a large coalition is asking for feature cancellation, disclosures about law enforcement discussions and past misuse, and consultation before biometric integration. Technically, the controversy may also influence how companies document system behavior, audit real-world risks, and define what “inconspicuous” means for biometric features in consumer devices—especially when the devices are capable of both recording and identification.
Source: WIRED