Meta Denies Using Downloaded Porn for AI Training, Cites Personal Use

This article was generated by AI and cites original sources.

Meta, formerly known as Facebook, is currently involved in a legal dispute over allegations that it used illegally downloaded pornographic content to train its artificial intelligence models. The tech company has firmly denied these claims, asserting that the downloaded material was for ‘personal use’ rather than AI training purposes.

The lawsuit, brought by Strike 3 Holdings, accused Meta of torrenting adult films to develop an adult version of its AI model. Meta rejected these accusations, arguing that the downloads occurred several years before the company’s official AI research efforts began, making it unlikely that the content was used for training.

Additionally, Meta pointed out that its terms of service explicitly prohibit the creation of adult content, further undermining the claim that the downloaded material was used for AI development. The company also noted that the flagged content made up only a small fraction of the total downloads, and it maintained that the materials were obtained for personal, private use rather than professional purposes.

Meta dismissed the allegations as baseless and criticized Strike 3 Holdings for what it described as ‘extortive lawsuits.’ The dispute raises questions about the ethical sourcing of training data and the challenges tech companies face in complying with copyright law, underscoring the need for stringent protocols and oversight to prevent unauthorized use of content in AI training.

Source: WIRED