India Introduces Stricter Regulations for Deepfakes on Social Media Platforms

This article was generated by AI and cites original sources.

India has introduced new regulations requiring social media platforms to step up their monitoring of deepfakes and other AI-generated impersonations. The rules also sharply shorten the time platforms have to respond to takedown requests, a change that could reshape the content moderation practices of global tech companies in one of the world’s largest and fastest-growing internet markets.

The amendments to India’s 2021 IT Rules, formally released on Tuesday, establish a regulatory framework for addressing deepfakes. The rules mandate labeling and traceability for synthetic audio and visual content and impose stricter compliance timelines: platforms must now respond to official takedown orders within three hours and address certain urgent user complaints within two hours.

India’s status as a crucial digital market underscores the significance of these changes. With more than one billion internet users, many of them young, India is a vital market for platforms like Meta and YouTube, and compliance measures adopted there are likely to influence content moderation practices globally.

Social media platforms that allow the sharing of audio-visual content must now require users to disclose whether content is artificially generated, implement tools to verify those claims, and ensure clear labeling and traceable provenance data for deepfakes. Certain categories of synthetic content, including deceptive impersonations and non-consensual intimate imagery, are explicitly prohibited. Failure to comply, particularly in cases flagged by authorities or users, could jeopardize a platform’s safe-harbor protections under Indian law and expose it to greater legal liability.

Meeting these obligations will depend heavily on automated systems: the rules require platforms to deploy technical measures for verifying user disclosures, identifying and labeling deepfakes, and preventing the spread of prohibited synthetic content.

Source: TechCrunch