
AI content regulatory rules proposed: Are they enough to combat deepfakes?

30-10-2025 12:00:00 AM

Challenges identified for implementation:

Compliance problems for small studios and individual creators due to labeling requirements.

Unclear distinction between AI-assisted and fully synthetic content.

Platforms must enforce compliance, with non-compliance possibly resulting in criminal penalties, including imprisonment for company directors.

In response to the growing menace of deepfakes and AI-generated misinformation, the Ministry of Electronics and Information Technology (MeitY) has introduced draft rules mandating clear labeling of all AI-generated content. This move, one of the first of its kind in the Global South, aims to curb the spread of synthetic media, including deepfake videos, voice clones, and manipulated content that have targeted celebrities, influenced elections, and fueled scams.

The proposed regulations require all AI-generated images, audio, and videos to carry visible or audible tags identifying them as synthetic. Images must display a label covering 10% of their surface area, while audio content, such as music or voice recordings, must include an audible announcement during the first 10% of its duration, indicating its artificial origin. The draft amendments focus on strengthening due diligence requirements for social media intermediaries and large online platforms, such as WhatsApp, Facebook, Instagram, and X, to manage the spread of AI-generated or manipulated content.
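To make the 10% thresholds concrete, here is a minimal sketch of the arithmetic a platform might apply. The helper names are illustrative, and reading "surface area" as pixel area is an assumption for clarity; neither comes from the draft rules themselves.

# Minimal sketch of the draft's 10% labeling thresholds (illustrative
# helper names; "surface area" is read here as pixel area for clarity).

def min_label_area_px(width_px: int, height_px: int) -> int:
    # Smallest label area, in pixels, covering 10% of the image.
    return int(0.10 * width_px * height_px)

def announcement_seconds(clip_duration_s: float) -> float:
    # Length of the audible disclosure: the first 10% of the clip.
    return 0.10 * clip_duration_s

# Example: a 1920x1080 image needs a 207,360-pixel label;
# a 60-second voice clip needs a 6-second announcement.
print(min_label_area_px(1920, 1080))  # 207360
print(announcement_seconds(60.0))     # 6.0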

MeitY defines “synthetically generated information” as any content artificially created, modified, or altered using computational methods to appear authentic or true. Social media platforms will also be responsible for ensuring users declare AI usage before posting content. The public and stakeholders have until November 6, 2025, to provide feedback on these draft rules before they are finalized. 

The urgency of these measures stems from high-profile incidents involving deepfakes of celebrities like Aamir Khan, Ranveer Singh, Rashmika Mandanna, and Virat Kohli, which went viral and caused significant public concern. Cases of AI voice cloning used in job scams and payment frauds have further highlighted the risks of unchecked synthetic content. Technical and cyber law experts expressed concern that the line between real and fake “was getting thinner every day”, while calling the rules a bold move by the Indian government to set a global benchmark for transparency.

However, implementing these rules poses challenges. Small studios and individual creators may struggle to comply with labeling requirements, and the distinction between AI-assisted and fully synthetic content remains unclear. Critics also question whether labels will be effective if public AI literacy remains low. Additionally, platforms face the daunting task of enforcing compliance, with non-compliance potentially leading to criminal penalties, including imprisonment for company directors.

A startup entrepreneur in the AI sector opined that labels alone won’t solve the problem if people don’t understand what they mean. The rules could also impact India’s burgeoning AI industry. Despite these concerns, legal experts emphasized that the regulations do not infringe on freedom of speech under Article 19 of the Indian Constitution, as they impose “reasonable restrictions” to protect the public from misinformation.

On the creative front, AI is transforming marketing by enabling brands to craft hyper-localized, personalized campaigns. A senior marketing official at an MNC AI firm highlighted how AI helps brands scale creativity while saving time. “AI moves us from mass communication to mass personalization,” he said, noting its ability to tailor festive campaigns to diverse audiences, from urban centers to rural communities and the Indian diaspora abroad. However, he stressed the need for a balance between automation and human insight to preserve the emotional core of storytelling.

India’s move aligns with global efforts to regulate AI and deepfakes. The EU’s AI Act (2024) mandates transparency for synthetic media, while the U.S. has issued executive orders for watermarking and disclosure standards. The UK and Singapore emphasize voluntary AI frameworks, and China enforces strict content labeling. These efforts reflect a worldwide push for transparency and accountability in AI use. While experts acknowledge the need for AI regulation, some have raised concerns about the draft rules. Will labels stop deepfakes, or merely add noise? Only time, and the feedback received by November 6, will tell.

Key provisions of the draft rules

Labeling AI-generated content: Platforms must clearly mark AI-generated or edited content with visible or audible labels. For videos and images, labels must cover at least 10% of the screen, while for audio clips, they must play during the first 10% of the content.

User declarations and verification: Before publishing, platforms must require users to declare whether their content is synthetically generated.

Accountability for non-compliance: Platforms that knowingly allow, promote, or fail to act against unlabeled synthetic content will be deemed non-compliant with due diligence obligations.
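As a rough illustration of how a platform’s due-diligence check might tie the user declaration to the labeling requirement, consider the sketch below. The data fields and function are hypothetical, since the draft rules do not prescribe any particular format or implementation.

# Hypothetical platform-side due-diligence check combining the user
# declaration and labeling provisions. Field names are invented for
# illustration; the draft rules do not prescribe a data format.

from dataclasses import dataclass

@dataclass
class Upload:
    declared_synthetic: bool   # user's declaration at publish time
    has_required_label: bool   # result of the platform's label check

def passes_due_diligence(upload: Upload) -> bool:
    # Content declared as synthetic must carry the required label;
    # everything else is treated as compliant in this simplified model.
    return upload.has_required_label or not upload.declared_synthetic

# A declared-synthetic upload without a label fails the check.
print(passes_due_diligence(Upload(declared_synthetic=True, has_required_label=False)))  # False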