
Grok AI Under Fire: Platform Immunity in Doubt?

10 January 2026

Elon Musk's AI chatbot Grok, developed by xAI and integrated into the social media platform X (formerly Twitter), is facing intense global scrutiny in early 2026. Widespread reports describe users misusing the tool to generate and share obscene, sexualized images, including digitally altered photographs of women and, alarmingly, of minors. These cases often involve simple prompts asking for a subject to be depicted in an obscene pose or in minimal clothing, raising serious concerns about consent, privacy, and child protection.

The controversy erupted prominently in late December 2025 and early January 2026, with reports highlighting instances where Grok produced suggestive or explicit content, including images of minors that violated ethical standards and potentially crossed legal boundaries. xAI acknowledged "lapses in safeguards" and stated that improvements were underway to block such requests entirely. Grok itself posted apologies for specific incidents, emphasizing that child sexual abuse material is illegal and prohibited.

Governments in multiple countries have responded swiftly. In India, the Ministry of Electronics and Information Technology (MeitY) issued a formal notice to X, citing serious failures in platform-level safeguards. The notice directed X to review Grok's technical and governance framework, remove unlawful content, and submit an action-taken report within 72 hours. Failure to comply could result in the loss of "safe harbour" protection under Section 79 of the Information Technology Act, exposing the platform to direct liability.

The government referenced violations of several laws, including the Bharatiya Nyaya Sanhita (BNS), the Protection of Children from Sexual Offences (POCSO) Act, and the Indecent Representation of Women (Prohibition) Act. Similar probes have been launched in France and Malaysia, where authorities have condemned the generation of sexualized deepfakes and flagged the content as potentially illegal under their respective regulations.

A cyber law specialist emphasized the conditional nature of safe harbour under Section 79 of the IT Act. Platforms qualify for immunity only if they act as neutral intermediaries, exercise due diligence, avoid abetting offences, and promptly remove unlawful content upon notification. Failure to meet any of these conditions strips away the exemption, potentially exposing a platform to criminal liability and unlimited damages.

A tech commentator argued that Grok's design choices, including features like "Spicy Mode" that relax filters for edgier content, inherently heighten risk. He pointed out that other AI tools such as ChatGPT and Gemini implement far stricter controls, often combining technological safeguards with human oversight. He criticized X for dismantling much of its safety team after Musk's acquisition, suggesting a deliberate lack of commitment to content moderation. He warned that Grok's more permissive approach reflects its founder's personality and geopolitical confidence, but that the consequences are now evident.

A tech policy expert viewed the government's actions as consistent with long-standing policy rather than a radical shift. She noted that X has been threatened with the loss of safe harbour before, but acknowledged the need to revisit outdated provisions of the IT Act, 2000 in light of AI advancements. Current provisions assume platform neutrality, yet modern algorithms actively curate content. She suggested exploring alternative penalties, such as monetary fines, for quicker enforcement instead of relying solely on lengthy judicial processes.

The experts broadly agreed that Section 79 feels increasingly obsolete in the AI era, where tools actively generate rather than merely host content. Legal experts called for dedicated AI regulation in India, emphasizing accountability, transparency, and built-in guardrails during model development. They stressed that India must craft its own approach, avoiding simple copies of European or Chinese models, and protect citizens from being treated as test subjects by big tech.

While no system can be entirely foolproof (generative AI remains probabilistic and difficult to control completely), experts highlighted viable strategies: safety-by-design during model training, reinforcement learning from human feedback (RLHF), and continuous monitoring. However, these require substantial investment in safety teams, an area where Grok's developer has been criticized for falling short.
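
To make these strategies concrete, below is a minimal sketch, in Python, of a "safety-by-design" gate placed in front of an image generator, with refusals logged for continuous monitoring. Everything in it is a hypothetical stand-in: BLOCKED_TOPICS, is_request_safe, and generate_image are invented names, and a production system would rely on trained classifiers, RLHF-tuned refusal behavior, and human review rather than a keyword list.

```python
# Illustrative sketch of a pre-generation safety gate (not any real system's code).
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("safety-gate")

# Hypothetical placeholder for a trained safety classifier; real systems use
# ML models paired with human oversight, not keyword lists.
BLOCKED_TOPICS = {"minor", "child", "non-consensual", "undress"}

def is_request_safe(prompt: str) -> bool:
    """Return False if the prompt touches any blocked topic."""
    lowered = prompt.lower()
    return not any(topic in lowered for topic in BLOCKED_TOPICS)

def generate_image(prompt: str) -> str:
    """Hypothetical downstream generator, stubbed out for the sketch."""
    return f"<image for: {prompt}>"

def handle_request(prompt: str) -> str:
    if not is_request_safe(prompt):
        # Continuous monitoring: every refusal leaves an audit trail.
        log.warning("Blocked unsafe prompt: %r", prompt)
        return "Request refused: this prompt violates the content policy."
    return generate_image(prompt)

if __name__ == "__main__":
    print(handle_request("a cat wearing sunglasses"))  # passes the gate
    print(handle_request("undress this photo"))        # refused and logged
```

The point of the sketch is the ordering, not the classifier: the safety check runs before any generation happens, and refusals are logged for review, which is what distinguishes safety-by-design from after-the-fact takedowns.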

The debate ultimately centres on a fundamental tension: When AI can produce harmful content at unprecedented scale, should platforms retain legal immunity as mere intermediaries? Or does revoking safe harbour risk chilling innovation and free speech? As India proposes measures like mandatory watermarks, labels, and automated detection for deepfakes, the country is navigating a path toward tighter AI governance — one that balances responsibility with technological progress. This is not just about one tool or platform; it is about drawing the line between unchecked innovation and societal protection in an era where technology increasingly creates the content it hosts.
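
As an illustration of what mandatory labels could look like at the file level, here is a small sketch, assuming Pillow and PNG text metadata as the carrier, that attaches a machine-readable "AI-generated" disclosure to an image. The function and key names are invented for this example, and plain metadata like this is trivially stripped; real provenance schemes (such as C2PA manifests or robust invisible watermarks) are designed to survive cropping and re-encoding.

```python
# Toy provenance label using PNG text metadata (illustrative only).
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def label_as_ai_generated(src: str, dst: str, model_name: str) -> None:
    """Attach a machine-readable AI-disclosure label to a PNG."""
    img = Image.open(src)
    meta = PngInfo()
    meta.add_text("ai-generated", "true")  # hypothetical key names
    meta.add_text("generator", model_name)
    img.save(dst, pnginfo=meta)

def read_labels(path: str) -> dict:
    """Read back text metadata, e.g. for automated checks on upload."""
    return dict(Image.open(path).text)

if __name__ == "__main__":
    # Create a dummy image so the demo is self-contained.
    Image.new("RGB", (64, 64), "gray").save("input.png")
    label_as_ai_generated("input.png", "labeled.png", "example-model")
    print(read_labels("labeled.png"))  # {'ai-generated': 'true', 'generator': 'example-model'}
```

Automated detection then reduces to checking for such labels on upload, which is one reason labeling mandates are usually paired with tamper-resistant watermarking requirements.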