IT Rules 2026: 5 Rights Every Social Media User in India Must Know Right Now

If you use social media in India — whether WhatsApp, Instagram, X (Twitter), YouTube, or Facebook — your legal rights online changed significantly on February 20, 2026. The Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026, have come into force and they place powerful new obligations on platforms that directly protect you as a user.

Most social media users in India have no idea these rights exist. This article changes that.

What Are the IT Rules 2026?

The IT Rules 2026 are an amendment to the existing IT (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021. They were notified by the Ministry of Electronics and Information Technology (MeitY) on February 10, 2026, and became legally enforceable on February 20, 2026.

These rules govern every “significant social media intermediary” — any platform with more than 50 lakh (5 million) registered users in India. This means Meta (Facebook, Instagram, WhatsApp), X Corp (Twitter), Google (YouTube), Telegram, Snapchat, and dozens of other platforms are all covered.

The key question the IT Rules 2026 answer is: what are your rights as a user, and what are platforms’ obligations toward you?

Here are the five most important rights you now have.


Right 1: Platforms Must Remove Illegal Content Within 3 Hours of Your Complaint

This is the most dramatic change in the IT Rules 2026.
Under the previous IT Rules 2021, platforms had 36 hours to remove certain categories of illegal content after receiving a complaint or government order. Under the IT Rules 2026, that window has been slashed to just 3 hours for specified categories of harmful content.

What content falls under the 3-hour rule?

- AI-generated or deepfake content used to harm, defame, or harass
- Non-consensual intimate imagery (NCII) — sometimes called “revenge porn”
- Child sexual abuse material (CSAM)
- Content impersonating a public servant, government official, or court officer
- False or fabricated electronic records intended to deceive
- Content that facilitates the sale of arms, explosives, or narcotics

What this means for you: If someone posts a deepfake video of you, fabricates a statement in your name, or posts intimate content of you without consent, you can now demand removal within 3 hours. If the platform fails to act, it risks losing its Safe Harbour protection (explained below) and can be held legally liable.
Practical tip: File your complaint in writing via the platform’s official grievance portal, keep a screenshot of your complaint with timestamp, and follow up after 3 hours if no action is taken.

Right 2: AI-Generated Content Using Your Face or Voice Without Consent Is Now Illegal

The IT Rules 2026 introduce India’s first formal legal definition and regulation of AI-generated content, referred to as Synthetically Generated Information (SGI).
Under this new framework:

- Any AI-generated image, video, audio, or text that uses your likeness, voice, face, or personal identity without your consent is prohibited
- Platforms are required to take down such content when reported
- The same 3-hour removal window applies to non-consensual AI deepfakes

This matters because AI deepfake technology has become frighteningly accessible. Anyone with a smartphone can now generate a realistic video of you saying or doing things you never said or did. Until the IT Rules 2026, India had no specific legal framework to address this. Now it does.

What you should do: If you discover an AI-generated video, image, or audio clip using your likeness without your consent, file an immediate complaint with the platform citing “Synthetically Generated Information — non-consensual” under the IT Rules 2026.

Right 3: Platforms Must Warn You Before Suspending Your Account — Every 3 Months

Under the IT Rules 2026, platforms are required to give users advance notice and an opportunity to appeal before suspending or terminating their account. More significantly, platforms must send users a periodic notification — at least once every 3 months — informing them of any community guidelines violations recorded against their account.

This prevents platforms from suddenly suspending accounts without any warning, which was a common complaint under the earlier framework.

What this means for you: You now have a right to know if your account is at risk before action is taken. If a periodic compliance notice identifies a violation against your account, you can review and appeal it before it escalates to suspension.

Right 4: All AI-Generated Content Must Carry a Visible Label — No Exceptions

This right protects you as a consumer of content rather than as a creator.

Under IT Rules 2026, all AI-generated or synthetically generated content — whether created by a platform, a user, or a third-party AI tool — must carry a clearly visible label disclosing that the content is AI-generated. This label must also contain embedded metadata that allows the content to be traced back to its source.

Platforms are prohibited from removing or suppressing these labels. Users who upload AI-generated content must declare its nature at the point of upload, and platforms must verify these declarations.

Why this matters: Misinformation and propaganda powered by AI deepfakes have become a critical threat to democratic processes and individual reputations. The mandatory labelling requirement gives you, as a viewer, the right to know whether what you are watching is real or artificially generated.

Right 5: Your Grievance Must Be Acknowledged Within 7 Days

Under the IT Rules 2026, every significant social media intermediary must:
- Appoint a Grievance Officer based in India
- Acknowledge any user complaint within 7 days of receipt
- Resolve the complaint within 15 days (for most categories)
- Publish a monthly transparency report disclosing the number of complaints received and actions taken

This creates a formal, trackable grievance redressal system that was absent or unenforced under earlier frameworks.

Practical tip: Always use the platform’s official in-app grievance portal when filing complaints, note the ticket number, and keep records of your complaint. If the platform fails to acknowledge within 7 days, you can escalate to MeitY’s grievance portal or approach a court of competent jurisdiction.

What Happens If Platforms Ignore These Rights?

If a platform fails to comply with the IT Rules 2026, it loses its Safe Harbour protection under Section 79 of the IT Act, 2000. Safe Harbour is the legal immunity that protects platforms from being held liable for content posted by users. Without it, platforms like Meta, X, and YouTube can be directly sued in Indian courts for illegal content posted on their platforms.

This is a powerful enforcement mechanism. The loss of Safe Harbour is not a fine — it is an exposure to unlimited civil and criminal liability. This is why the IT Rules 2026 are taken very seriously by platforms operating in India.

Source: IT (Intermediary Guidelines and Digital Media Ethics Code) Amendment Rules, 2026 — MeitY Notification dated February 10, 2026.


Next: Deep Dive — What Is Safe Harbour and Why It Matters →
