India’s AI Regulation Bill Is the Most Consequential Tech Policy Nobody in the West Is Talking About
While Brussels and Washington dominate the global conversation on AI governance, a regulatory framework that could reshape how artificial intelligence is built, deployed, and litigated across the world’s most populous democracy is moving quietly through New Delhi — and most Western policy circles have barely noticed.

India’s draft Digital India Act, the proposed successor to the Information Technology Act, 2000, contains AI liability provisions that would hold platforms legally accountable for harms caused by AI-generated content and automated decision-making systems. For multinational technology companies already straining under the compliance demands of the EU AI Act, the prospect of a parallel — and in some respects more expansive — regulatory regime covering 1.4 billion users is not a distant hypothetical. It is an approaching deadline.
What the Digital India Act Actually Proposes
The Digital India Act has been in iterative development for several years, with India’s Ministry of Electronics and Information Technology (MeitY) releasing consultation documents that signal a significant departure from the relatively permissive posture of the 2000 legislation it would replace.
Among its most consequential elements are provisions addressing AI liability — specifically, frameworks that would assign legal responsibility to intermediaries and platform operators when AI systems cause demonstrable harm to users. Unlike the EU AI Act, which organizes obligations primarily around risk classification tiers and pre-market conformity assessments, India’s approach appears more focused on post-harm accountability and the duties of platforms operating at scale within Indian jurisdiction.
One area of particular note in the ministry’s consultation documents is the treatment of “significant harm” caused by algorithmic systems — a category that could encompass everything from discriminatory automated lending decisions to AI-generated misinformation that incites violence. Platforms deploying such systems would face potential civil and, in some formulations, criminal liability if they cannot demonstrate adequate safeguards. The precise contours of these provisions remain subject to revision, but the directional intent is clear.
Why Scale Changes Everything
India’s AI regulation has historically been treated as a secondary concern by global tech policy analysts, largely because the country’s regulatory enforcement capacity has lagged behind its legislative ambition. That calculus is changing.
The sheer scale of the affected user base means that no major AI platform — whether a large language model provider, a social media company deploying recommendation algorithms, or a fintech firm using automated credit scoring — can treat Indian compliance as a niche market problem. A liability framework covering 1.4 billion users is not a regional footnote. It is, by definition, a global compliance event.
India’s digital economy is also not static. With one of the world’s fastest-growing smartphone user bases, a rapidly expanding fintech sector, and significant government investment in AI infrastructure through initiatives like IndiaAI, the country is simultaneously a major AI consumer market and an emerging AI producer. Regulation here will shape not just how foreign companies operate in India, but how Indian AI companies build products intended for export.
The Global South’s Answer to Brussels
The EU AI Act is frequently described as a potential global standard — the “Brussels effect” applied to artificial intelligence. But the Brussels effect has always had limits in the Global South, where regulatory models designed around European institutional structures and enforcement mechanisms do not translate cleanly.
India’s Digital India Act, if passed in something close to its current form, offers a different model: AI liability built around the realities of a high-volume, lower-income, linguistically diverse digital market where harms from algorithmic systems often fall on populations with limited legal recourse. That framing — accountability oriented toward protecting users at the bottom of the economic pyramid, rather than managing enterprise risk — could prove more exportable to other emerging economies than anything currently coming out of Brussels or Washington.
Several countries across Southeast Asia, Africa, and Latin America are watching India’s regulatory process closely. A workable Indian framework for AI liability could seed a parallel standard-setting ecosystem that operates largely outside the transatlantic policy conversation currently dominating AI governance discourse.
What Multinational Companies Must Do Now
For technology companies with significant Indian operations or user bases, the strategic imperative is straightforward even if the legislative timeline is not: treat the Digital India Act as a live compliance risk, not a future planning item.
That means auditing AI systems against the liability frameworks outlined in ministry consultation documents, mapping which deployed systems would fall under potential “significant harm” categories, and engaging directly with the Indian regulatory process through public comment periods and industry association channels. Companies that waited until the EU AI Act was finalized to begin compliance work discovered how expensive late preparation can be. The same mistake in the Indian context would be compounded by scale.
It also means taking seriously the possibility that India and the EU may not harmonize their AI liability standards. Compliance infrastructure built solely around EU risk-tier classifications may require substantial rearchitecting to satisfy India’s post-harm accountability requirements. Building for both frameworks from the outset is significantly cheaper than retrofitting later.
The Conversation the West Needs to Have
Tech policy analysts and AI governance researchers focused primarily on the transatlantic corridor are working from an increasingly incomplete map. The Digital India Act represents a serious, sophisticated attempt by the world’s largest democracy to assert regulatory sovereignty over AI systems affecting its citizens — and to do so on terms that reflect Indian legal traditions, market realities, and public interest priorities rather than imported frameworks.
Whether India’s approach ultimately strengthens global AI governance or fragments it into incompatible regional regimes will depend in large part on whether Western policymakers, companies, and researchers engage with it seriously and early. The window for that engagement — before the legislation hardens into final form — is open now. It will not stay open indefinitely.