With momentum building around the AI Impact Summit later this month, the mainstream conversation also risks being the least interesting one: While the West engages in the frontier-model race and systemic-risk debates, India’s AI moat is often cast as a purely “applied AI for development” story. Not only is this contrast overdrawn, but it also misses the wider implications for policy, markets and the constraints India faces.
India is investing in sovereign compute, foundational-model capability and research ecosystems. But even if it trains bigger models, India’s political economy and social context mean impact will be decided downstream in the AI stack: In procurement, integration, evaluation and governance, once AI begins to influence entitlements and market access. India’s biggest test is whether it can scale AI deployment without turning public services, welfare delivery and digital consumer markets into black-box decision factories.
The Economic Survey 2025-26 officially charts a “frugal, application-focused” pathway, away from chasing the frontier at prohibitive fiscal cost, as a scalable market opportunity for India — one that is rooted in bottom-up, small-scale, sectoral adoption rather than prestige competition. The Budget’s Bharat-VISTAAR announcement, proposing a multilingual AI tool for farmers that integrates AgriStack portals with ICAR’s agricultural practices, exemplifies the same priorities: Usefulness over benchmarks. In e-commerce, Meesho, for instance, is building and scaling AI-powered chat and vernacular voice agents to support first-time online shoppers (including in rural and non-English contexts), using AI as workflow infrastructure for customer discovery, conversion and support.
But frugal is not code for low stakes. The stakes are high precisely because deployment and societal diffusion have bearings on rights, entitlements and livelihoods.
In the US and Europe, AI governance debates often centre on frontier-model safety frameworks, compute concentration, and rules aimed at a small set of general-purpose model providers. India will confront many of the same underlying risks, including opacity, misuse, security breaches, privacy leakage and weak redressal mechanisms, but expressed through deployments embedded across welfare, education, hiring, lending, healthcare and compliance.
The risk categories also do not split neatly into “frontier risks” versus “application risks”. Many cut across the layers of the stack. A frontier model’s unreliability translates into a welfare system’s wrongful exclusion. Lab opacity becomes a citizen’s inability to understand or appeal a decision.
Deployment use-cases show why these issues matter. Telangana’s AI-led welfare de-duplication exercise under the Samagra Vedika Programme reportedly cut off subsidised food support for thousands, triggered by faulty proxies and data errors. And the state is not only a deployer of algorithmic systems but increasingly a user of AI to audit itself. The Comptroller and Auditor General’s AI-based audits detected large numbers of fraudulent cases in state beneficiary schemes. While that potentially saves the exchequer money and enhances governmental efficiency, it also underlines the speed at which algorithmic tools are being deployed in high-stakes governance, and the urgency of building rules for transparency and contestability.
Further, a DPI-style approach to AI infrastructure, as recently advocated in a December 2025 white paper by India’s Principal Scientific Advisor’s Office, would mean shared portals for data, models and integration that can democratise deployment. But it also changes the dynamics of risk and harm.
Three risks deserve careful attention in the deployment context. First, procurement capture and vendor lock-in: In a deployment-first economy, procurement contracts that do not mandate auditability, portability and interoperability could create vendor-dependency risks, compounded by rising switching costs. In a DPI-like ecosystem, lock-in can also shift from the application vendor to the marketplace operator, or the approved integration layer can become the new gatekeeper.
Second, interpretability risk and black boxes in high-stakes decisions: During the Special Intensive Revision in West Bengal, the draft electoral roll reportedly featured large-scale deletions and flags marking voters as “dead” or “missing”, with a separate list of logical discrepancies released only after judicial guidance. This highlights how verification labels, if not explainable, can lead to de facto administrative exclusion. The issue here extends beyond model bias to unreasoned authority, which gets further complicated in a DPI-style architecture once multiple schemes start relying on shared models and verification services. A black box can then become a systemic governance failure. This necessitates human-in-the-loop defaults in workflows, with clearly mandated override points.
The frontier ecosystem is increasingly treating interpretability as investable infrastructure. Goodfire, an AI lab focused on building with interpretability, raised $150 million at a $1.25 billion valuation, a signal that the ambition to understand model behaviour is becoming productised. India’s deployment stack will need interpretable workflows that can withstand institutional audits, public appeals, and court rulings.
Third, privacy risk: Applied AI’s utility rests on data expansion through access to high-quality metadata, database linkages, broader data retention and profiling. While this can improve fraud detection, especially as a DPI-style system makes linkages easier, interoperability without purpose limitation, data-minimisation defaults, access controls and retention restrictions baked in can lead to data repurposing at scale, with ratchet effects.
A deployment-first strategy, therefore, demands a robust evaluation infrastructure: The institutional ability to test, audit and monitor systems across languages and contexts. But it also necessitates sharper attention to the application layer, the full workflow where harms are produced. Consistent with emerging guidance for developers, in practice this entails defining the intended purpose or use, mapping affected stakeholders, specifying harm categories upfront, and ensuring human fallback channels. AI deployments should be treated as lifecycle systems, not one-off pilots.
Contestability, too, has to feature in product choices, as a system can be “accurate” in the aggregate yet illegitimate in practice if it offers no mechanism to challenge outcomes. Clear notice-and-consent banners, accessible grievance mechanisms, audit trails that a human can review, and escalation routes that go beyond a vendor helpdesk are essential. With DPI-style deployment, roll-out acceleration must be complemented with standardised, federated accountability: decision logs, incident reporting, continuous monitoring for model drift, and appeal mechanisms embedded as platform capabilities that travel with the user, not with each vendor’s app.
This story also has market-facing implications. A deployment-first AI economy would shift value away from the best model and toward implementation through integration, data pipelines, compliance and trust. Infosys, for instance, has released an open-source Responsible AI Toolkit that integrates capabilities like privacy, explainability, fairness and hallucination detection as reusable APIs. These early signals suggest that durable advantage will come not only through model access, but through auditability and trustworthiness.
India does not need to chase the frontier to lead; frontier capability without robust deployment governance is just latent power. If “impact” is the goal, then privacy, explainability and contestability must be baked in ex ante, not bolted on as ex post patchwork.
The writer works with the Center for Security and Emerging Technology (CSET) in Washington DC on global AI governance research
