This publication is part of Carnegie India's Practitioner Paper Series, which highlights the experiences of professionals from the worlds of politics, public administration, and business.
A government department pilots an AI assistant for citizen service requests. It can triage applications, draft responses, and translate queries across languages. The pilot looks like a leap in efficiency. A month later, staff still follow the old process in parallel, supervisors are nervous about accountability, and the “new system” is a side screen that nobody trusts.
This is the most common AI story today: a failure of adoption, not of the model.
The warning signs are visible in the data. Gartner expects a meaningful share of generative AI projects to be abandoned after proof of concept, citing poor data quality, inadequate risk controls, escalating costs, and unclear business value. IDC research, as reported by CIO.com, similarly finds that most AI proofs of concept do not reach widespread deployment.
These are not arguments against AI. They signal where the bottleneck has moved. The constraint is no longer raw capability but whether institutions and organizations can absorb AI into real work and integrate AI reasoning with human decision-making.
In most rooms, the question is not whether a model can do something in a demo. The question is: where does AI add value? For which user, in which workflow, with what measurable improvement, and with what accountability when it is wrong?
Why Is Adoption Hard?
Shankar Maruwada, CEO and co-founder of the EkStep Foundation, argues that “adoption is proving harder than invention, especially for general-purpose technologies, as it once was for electricity. As a result, AI adoption remains on the sidelines.”
A pilot operates in a controlled environment, but production does not. What appears tractable in a limited trial must, in reality, contend with a range of operational and organizational complexities, including data flows, legacy systems, workforce turnover, competing incentives, procurement requirements, and compliance constraints. Surveys of enterprise AI implementations show that a large share of promising AI projects never reach full production because these contextual and structural hurdles are often underestimated in early stages of development. In many sectors, AI initiatives are stuck in “pilot purgatory,” with technical success in controlled settings failing to translate into reliable, scalable operational use across diverse environments.
This gap between demonstration and deployment is further compounded by what researchers describe as fragmented or siloed adoption. Solutions that may function well in one context often do not port cleanly across volumes, geographies, or programs, slowing broader uptake and raising costs. Systematic reviews of AI adoption in production contexts find that organizational and technological factors interact in ways that especially complicate adoption, and that research into these adoption pathways remains in the early stages.
The absence of shared institutional vocabulary and governance frameworks further impedes adoption. When technology builders, operators, and regulators define core terms such as use case, impact, or benchmark differently, the result can be miscommunication, procurement delays, and unclear decision criteria.
Finally, adoption is shaped by broader institutional gaps in ownership and accountability. Without clear lines of responsibility and governance, organizations hesitate to embed AI systems deeply into core processes. Multiple studies on AI adoption have noted that institutional readiness, governance frameworks, and cross-functional coordination are critical determinants of success, and that their absence can result in stalled deployments or parallel manual systems. Adoption, then, is not a technology problem. It is a systems problem.
Creating Value Through Diffusion
If AI adoption is a systems problem, then the right mental model is diffusion, not invention alone.
Jeffrey Ding uses the phrase “diffusion capacity” and defines it as the ability “to spread and adopt innovations… across productive processes.” That is the frame most AI programs need. The advantage does not come from being first to show a demo. It comes from being able to embed AI into daily work, repeatedly, safely, and at scale. This is also why the same organizations can look simultaneously “AI forward” and “AI stuck.” They are inventing pilots but not diffusing them.
Use Case as a Unit of Adoption
A use case is not “deploy a chatbot” or “buy a copilot.” It is a commitment to improve a specific outcome for a specific person in a specific context. Thinking in use cases matters because it keeps the conversation anchored in value. It forces choices about what must change in the workflow to solve a problem people actually face, not just about what the model can generate.
A practical use case definition has three parts:
- Persona: Who uses this in the real world, and what must be true for them to trust it? Examples: a caseworker with a backlog, a supervisor who signs off, a frontline clerk evaluated on speed, a citizen who needs an explanation in their language.
- Purpose: What problem are we solving? What is the key pain point? Examples: a judge who needs real-time transcription of proceedings, a farmer seeking crop advisory based on weather and market prices, a student facing learning gaps.
- Pathway: What does it take to move from an impressive pilot to a population-scale system? Examples: Can it scale across languages? Is safety designed in rather than added as an afterthought? Can it be accessed over voice? Do the institutions involved trust and own the output of AI?
A use-case-first approach does not slow AI down. It stops AI from drifting into toy deployments that cannot survive contact with reality.
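To make the three-part definition concrete, the sketch below shows one way a team might record use cases as structured records before any build begins. It is an illustrative sketch, not part of any formal specification; the field names and the completeness check are assumptions introduced for the example.

```python
from dataclasses import dataclass, field

@dataclass
class UseCase:
    """A use case: a commitment to improve a specific outcome
    for a specific person in a specific context."""
    persona: str   # who uses this in the real world, and what they must trust
    purpose: str   # the problem being solved; the key pain point
    pathway: list[str] = field(default_factory=list)  # pilot-to-scale questions

    def is_actionable(self) -> bool:
        # A use case is only actionable when all three parts are filled in.
        return bool(self.persona and self.purpose and self.pathway)

# Example drawn from the list above: a farmer seeking crop advisory.
advisory = UseCase(
    persona="Farmer who needs guidance in Marathi, accessed over voice",
    purpose="Crop advisory grounded in local weather and market prices",
    pathway=[
        "Can it scale across languages?",
        "Is safety designed in from the start?",
        "Can it be accessed over voice?",
        "Do institutions trust and own the output?",
    ],
)
assert advisory.is_actionable()
```

Capturing use cases this way keeps demos honest: a deployment with an empty pathway is a pilot, not a commitment.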
When Use Cases Multiply, a Pattern Appears
When persona, purpose, and pathway are considered across multiple deployments, a pattern becomes evident. The specific AI tool varies by use case, but the surrounding work repeats, and the same questions recur:
- How to access the right data, with the right permissions?
- How to bring in evaluations and benchmarks that matter to the persona, rather than ones borrowed from the technology's creators?
- How to document guardrails so users know when not to trust the system?
- How to ensure consistent answers across different large language models?
- How to train users and redesign workflows so that humans and AI work well together?
If each deployment is treated as a one-off, this scaffolding has to be rebuilt every time, a process that is expensive, slow, and fragile. The pattern signals that horizontal capacities can be built once and reused across use cases. A shared framework can also improve the AI tool itself by channeling feedback from each deployment back to its builders.
What Is a UCAF, and What Does It Do?
A Use Case Adoption Framework, or UCAF, studies adoption across multiple use cases to identify common capabilities and inputs required for effective use and sustained impact. When these inputs recur, they become horizontal enablers that support use cases across diverse sectors. UCAF becomes useful when similar use cases appear across multiple organizations and institutions, and it becomes necessary to distinguish what is context-specific from what can be reused. Domain context, user needs, and outcome measures remain tied to each deployment. The surrounding adoption infrastructure often does not.
By making this separation explicit, UCAF makes AI adoption repeatable across institutions. It gives leaders a way to ask consistent questions across diverse use cases, and enables practitioners to reuse what works rather than relearn the same lessons in parallel. A core set of horizontal enablers tends to recur across sectors and use cases (a checklist sketch follows the list):
- Data readiness and access: How data is exchanged, how AI-ready it is, how permissions are handled, and how data pipelines are maintained.
- Language capabilities: Which languages and dialects must be supported, how translation and localization are handled, how performance varies across languages, and how language coverage affects access, equity, and trust.
- Voice as a modality: When voice input or output is required, how speech recognition performs across accents and environments, and how it integrates with existing workflows and accessibility needs.
- Workforce reimagination: How roles, incentives, and workflows change with AI adoption; what tasks are augmented or automated; how human judgment is preserved; and how training supports sustained use.
- Guardrails: What constraints are placed on system behavior, how risks are identified and mitigated, how human oversight and escalation are triggered, and how accountability is defined when systems fail or cause harm.
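Because these enablers repeat, they can be expressed as a shared checklist that every new deployment is assessed against, rather than rediscovered each time. The sketch below is one minimal way to do that; the boolean scoring scheme and the function name are assumptions for illustration, not part of UCAF itself.

```python
# The five horizontal enablers named above, shared across use cases.
ENABLERS = [
    "data_readiness_and_access",
    "language_capabilities",
    "voice_as_a_modality",
    "workforce_reimagination",
    "guardrails",
]

def readiness_gaps(assessment: dict[str, bool]) -> list[str]:
    """Return the enablers a deployment still lacks.
    `assessment` maps each enabler to whether it is in place."""
    return [e for e in ENABLERS if not assessment.get(e, False)]

# Example: a pilot with data and language work done, but no guardrails yet.
pilot = {
    "data_readiness_and_access": True,
    "language_capabilities": True,
    "voice_as_a_modality": False,
    "workforce_reimagination": False,
    "guardrails": False,
}
print(readiness_gaps(pilot))
# -> ['voice_as_a_modality', 'workforce_reimagination', 'guardrails']
```

The point is not the code but the reuse: the same checklist, evaluation benchmarks, and guardrail documentation travel with the framework from one deployment to the next.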
A use-case-first approach, strengthened by a UCAF mechanism, changes the unit of progress. Instead of counting pilots, it counts converted use cases that are working reliably in production. Instead of celebrating demos, it builds adoption infrastructure that makes future deployments easier. Instead of treating accountability as a legal footnote, it designs it into the workflow so users need not carry institutional risk alone.
This is how adoption compounds. Each successful use case leaves behind assets that accelerate the next one—evaluation benchmarks for low-resource languages, data flows and readiness architecture, monitoring playbooks, and training modules, among many others. It also clarifies what to stop doing. A use case portfolio, supported by shared horizontal enablers, produces fewer but stronger deployments.
This dynamic is evident in Africa. According to Keyzom Ngodup Massally, Director of the AI Hub for Sustainable Development at the UNDP, innovators participating in the G7–Africa AI Hub Infrastructure Builder and Compute Accelerator Programmes have identified central barriers to scaling AI adoption. These include context-specific local language models, affordable and sustainable compute aligned to real workloads in agriculture, health, and climate resilience, and pricing models compatible with local mobile money systems. Drawing on insights from thousands of connected use cases, she emphasized that reimagining partnerships as networks enables ecosystems to emerge organically, allowing safe and scalable adoption once real problems are solved.1
For example, the OpenAgriNet (OAN) initiative uses AI to close last-mile advisory gaps through voice-first, multilingual delivery of localized weather, market, and crop guidance. OAN’s Maharashtra pilot, MahaVISTAAR, is an open-network platform that advances the state’s digital inclusion goals by providing tailored, reliable, location-specific support that farmers can access by speaking in Marathi or Hindi, without a literacy barrier. UCAF guides the design of AI systems that work across Marathi and Hindi, integrate moderation modules for casual queries, and safeguard against misinformation by ensuring that replies carry provenance and cite data sources.
Conclusion
The value of AI is revealed when people actually use it, and that means moving from pilots to widespread, everyday adoption. Safe AI impact at scale depends on consistent focus, strong institutions, and safety built into every stage. The UCAF is an actionable framework that connects vertical sectors, where value is created, with horizontal enablers that deliver scalability and sustainability, all anchored in real-world use cases. Designed for global applicability, it establishes shared definitions to harmonize vocabulary across stakeholders and geographies, and it maps the path from ideas to large-scale impact. It is along that path that the real work of adoption unfolds.
Notes
1. Keyzom Ngodup Massally, AI Impact Summit Preparatory Meeting, Global Technology Summit Innovation Dialogue, New Delhi, December 14, 2025.
