The Responsible AI Future Foundation Co-Hosts the Global AI Summit in Abu Dhabi and Expands Its Executive Leadership

For much of the modern digital era, the dominant narrative has been of tech innovation flowing in one direction from a handful of dominant global forces. This is increasingly being challenged as AI presents an opportunity for new centres of innovation to emerge and ensure technology is shaped by the communities it serves.

Speaking at the India AI Impact Summit, I made the case that if AI power remains concentrated in a few places, the world risks drifting toward a new form of digital inequality — one where some countries create intelligence and others only consume it. A form of digital dependency that produces a homogenised monoculture: beige, uniform, and designed for someone else’s context.

AI sovereignty isn’t about nationalism or closing borders. It’s about having options — the genuine ability to build, adapt, and decide how technology is used at home. It means local languages matter. Local problems matter. And local data doesn’t automatically become someone else’s product.

India is demonstrating what that looks like in practice, with this direction reinforced at the highest levels of government. S. Krishnan, India’s Minister of State for Electronics and IT, has consistently emphasised the importance of building domestic capability — from compute infrastructure to multilingual models and public-interest digital systems — so that India participates in shaping AI rather than simply adopting it.


When he spoke at the Abu Dhabi AI Summit in November 2025 — co-organised by the Responsible AI Future Foundation (RAIFF) — his message centred on global cooperation alongside national capacity-building.

At the India AI Impact Summit, that same theme was evident in the focus on applied deployment, local relevance, and scalable infrastructure. The through-line is clear: participation, not isolation.

India already runs digital systems at population scale. Its startups are proving that AI can serve small farmers, local clinics, and first-generation entrepreneurs — not just large corporations. Its multilingual ecosystem is forcing model innovation well beyond English dominance. Indian founders are using AI as a leveller, building cost-efficient, India-first solutions across healthcare, agriculture, education, logistics, and governance.

This bottom-up innovation model matters enormously. A two-person startup can now serve millions. Health-tech founders are compressing years of clinical backlog with AI diagnostics. Agri-tech platforms are giving small farmers predictive intelligence once reserved for multinational giants. This is what genuine inclusion looks like: stronger local capability, home-grown innovation, and technology that serves people rather than extracting from them.

And none of this is happening slowly. At the Summit, Google DeepMind CEO Demis Hassabis suggested AGI could be achieved within five years — an apparent halving of his projected timeline from the year before. OpenAI’s Sam Altman claimed the world might be only a couple of years away from early forms of superintelligence. Anthropic’s Dario Amodei argued that advanced AI could potentially drive 25% annual GDP growth for India, compared to 10% for rich nations. These are not distant forecasts. The urgency is real, which is exactly why the decisions being made now — about who builds AI, for whom, and on whose terms — are so consequential.

India’s approach matters because it demonstrates that balance is achievable. You can build AI that works for your people. You can build AI that is deeply fitted to your context — your languages, your use cases, your laws and constraints. A model that works in Hindi, Tamil, or Marathi for a farmer in rural Karnataka is worth infinitely more to that farmer than one designed with a San Francisco engineer as the default user.

If India can build high-quality AI supporting 22 Indian languages, it creates a template other diverse nations can adopt. And the implications extend far beyond India’s borders. Imagine the AI future if African nations could co-develop agriculture and climate datasets. If Southeast Asia aligned on data governance standards. If Global South nations co-shaped safety frameworks instead of importing them wholesale from elsewhere.

In risk terms alone, that world is more resilient. AI becomes multipolar rather than monopolised. And sovereignty becomes a shared asset rather than a privilege reserved for the powerful.

India has provided a roadmap for dozens of other Global South countries trying to work out where they fit in a world where two nations currently dominate the technology that will define this century. It has also shown that AI governance is no longer a conversation held amongst a few — it is increasingly a conversation held with the many.

That shift, if we protect and extend it, may matter as much as the technology itself. This is precisely why the Responsible AI Future Foundation — which I lead as Executive Chair — exists.

Based in Abu Dhabi, with a focus spanning the United Arab Emirates and the wider Global South, RAIFF was created to ensure that the future of advanced AI includes broader participation — not only in deployment, but in research, safety, and governance. Its purpose is to build bridges between frontier AI development and emerging economies, support capacity-building in high-impact regions, and help shape standards that reflect diverse societal needs.


In doing so, RAIFF is not advocating fragmentation of the AI ecosystem; it is working to ensure that its benefits, responsibilities, and decision-making structures are more evenly shared.

AI is advancing at extraordinary speed. Governance, capability-building, and institutional design are struggling to keep pace. That gap is where long-term inequality — or long-term resilience — will be determined.

India’s approach demonstrates that the future of AI does not have to be written in a single geography. If more countries build the capacity to shape and adapt these systems for their own contexts, AI will become not just more powerful, but more legitimate. And legitimacy, in a technology this transformative, may prove to be its most important feature.

Baroness Joanna Shields OBE, Executive Chair, Responsible AI Future Foundation