The Infrastructure of Power: Artificial Intelligence between Global Innovation & Structural Exploitation

This article was originally written in German by Geraldine de Bastion for the INKOTA Network on 4th March 2026. Read the original here.

In the public eye, artificial intelligence (AI) is often portrayed as a technological marvel operating in an immaterial cloud, free from physical constraints. Yet the reality in Nairobi, Kenya, paints a radically different picture: there, thousands of data workers spend their entire working day flagging content that is hateful, violent or sexually abusive. This often traumatic work is a necessary prerequisite for generative AI systems to learn how to filter out such content and appear ‘safe’ to end-users. In this parallel universe of ‘invisible’ digital labour, content annotation and moderation tasks have become a structural component of the global AI supply chain.

This work is routinely outsourced to low-wage markets and portrayed in the tech industry’s dominant narrative as a mere side effect of innovation. In reality, however, it is fundamental to the functionality of modern language models. A study highlighting this through the case of the company Sama in Kenya, which worked on behalf of OpenAI, revealed that workers there earned between just US$1.32 and US$2 per hour whilst reviewing deeply disturbing material to make systems like ChatGPT ‘less toxic’. The report describes serious psychological harm and a complete lack of support for those affected. A bitter paradox emerges: ‘safe AI’ for the Global North is often achieved at the expense of workers in the Global South, who are exposed to unsafe and hazardous conditions. When a technology relies on cheap labour from the Global South but fails to simultaneously strengthen those workers’ skills and bargaining power, this constitutes a new form of exploitation rather than an equal partnership.

AI as infrastructure: Beyond the software metaphor

In current development cooperation, AI is frequently introduced as a one-off tool that is simply added to existing projects – for example, in the form of a chatbot for public information, a risk model for social security systems, or a diagnostic model in healthcare. Yet AI is far more than an isolated software application. It must be understood as a coherent stack of components comprising data, massive computing power, cloud contracts, technical standards and organisational routines. Once these systems are embedded in social processes, they behave like infrastructure: they quietly shape the realm of possibility, determine who sets the conditions for their use, and decide who ultimately bears the consequences if these complex systems fail.

Technically speaking, most of these applications are based on machine learning systems that are trained using vast datasets to recognise patterns and, on that basis, make predictions, carry out assessments or issue recommendations. While generative models producing text and images are highly visible in the media, models with significant political implications often operate behind the scenes, embedded within institutions where they detect fraud, assess creditworthiness, prioritise legal cases, or automate eligibility checks for social benefits. For civil society experts, therefore, it is not technical innovation that is decisive, but governance — from the Latin gubernare, meaning “to steer, to guide” — that is, the principles, structures, and processes for the responsible management and oversight of organisations, companies, or states. These systems do not “innovate” on their own; they optimise objectives based on available data. If that data is biased or fails to reflect local conditions, AI reproduces exclusion at scale, and does so under the guise of technological objectivity.

The “imperial gaze” and the geopolitics of computing power

The benefits and risks of AI are distributed extremely unevenly across the globe. Even the language we use to describe these phenomena can obscure power relations. Indeed, experts warn that the term ‘Global South’ is often used in AI ethics as an imprecise catch-all term that obscures the diversity of political economies and institutional contexts. The term risks reproducing stereotypes or an ‘imperial gaze’ when it functions as a synonym for deficit rather than prompting examination of concrete power structures, financial flows, and governance asymmetries.

If “AI for the Global South” functions as a uniform, paternalistic narrative, it becomes easier to sell standard solutions from the North as universal answers. This makes it far harder for local communities to shape decisions about their own data, provider accountability, and value creation.

If AI is understood as infrastructure, then access to computing capacity becomes synonymous with political and economic agency. The ability to train, adapt, and critically review one’s own models depends on reliable connectivity and affordable computing power — resources that remain extremely unevenly distributed. Many countries face structural constraints that force reliance on external cloud providers. This drastically limits not only domestic value creation but also national bargaining power.

This concentration is also geopolitically motivated. US-China competition over semiconductor chips, cloud capacity, and global standards increasingly determines which technologies are available to other actors, and on what terms. Procurement decisions for AI infrastructure are thus bound up with export controls, security doctrines, and provider geopolitics — a dynamic that development cooperation can no longer treat as mere background noise.

The material basis: energy, water and the limits to growth

Contrary to the narrative of ‘clean’ digitalisation, computing power is not a metaphysical matter. Data centres consume enormous amounts of electricity, require extensive cooling, and often use significant quantities of water — meaning national AI capacity depends directly on grid stability, land access, and local resource management. This resource intensity is already generating public controversy where local communities bear environmental costs directly.

The implications for North-South relations are particularly precarious: one country may absorb the full energy burden and environmental consequences, while monetisation and high-margin product development occur almost exclusively elsewhere. Research asserts that compute and data storage rank among AI’s most resource-intensive inputs, and that the associated costs and benefits are likely to be distributed in deeply unequal ways across the globe. Energy must therefore be part of the same conversation as sovereignty: AI infrastructure is not just about digital modernisation, but about the global distribution of resource burdens and value creation.

Education and the ‘class issue’ of AI

Moreover, access to physical infrastructure does not resolve the second dividing line: the gap between being a mere user and being able to develop AI applications for local problem-solving. This gap is shaped by education, organisational capacity, and the affordability of professional tools.

AI-related roles often place extremely high demands on formal qualifications. Analyses show that virtually all specialist AI roles require at least a bachelor’s degree, and often even a postgraduate qualification. The emerging AI economy thus rewards those who already have privileged access to higher education. At the same time, many people most affected by automation or algorithmic management have little opportunity to acquire these necessary qualifications retrospectively.

Furthermore, the ability to act is restricted by a tiered system based on ability to pay. Whilst basic versions are often free, the most powerful functions, necessary for sustained professional use or complex workflows, sit behind paid subscriptions. This not only reproduces inequality between nations, but also creates new class divisions within societies: those who can pay become developers and power users; those who cannot are confined to a passive consumer role.

Concentration of power and the colonial pattern

Global AI development today is concentrated among an alarmingly small group of actors: technology giants, strategically oriented states, and a technological elite that has successfully framed speed and scale as inevitable. Within this ideology, social costs are often dismissed as unavoidable collateral damage of a technological race.

Critical observers such as Nick Couldry and Ulises Mejias, who coined the concept of data colonialism, see this as a continuation of historical hierarchies. There is a real danger that AI expansion reproduces colonial political economies, concentrating economic value where capital and computing power already reside, while labour exploitation and environmental harm fall disproportionately on those with the least bargaining power. Whatever one makes of this diagnosis politically, its structural logic is hard to deny when tracing the supply chain from data annotation to resource consumption.

Alternative visions: The ‘other AI revolution’

Despite these grim predictions, encouraging alternatives pointing towards a more sustainable model of progress do exist. Masakhane, a grassroots movement “for Africans, by Africans”, directly addresses the under-representation of African languages in mainstream AI. The AI4D African Language Dataset Challenge pursues similar goals, developing high-quality datasets for local languages to address a key bottleneck: the lack of accessible local data.

In India, the government-backed Bhashini initiative shows how a national translation infrastructure can make digital content available across all national languages, treating multilingualism not as a problem, but as a public foundation. Researchers describe this as ‘the other AI revolution’: here, language models are developed and repurposed under resource constraints for communities that were never the primary target of Silicon Valley. Requiring a high degree of creativity in methods and governance, this approach demonstrates that technical progress need not follow a centralised logic.

Governance as an instrument of sovereignty

The long-term societal benefits of AI depend less on model performance than on the strength of institutions and the conditions under which these systems are deployed. For development cooperation, this requires a shift from enthusiastic adoption to strategic governance — one that insists AI applications in the public sector are transparent and verifiable, and that affected individuals have access to effective legal remedies. Furthermore, it is crucial that systems are measured against specific local conditions rather than imported benchmarks, in line with the rights-based framework set out in UNESCO’s global standards.

Sovereignty is not an abstract ideal. It manifests in concrete terms through specific infrastructure, a strategic procurement policy, the development of local expertise and strong coalitions. Secure access to computing power, the development of indigenous language resources and effective occupational safety are not peripheral to ‘AI readiness’; they determine whether AI becomes a tool for inclusive public value or a new chain of dependency.

The crucial question for the coming years is therefore not how quickly AI systems can be deployed on a large scale, but under what conditions societies can actively shape these technologies. Whether AI ushers in a new chapter of global dependency or offers a chance to renegotiate the terms of innovation depends less on the algorithms themselves than on governance, bargaining power, and the willingness to treat AI not as an import, but as critical infrastructure requiring regulation in the public interest.
