The Next Resource Curse Will Not Come from the Ground, but from AI
By Sebaggala Richard (PhD)
Anyone who has studied economics or political economy has encountered the idea of the resource curse. It is one of those concepts that, once learned, become difficult to ignore. The basic insight is not that natural resources are harmful, but that their effects depend on timing and institutions. When countries discover oil after building strong systems of governance, education, and accountability, the resource can support development. When oil arrives before those foundations are in place, it often distorts incentives, weakens institutions, and entrenches inequality.
Africa’s experience with oil illustrates this lesson clearly. In many countries, oil was discovered before institutions capable of managing it had matured. Rather than financing broad-based development, oil revenues reshaped political and economic behaviour. Governments became less dependent on taxation, weakening the relationship between citizens and the state. Political competition shifted toward control of resource rents, while long-term investments in human capital, skills formation, and institutional learning were crowded out by short-term extraction. The problem was never oil itself, but the institutional environment into which it arrived.
This pattern has repeated itself across regions. In Nigeria, oil wealth reduced pressure to build a diversified tax base and contributed to persistent governance challenges. In Angola, decades of oil exports coexisted with limited human capital development and fragile public institutions. Beyond Africa, Venezuela shows how even a country with relatively strong early indicators can succumb to the same dynamics when resource dependence undermines institutional discipline. Across these cases, corruption and leadership failures mattered, but they were symptoms rather than the root cause. At its core, the oil curse was a sequencing problem: a powerful resource arrived before societies had built the institutions needed to govern it.
What is less often recognised is that this logic applies far beyond natural resources. The same political-economy dynamics emerge whenever a powerful, economy-shaping input arrives before societies are ready to manage it. Today, artificial intelligence fits that description with unsettling precision.
AI is a general-purpose technology, much like oil or electricity. It reshapes production, labour markets, and governance, not gradually but at speed. Yet AI does not create skills, judgment, or institutions on its own. It amplifies what already exists. Where educational systems are strong, where professional formation is deliberate, and where organisations are capable of learning, AI raises productivity and improves decision-making. Where those foundations are weak or uneven, the same technology magnifies fragility.
This makes the question of institutional timing unavoidable. In many developing countries, AI is spreading into economies where education systems remain oriented toward content delivery rather than competence formation, where labour markets offer limited structured learning pathways, and where public institutions struggle with capacity and coordination. Under such conditions, AI is unlikely to broaden opportunity. Instead, it risks reinforcing advantage among those who already possess skills, credentials, and institutional access.
The speed of this process adds to the risk. The oil curse unfolded slowly, often over decades. AI-driven divides can harden much faster. Once firms, universities, and public agencies reorganise around AI-intensive systems, late institutional adjustment becomes costly and politically difficult. Education systems, in particular, risk becoming sites where inequality is quietly reproduced rather than corrected.
This concern becomes clearer when we observe how AI is already reshaping outcomes at the individual level in advanced economies. A recent debate in Canada highlights a growing divide between early-career and experienced workers. Professionals with established expertise use AI as a productivity multiplier. It accelerates analysis, improves output quality, and extends their reach. For younger workers, however, AI is eliminating many of the entry-level tasks that once served as informal apprenticeships and allowed them to build judgment, intuition, and professional confidence.
The underlying mechanism mirrors the macro story. AI amplifies skill; it does not generate it. Experienced workers know how to frame problems, evaluate outputs, and integrate partial results into coherent decisions. Early-career workers acquire these capabilities through practice, often by doing imperfect, routine, and time-consuming tasks. As those tasks disappear, the pathway from novice to expert narrows. What appears to be a labour-market disruption is, at its core, a learning and institutional problem.
What is happening within firms and careers therefore reflects the same logic that once operated at the level of entire economies. Just as oil rewarded countries that already had strong institutions, AI rewards individuals who already possess deep knowledge and judgment. And just as oil undermined development where governance capacity was weak, AI threatens to erode career ladders and national development trajectories where foundational skills and institutions remain underdeveloped.
Seen in this light, the Canadian experience is not an anomaly but an early signal. The debate there, including a widely discussed essay by Tony Frost and Christian Dippel in The Globe and Mail, shows how artificial intelligence is widening gaps between early-career and experienced workers by displacing the very tasks through which judgment and expertise are traditionally developed. Although this discussion is grounded in a high-income country with relatively strong institutions, it previews dynamics that are likely to be more pronounced where institutional foundations are weaker.

At the national level, African countries face similar risks. Without sustained investment in education, AI is likely to concentrate opportunity among a narrow elite. Without capable public institutions, algorithmic systems may be imported and relied upon without meaningful oversight. And without clear data governance, countries risk exporting raw data while importing finished intelligence, reproducing extractive relationships in digital form.
Higher education sits at the centre of this challenge. Universities are the primary institutions through which societies translate new technologies into widely shared capability. When they adapt slowly or defensively, technological change tends to benefit those who already have advantage.
In Uganda, this tension is increasingly visible. The National Council for Higher Education has pushed universities toward competence-based education, recognising that traditional content-heavy models are poorly aligned with labour-market realities. Curriculum reviews are underway across institutions, and there is growing agreement that graduates must demonstrate applied skills, judgment, and problem-solving ability rather than mastery of content alone.
Yet within these reforms, the role of artificial intelligence remains largely unresolved. Much of the discussion treats AI primarily as a threat to academic integrity or as a tool to be controlled. Far less attention has been given to how AI reshapes what competence itself means, or how it can be integrated into teaching, assessment, and supervision to strengthen reasoning rather than replace it. Even less effort has gone into preparing academic staff to work confidently and critically with AI, or into helping students learn how to use AI as a cognitive aid rather than a shortcut.
This gap matters. Competence-based education without AI risks becoming backward-looking, while AI adoption without competence-based thinking risks becoming extractive. If universities revise learning outcomes and assessment formats but ignore how AI changes the production of knowledge and skill, they may unintentionally widen inequality. Students with prior exposure, stronger educational backgrounds, or informal access to AI tools will benefit disproportionately, while others fall further behind.
From a development perspective, this is precisely how an AI curse would emerge. Not through dramatic technological failure, but through institutional lag. Universities would continue producing graduates formally certified as competent, yet unevenly prepared to think, judge, and integrate knowledge in an AI-rich environment. Academic staff would be pushed into a policing role rather than a pedagogical one. Over time, the gap between those who can work meaningfully with AI and those who merely coexist with it would widen.
Avoiding this outcome requires treating AI as a central feature of institutional reform rather than an afterthought. Preparing graduates for an AI-intensive economy means rethinking how competence is taught and assessed, how academic staff are trained, and how learning tasks are designed. It means embedding AI literacy, ethical reasoning, and applied judgment into curricula, rather than addressing AI only through restrictions and warnings.
Africa’s greatest risk, therefore, is not being left behind by AI. It is being integrated into the global AI economy in ways that lock in inequality and dependence, much as oil once did. The oil curse was recognised only after it had already reshaped political economies. With AI, there is still a narrow window to act differently. If that window closes, AI-driven inequality is likely to be faster, deeper, and harder to reverse than anything oil ever produced.
The lesson from development economics is sobering but clear. Resources do not curse societies. Institutions do. AI will not curse Africa on its own. But without deliberate institutional preparation, particularly within education systems, it risks becoming the most sophisticated version of an old and costly mistake.