Saturday, 21 February 2026

 

The Architect, Not the Builder: Preserving Scholarly Judgment in the Age of AI

By Sebaggala Richard (PhD)

Last week I spoke to a group of researchers and PhD students about artificial intelligence in scholarly writing and literature review. The mood in the room was not defensive; most participants accepted that AI tools are already reshaping academic work, and the discussion was marked by curiosity and cautious optimism. Beneath that enthusiasm, however, lay a quieter concern that went beyond plagiarism or hallucinations. What unsettled many was a more fundamental question: in embracing AI, might we gradually outsource the habits of thinking that define scholarship?

 

This concern deserves serious attention because the central risk is not misconduct but the slow erosion of intellectual ownership. For doctoral students and early-career researchers, research is not simply the production of text; it is the development of judgment. It requires working through ambiguity, weighing competing explanations, and refining arguments until they can withstand scrutiny. Large language models make many parts of this process faster by summarizing articles, suggesting theoretical connections, and interpreting statistical output with impressive fluency. The results often look polished, yet polish should not be confused with understanding.

 

During the training, I demonstrated how AI can assist with drafting search strings, organizing literature into themes, suggesting model specifications, and clarifying the presentation of regression results. The tools proved useful, but throughout the session I emphasized that acceleration does not alter the underlying logic of research. A literature review still begins with a clearly defined question and proceeds through a transparent search strategy, systematic screening, careful comparison of findings, and verification of sources. While AI can help structure these steps, it cannot determine what counts as relevant evidence or where the conceptual gap lies. Those judgments remain the responsibility of the researcher.

 

The same boundary becomes even more important in empirical work. In our example using survey data, AI was permitted to suggest possible dependent and independent variables, outline potential models, and draft statistical syntax. It could recommend robustness checks and help structure the results section. It did not, however, choose the identification strategy, justify causal claims, test assumptions, or determine the substantive meaning of the findings in context. Model choice requires theoretical grounding, causal inference demands methodological reasoning, and interpretation depends on domain knowledge. Delegating these decisions would weaken the integrity of the research.
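The boundary described above can be made concrete with a small sketch. The snippet below is purely illustrative: the variable names (income, education, urban) and the simulated data are assumptions for the example, not the workshop's actual survey dataset. It shows the kind of mechanical syntax an AI assistant might draft, while the comments mark the judgments that remain the researcher's.

```python
# Illustrative only: hypothetical survey variables, simulated data.
# AI can draft syntax like this; choosing the specification, justifying
# causal claims, and interpreting results remain human responsibilities.

import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated stand-ins for survey variables (assumption: not real data).
education = rng.normal(10, 3, n)          # years of schooling
urban = rng.integers(0, 2, n).astype(float)  # urban residence dummy
income = 2.0 + 0.5 * education + 1.5 * urban + rng.normal(0, 1, n)

def ols(y, X):
    """Ordinary least squares with an intercept; returns coefficients."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# Baseline specification: the researcher's choice, grounded in theory,
# not something the tool can decide.
base = ols(income, np.column_stack([education]))

# A robustness check the tool might suggest: add a control and see
# whether the education coefficient stays stable.
robust = ols(income, np.column_stack([education, urban]))

print(base[1], robust[1])  # education coefficient under both specifications
```

Whether the coefficient's stability actually supports a causal reading is exactly the kind of question the essay argues cannot be delegated: the syntax is mechanical, the identification argument is not.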

 

Responsible use therefore begins with clarity about where assistance ends and authorship begins. Before turning to AI, researchers would do well to ask not whether its use is permitted but whether it enhances their reasoning or replaces it. There is a meaningful difference between asking AI to critique a draft and asking it to write the draft itself, just as there is a difference between using it to uncover blind spots and using it to construct an argument from scratch. Although these approaches may appear similar from the outside, they cultivate very different intellectual habits.

 

The discussion also revealed a broader cultural dimension, particularly relevant in many African academic settings where struggle is often equated with learning and difficulty is treated as evidence of seriousness. When processes become faster or more efficient, suspicion sometimes follows, as if reduced effort necessarily implies reduced rigor. AI unsettles this assumption. The ability to map literature more efficiently or clarify statistical syntax quickly does not automatically diminish depth or weaken econometric understanding. Hardship is not a prerequisite for rigor.

 

Struggle has value when it produces insight, but it adds little when it is purely mechanical. Manually formatting references does not deepen theoretical reasoning, nor does repeating routine coding steps automatically strengthen econometric judgment. Spending hours constructing search strings does not guarantee conceptual clarity. Some forms of difficulty are intellectually formative, while others persist simply because they have long been part of academic practice. The aim is not to preserve difficulty for its own sake but to preserve active and disciplined thinking.

 

In practice, thoughtful use of AI can strengthen learning. During the workshop, once some mechanical aspects of literature searching were streamlined, participants were able to devote more attention to substantive questions, such as why findings differed across contexts, where theoretical tensions remained unresolved, and how to sharpen the articulation of their research gaps. Automation, in this sense, freed cognitive space for higher-level analysis. A similar pattern emerged in empirical writing, where AI’s suggestions about alternative specifications or potential weaknesses created room to focus more carefully on identification, assumptions, and interpretation, leaving the intellectual core of the exercise intact.

 

A constructive approach is therefore to think independently first by framing the research problem, interpreting results on one’s own, and sketching the structure of the argument without assistance. AI can then be used to expand and test that thinking by identifying weaknesses, proposing alternative explanations, or improving clarity. The final step requires taking full responsibility for the work by verifying every citation, checking every claim, and ensuring that the argument reflects genuine understanding. A simple test helps clarify ownership: if AI were unavailable, could you defend your research question, theoretical framework, model specification, identification strategy, and interpretation of findings? If the answer is yes, automation has supported the work without undermining it; if not, further reflection is required before it can be considered truly your own.

 

A doctoral degree is not a document production exercise but a process of intellectual formation. AI can make writing more efficient, yet it cannot substitute for judgment. History provides perspective: calculators, statistical software, and digital databases were all met with resistance when first introduced, each innovation reducing effort in certain tasks and prompting concerns about declining standards. Research did not deteriorate; it evolved, shaped less by the technology itself than by the norms governing its use.

 

AI does not eliminate the need for careful thinking; it reduces some of the mechanical burdens that surround it. Whether scholarship becomes more superficial or more sophisticated in this environment will depend less on the capability of AI and more on the discipline of those who use it. Before generating text, it is worth pausing to ask whether the tool is being used to deepen reasoning or to bypass it. Responsible use is not about preserving hardship but about preserving judgment, and judgment remains, as it always has been, a human responsibility.

Saturday, 7 February 2026

 

The Next Resource Curse Will Not Come from the Ground, but from AI

By Sebaggala Richard (PhD)

Anyone who has studied economics or political economy has encountered the idea of the resource curse. It is one of those concepts that, once learned, becomes difficult to ignore. The basic insight is not that natural resources are harmful, but that their effects depend on timing and institutions. When countries discover oil after building strong systems of governance, education, and accountability, the resource can support development. When oil arrives before those foundations are in place, it often distorts incentives, weakens institutions, and entrenches inequality.

Africa’s experience with oil illustrates this lesson clearly. In many countries, oil was discovered before institutions capable of managing it had matured. Rather than financing broad-based development, oil revenues reshaped political and economic behaviour. Governments became less dependent on taxation, weakening the relationship between citizens and the state. Political competition shifted toward control of resource rents, while long-term investments in human capital, skills formation, and institutional learning were crowded out by short-term extraction. The problem was never oil itself, but the institutional environment into which it arrived.

This pattern has repeated itself across regions. In Nigeria, oil wealth reduced pressure to build a diversified tax base and contributed to persistent governance challenges. In Angola, decades of oil exports coexisted with limited human capital development and fragile public institutions. Beyond Africa, Venezuela shows how even a country with relatively strong early indicators can succumb to the same dynamics when resource dependence undermines institutional discipline. Across these cases, corruption and leadership failures mattered, but they were symptoms rather than the root cause. At its core, the oil curse was a sequencing problem: a powerful resource arrived before societies had built the institutions needed to govern it.

What is less often recognised is that this logic applies far beyond natural resources. The same political-economy dynamics emerge whenever a powerful, economy-shaping input arrives before societies are ready to manage it. Today, artificial intelligence fits that description with unsettling precision.

AI is a general-purpose force, much like oil or electricity. It reshapes production, labour markets, and governance, not gradually but at speed. Yet AI does not create skills, judgment, or institutions on its own. It amplifies what already exists. Where educational systems are strong, where professional formation is deliberate, and where organisations are capable of learning, AI raises productivity and improves decision-making. Where those foundations are weak or uneven, the same technology magnifies fragility.

This makes the question of institutional timing unavoidable. In many developing countries, AI is spreading into economies where education systems remain oriented toward content delivery rather than competence formation, where labour markets offer limited structured learning pathways, and where public institutions struggle with capacity and coordination. Under such conditions, AI is unlikely to broaden opportunity. Instead, it risks reinforcing advantage among those who already possess skills, credentials, and institutional access.

The speed of this process adds to the risk. The oil curse unfolded slowly, often over decades. AI-driven divides can harden much faster. Once firms, universities, and public agencies reorganise around AI-intensive systems, late institutional adjustment becomes costly and politically difficult. Education systems, in particular, risk becoming sites where inequality is quietly reproduced rather than corrected.

This concern becomes clearer when we observe how AI is already reshaping outcomes at the individual level in advanced economies. A recent debate in Canada highlights a growing divide between early-career and experienced workers. Professionals with established expertise use AI as a productivity multiplier: it accelerates analysis, improves output quality, and extends their reach. For younger workers, however, AI is eliminating many of the entry-level tasks that once served as informal apprenticeships, through which they built judgment, intuition, and professional confidence.

The underlying mechanism mirrors the macro story. AI amplifies skill; it does not generate it. Experienced workers know how to frame problems, evaluate outputs, and integrate partial results into coherent decisions. Early-career workers acquire these capabilities through practice, often by doing imperfect, routine, and time-consuming tasks. As those tasks disappear, the pathway from novice to expert narrows. What appears to be a labour-market disruption is, at its core, a learning and institutional problem.

What is happening within firms and careers therefore reflects the same logic that once operated at the level of entire economies. Just as oil rewarded countries that already had strong institutions, AI rewards individuals who already possess deep knowledge and judgment. And just as oil undermined development where governance capacity was weak, AI threatens to erode career ladders and national development trajectories where foundational skills and institutions remain underdeveloped.

Seen in this light, the Canadian experience is not an anomaly but an early signal. That debate, crystallised in a widely discussed essay by Tony Frost and Christian Dippel in The Globe and Mail, shows how artificial intelligence is widening gaps between early-career and experienced workers by displacing the very tasks through which judgment and expertise are traditionally developed. Although the discussion is grounded in a high-income country with relatively strong institutions, it offers a useful preview of dynamics that are likely to be more pronounced where institutional foundations are weaker. At the national level, African countries face similar risks. Without sustained investment in education, AI is likely to concentrate opportunity among a narrow elite. Without capable public institutions, algorithmic systems may be imported and relied upon without meaningful oversight. And without clear data governance, countries risk exporting raw data while importing finished intelligence, reproducing extractive relationships in digital form.

Higher education sits at the centre of this challenge. Universities are the primary institutions through which societies translate new technologies into widely shared capability. When they adapt slowly or defensively, technological change tends to benefit those who already have advantage.

In Uganda, this tension is increasingly visible. The National Council for Higher Education has pushed universities toward competence-based education, recognising that traditional content-heavy models are poorly aligned with labour-market realities. Curriculum reviews are underway across institutions, and there is growing agreement that graduates must demonstrate applied skills, judgment, and problem-solving ability rather than mastery of content alone.

Yet within these reforms, the role of artificial intelligence remains largely unresolved. Much of the discussion treats AI primarily as a threat to academic integrity or as a tool to be controlled. Far less attention has been given to how AI reshapes what competence itself means, or how it can be integrated into teaching, assessment, and supervision to strengthen reasoning rather than replace it. Even less effort has gone into preparing academic staff to work confidently and critically with AI, or into helping students learn how to use AI as a cognitive aid rather than a shortcut.

This gap matters. Competence-based education without AI risks becoming backward-looking, while AI adoption without competence-based thinking risks becoming extractive. If universities revise learning outcomes and assessment formats but ignore how AI changes the production of knowledge and skill, they may unintentionally widen inequality. Students with prior exposure, stronger educational backgrounds, or informal access to AI tools will benefit disproportionately, while others fall further behind.

From a development perspective, this is precisely how an AI curse would emerge. Not through dramatic technological failure, but through institutional lag. Universities would continue producing graduates formally certified as competent, yet unevenly prepared to think, judge, and integrate knowledge in an AI-rich environment. Academic staff would be pushed into a policing role rather than a pedagogical one. Over time, the gap between those who can work meaningfully with AI and those who merely coexist with it would widen.

Avoiding this outcome requires treating AI as a central feature of institutional reform rather than an afterthought. Preparing graduates for an AI-intensive economy means rethinking how competence is taught and assessed, how academic staff are trained, and how learning tasks are designed. It means embedding AI literacy, ethical reasoning, and applied judgment into curricula, rather than addressing AI only through restrictions and warnings.

Africa’s greatest risk, therefore, is not being left behind by AI. It is being integrated into the global AI economy in ways that lock in inequality and dependence, much as oil once did. The oil curse was recognised only after it had already reshaped political economies. With AI, there is still a narrow window to act differently. If that window closes, AI-driven inequality is likely to be faster, deeper, and harder to reverse than anything oil ever produced.

The lesson from development economics is sobering but clear. Resources do not curse societies; weak institutions do. AI will not curse Africa on its own. But without deliberate institutional preparation, particularly within education systems, it risks becoming the most sophisticated version of an old and costly mistake.

Saturday, 31 January 2026

 

Talent and Luck Matter, but Divine Favor Completes Kahneman’s Equation

 

By Richard Sebaggala (PhD)

 

I recently read an article in The Economic Times reflecting on a deceptively simple idea from Daniel Kahneman. The quote was familiar and quietly unsettling in its honesty: success is a combination of talent and luck, while great success requires only a little more talent but a lot more luck.

 

As I read the piece, I agreed with much of it. The argument was clear, persuasive, and consistent with Kahneman’s long-standing warning against overstating skill and understating chance. Still, something stayed with me. The insights were sound, yet they felt incomplete when viewed from our context. There was a missing link, one that could widen the argument and make it speak more directly to fragile economies like Uganda and much of Africa.

 

For readers who may not be familiar with him, Kahneman is widely regarded as the father of behavioral economics. His work challenged the assumption that humans are consistently rational decision-makers. By showing how judgment is shaped by bias, heuristics, and randomness, he forced economics to take psychology seriously. His Nobel Prize recognized a simple but uncomfortable truth: markets and life outcomes are far messier than tidy models suggest.

 

That background matters because Kahneman’s success equation is not casual pessimism. It is a disciplined conclusion drawn from decades of studying how people misunderstand success. We prefer stories where intelligence, effort, and discipline explain outcomes neatly. Behavioral economics shows otherwise. Timing, networks, institutional gatekeepers, accidents, and macroeconomic shifts often matter just as much, and sometimes more. In today’s volatile economy, shaped by AI disruption, fragile labor markets, and political uncertainty, this insight feels especially relevant.

 

Believing that success is fully earned creates two problems. It breeds quiet arrogance among those who succeed, and it leaves those who struggle thinking their failure is entirely personal. Kahneman’s point unsettles both assumptions.

 

In fragile economies, this reality is not abstract. By fragile economies, I mean settings where institutions are thin, risks are personal, and the link between effort and outcome is unreliable. Talent matters, but it operates in environments where opportunities are uneven and pathways rarely linear. Two people with similar ability can end up in very different places because one met the right person at the right time, accessed capital when it was available, avoided a health or family shock, or simply arrived before a door closed. Hard work is necessary, but it is often not enough.

 

This is where context reshapes interpretation.

 

What economists describe as “luck” is rarely experienced here as blind randomness. In deeply religious societies, luck is commonly understood as God’s grace and favor. People speak of doors opening, protection appearing, and timing aligning in ways they did not plan or control. These experiences are not dismissed as coincidence. They are understood as outcomes shaped beyond individual effort.

 

Kahneman does not frame luck in theological terms, and that is consistent with his scientific approach. But acknowledging randomness does not rule out faith-based interpretations; it simply operates at a different level of explanation. The external factors that behavioral economics identifies, such as health, timing, networks, and shocks, are what faith communities often describe as divine ordering. Both perspectives point to the same limitation: individuals do not control the full set of conditions that shape outcomes.

 

This distinction matters because belief systems shape behavior. In settings where people distrust God but fear witchcraft or small gods, luck becomes something to manipulate or fear. In settings where people trust God, luck is reframed as grace, something not coerced, but sought through humility, integrity, and right living.

That is why the biblical instruction to seek first God, and the rest will be added, resonates so strongly. It is not an argument against effort or skill development. It is an argument about order and limits. Capability alone does not guarantee outcomes. Effort alone does not control timing. Talent alone does not protect anyone from shocks.

 

Talent without God often drifts into pride.
Effort without grace often turns into exhaustion.
Skill without humility quietly becomes entitlement.

 

Seen this way, Kahneman’s equation is not wrong. It is incomplete. Completing it for fragile economies requires recognizing that success is not a mechanical outcome of inputs alone. It reflects a relationship between human agency and forces beyond it. Capability opens possibilities, but grace shapes which possibilities become real.

 

In a volatile economy, this perspective is grounding. It encourages serious investment in skills while remaining honest about limits. It protects those who are struggling from concluding that they are failures. It also reminds the successful that their position is not proof of superiority, but evidence of fortunate timing.

 

Perhaps the most realistic lesson is this: we should work as if effort matters deeply, and trust as if outcomes are not fully ours to command. Kahneman helps us see the limits of meritocracy. Faith helps us live wisely within those limits.

 

Tuesday, 13 January 2026

 

When Campaign Rallies Become Economic Lessons: How Deep Cognitive Bias Runs

By Richard Sebaggala (PhD)

 

 

For many years, I have followed Uganda’s presidential and parliamentary campaigns closely. Across election cycles, one pattern has remained remarkably consistent. Beyond the speeches and manifestos, campaign periods are marked by familiar scenes on the roads and in trading centres: convoys moving at high speed, boda boda riders escorting candidates recklessly, traffic rules ignored, and ordinary judgment seemingly suspended. Each election reinforces the same question—why do these behaviours repeat themselves so predictably?

What is often described as political excitement or youthful enthusiasm is, on closer inspection, something deeper. These rallies offer a revealing window into how authority, risk, and scarcity shape decision-making in our society. When viewed through an economic lens, they become more than political events. They become lessons in how cognitive bias operates at scale.

Young people, especially boda boda riders, trail candidates at high speed, ride against traffic, and take risks that would normally be avoided. In some instances, a simple remark or instruction from a candidate—whether sensible or not—is acted upon immediately, without hesitation or reflection. The behaviour is not random. It follows a pattern shaped by authority bias, where the presence of a leader overrides individual judgment and personal safety.

The candidate moves, and the convoy moves. Rules that apply on ordinary days suddenly feel optional. Safety becomes secondary. Riders follow not because it is rational or necessary, but because an authority figure is present and in motion. Judgment is effectively outsourced upward, while the costs of risk are borne individually.

This pattern extends far beyond campaign convoys. It reflects a broader tendency in which personalities override systems. Where authority easily replaces rules, institutions struggle to take root. Compliance becomes conditional on who is watching rather than on shared norms. Over time, this erodes accountability and weakens the very institutions needed for economic coordination and growth.

Alongside authority bias sits optimism bias—the belief that negative outcomes are more likely to happen to others than to oneself. Every rider who speeds through a crowded junction in a convoy assumes, often unconsciously, that nothing will go wrong for them. Accidents are abstract possibilities, not personal risks. The same mindset appears elsewhere in the economy, in low insurance uptake, weak safety practices, and limited preparation for shocks. When optimism bias dominates, risk is normalised and vulnerability accumulates quietly.

It is also important to understand why so many young people are drawn into these behaviours. Most are not acting out of ignorance or recklessness. They are responding to incentives shaped by scarcity. When income is unstable and opportunities are limited, the future feels uncertain and distant. Under such conditions, short-term benefits—small payments, fuel, food, recognition, or proximity to power—carry immediate value. Behaviour that appears irrational from a distance often makes sense in the moment.

This is where the development challenge becomes clearer. Scarcity does not only limit material choices; it narrows time horizons. When large segments of the population are locked into short-term thinking, investment in skills, safety, and long-term productivity becomes difficult. Growth requires patience, yet patience is costly when survival is uncertain.

More troubling still is how easily questionable statements or instructions from candidates are accepted and amplified during rallies. Remarks that are clearly impractical or economically unrealistic are often received with applause rather than scrutiny. Here, authority bias blends with confirmation bias. Ideas are accepted not because they are workable, but because they come from a trusted figure. Evidence and feasibility give way to allegiance.

In such an environment, public debate weakens. Elections risk becoming contests of belief rather than judgment. Promises replace plans, and enthusiasm substitutes for evaluation. From a development perspective, this matters deeply. Economies grow when citizens can question leaders, demand credible proposals, and distinguish aspiration from implementation.

The issue, then, is not simply about politics or which candidate wins. It is about how people relate to authority, risk, and incentives. Countries do not develop merely by holding elections. They develop when rules apply consistently, leadership is constrained by institutions, and individuals retain the capacity to think independently, even in the presence of power.

Campaign periods bring these dynamics into sharp focus. They act as large-scale behavioural tests, revealing how people respond to opportunity, uncertainty, and authority when emotions are high and incentives are visible. If we ignore what these moments reveal, we will continue to misdiagnose Uganda’s challenges as purely political or institutional. Some of the most binding constraints lie deeper, in the cognitive habits shaped by scarcity, obedience, and short-term survival.

The rallies will end. The noise will fade. But the patterns they expose do not disappear with the campaign season. They persist in how businesses are run, how policies are evaluated, and how risks are taken in everyday economic life.

That is why campaign rallies deserve attention not just from political analysts, but from anyone concerned with Uganda’s long-term development. They are not only about votes. They are economic lessons, played out in public, revealing how deep cognitive bias runs—and why addressing it is central to any serious conversation about growth.

As we head into the polls on 15th January 2026, whoever emerges victorious would do well to reflect on what these campaigns have revealed about our society. The cognitive biases on display—authority bias, optimism bias, short-termism driven by scarcity—are not marginal issues. They are central to how policies are received, how institutions function, and how citizens respond to reform. Ignoring them comes at a cost. Well-designed reforms and public interventions, when introduced into a population shaped by these biases, will struggle to gain traction or deliver results. If Uganda is to change its development narrative in a meaningful way, addressing cognitive bias must be treated as seriously as infrastructure, budgets, and laws. Without that attention, progress will remain fragile, and growth will continue to fall short of its promise.