Friday, 20 March 2026

You Do Not Need Every AI Tool: A Lesson from Econometrics

By Sebaggala Richard (PhD)

 

 

Every few weeks a new artificial intelligence tool is introduced with the promise of transforming research, teaching, writing, coding, and analysis in academia. The pace of innovation is impressive, but it has also created real anxiety within universities. Students feel compelled to experiment with every new platform they encounter. Lecturers worry about keeping pace with rapidly changing technologies. Researchers sometimes feel that failure to adopt the latest tool may leave them behind.

 

The real problem, however, is not the abundance of AI tools. It is that in trying to use all of them, researchers risk fragmenting their attention and weakening the depth of their thinking.

 

In many cases, the response to this environment has been predictable. Instead of building deep competence with a small number of tools, people begin to accumulate platforms. They open accounts on multiple systems, experiment briefly with each one, and then move to the next new tool when it appears. The result is often the opposite of what they intended. Productivity declines rather than improves.

 

Whenever I observe this pattern today, I remember a lesson from my econometrics training many years ago. At the time, we were being introduced to statistical software packages such as Stata, EViews, and SPSS. These programs were widely used in universities and research institutions around the world, and for students beginning to learn applied econometrics, the choice of software seemed overwhelming. Many of us were unsure which package we should invest time in learning.

 

Our lecturer offered a simple but memorable analogy. He told us that one does not need to drive every car in order to become a good driver. What matters is learning one vehicle thoroughly and understanding how it works. He then advised us that if we learned Stata properly, we would not miss much from the other packages, and that the skills we acquired would make it easier to understand any other software we might encounter later.

 

At the time the comment seemed like practical advice about software. With experience, however, it becomes clear that the point was much deeper. The lesson was about mastery and focus. In economics we often think about the efficient allocation of scarce resources. Attention is one of those scarce resources. When attention is spread across too many tools, the quality of learning and productivity declines.

 

The current environment of artificial intelligence tools presents a similar challenge. A growing number of platforms now offer support for academic tasks such as summarizing literature, drafting text, generating code, analyzing documents, and organizing research materials. Systems such as ChatGPT, Gemini, Claude, Perplexity, Elicit, Avidnote, ResearchRabbit, Scite, and NotebookLM have become increasingly visible in academic discussions. Each claims to provide significant advantages for research and knowledge work.

 

Students therefore frequently ask which of these tools they should learn. The question resembles the one we asked about econometrics software years ago. The answer is also similar. Researchers do not need to learn every available platform. What matters is developing a deep understanding of a small number of tools and learning how to use them effectively in an intellectual workflow.

 

When researchers attempt to use every tool available, several difficulties tend to emerge. The first is fragmentation of workflow. Instead of concentrating on the research problem itself, the researcher spends time switching between multiple systems. The second is superficial knowledge. Individuals may become familiar with the basic interface of several platforms without developing the skill required to use any of them effectively. The third is cognitive overload. Mental effort is directed toward managing tools rather than analyzing data, developing arguments, or interpreting results.

 

There is, however, a deeper and less visible cost. When researchers constantly switch between AI systems, they do not only fragment their workflow; they also fragment their thinking. Each system structures responses differently, suggests particular framings, and nudges users toward specific ways of expressing ideas. Over time, this can weaken intellectual coherence. Instead of developing a consistent analytical voice, the researcher begins to adapt to the logic of whichever tool is being used at the moment.

 

Much of this pressure is reinforced by the fierce competition currently taking place among major artificial intelligence developers. Large technology firms are investing heavily in AI systems and competing to build the most capable digital assistants. This has produced intense comparisons between leading platforms such as ChatGPT, Claude, and Gemini. Each system has particular strengths. Some are particularly effective at analyzing long documents, others integrate well with search engines or cloud services, and others perform strongly in coding and structured analysis.

 

For most academic researchers, however, the differences between these systems are less important than the discussion surrounding them might suggest. Modern AI models already possess capabilities that would have been considered remarkable only a few years ago. They can summarize academic papers, assist in structuring literature reviews, explain theoretical frameworks, generate programming scripts, and help refine academic writing. The critical issue is therefore not access to artificial intelligence tools but the ability to use them thoughtfully.

 

From an economic perspective, this behaviour reflects classic problems of bounded rationality and switching costs. Each new tool requires time to learn, cognitive effort to integrate, and attention to evaluate. When these costs are ignored, researchers over-invest in exploration and under-invest in mastery. The result is diminishing returns to additional tools and, in many cases, a decline in overall productivity.

 

In practice, a focused combination of tools can already provide substantial support for academic work. Systems such as ChatGPT are particularly useful as intellectual companions during the research process. They can assist in refining research questions, clarifying conceptual frameworks, designing surveys, interpreting statistical output, and improving the structure of academic writing. When used carefully, such systems function less as automated generators of text and more as conversational partners that help researchers examine their reasoning.

 

Other platforms offer strengths in areas such as document analysis and information synthesis. Systems like Gemini are often helpful when researchers are working with large reports or multiple documents that need to be summarized and compared. Tools such as Claude have become known for their ability to handle very long texts and produce structured explanations. When used selectively, these capabilities can significantly reduce the time required to extract insights from extensive material.

 

The broader principle underlying this discussion is familiar in economics. Productivity does not necessarily increase with the number of technologies employed. It increases when individuals develop comparative advantage in the use of particular tools. A researcher who understands three systems deeply will usually work more efficiently than someone who attempts to use ten different platforms at once. Mastery compounds over time. Once the logic of AI interaction is understood, adapting to new tools becomes relatively straightforward.

 

This observation also has implications for universities. Institutions sometimes respond to technological change by attempting to introduce students to a large number of platforms. A more effective approach would focus on teaching core competencies. Students should learn the principles of AI literacy, critical engagement with algorithmic outputs, responsible and ethical use of AI, and disciplined integration of a few tools into their research workflow. The objective is not simply to familiarize students with technology but to help them think more effectively in an environment where intelligent systems are widely available.

 

Looking back, the lesson from my econometrics lecturer was not primarily about statistical software. It was about maintaining focus in a world that constantly presents new options. That insight remains highly relevant today. Artificial intelligence tools will continue to appear at a rapid pace, and debates about which system is superior will likely persist.

 

In the age of artificial intelligence, the constraint is no longer access to tools. It is the ability to think clearly while using them. The danger is not missing out on AI tools; it is becoming cognitively shallow while relying on them. Researchers who benefit most from these technologies will not be those who pursue every new platform, but those who develop disciplined, focused, and reflective ways of working with a few powerful tools.

 

In the same way that learning to drive one vehicle well provides the foundation for driving many others, mastering a few tools can provide the foundation for productive research in the age of artificial intelligence.

 

Sunday, 1 March 2026

 

Sequencing or Stagnation? Rethinking Africa’s Artificial Intelligence Strategy

By Sebaggala Richard (PhD)

 


 

In a recent Brookings commentary titled “Why Africa Should Sequence, Not Rush Into AI,” Mark-Alexandre Doumba argues that Africa’s greatest risk is not missing the AI revolution but joining it too early. Drawing on the work of Ricardo Hausmann and Dani Rodrik, the article cautions against what it describes as premature automation. The central concern is that without adequate digital infrastructure, data governance frameworks, and productive capabilities, rapid adoption of artificial intelligence could deepen dependency rather than accelerate structural transformation. It is a thoughtful intervention in an important policy debate and one that deserves serious engagement.


At the same time, the analogy underpinning the sequencing argument merits closer examination. The thesis implicitly treats artificial intelligence as comparable to earlier industrial technologies such as factories, heavy manufacturing, or large-scale power infrastructure. In those historical periods, countries needed to accumulate domestic skills, supply chains, and institutional capacity before industrial investment could generate sustained returns. Where this foundation was weak, industrialization often produced enclaves with limited linkages to the broader economy.

 

Artificial intelligence operates in a different space. It does not primarily reorganize physical production; it reshapes how thinking and problem-solving are organized. It influences how research is conducted, how policies are drafted, how code is written, how diagnoses are made, and how information is processed. Its deployment is largely cloud-based and does not depend on ownership of heavy physical capital. More importantly, the use of AI tools itself contributes to skill formation. Individuals often develop competence through interaction, experimentation, and repeated application. Capability therefore grows partly through adoption rather than entirely before it.

 

This reality complicates the historical logic of waiting until foundations are fully consolidated. In earlier industrial waves, late entry sometimes allowed countries to observe pioneers, import mature technologies, and expand cautiously. In the current environment, the capability frontier moves quickly and continuously. Early adopters refine processes, accumulate institutional experience, and embed AI deeply into their systems. As experience compounds, catching up becomes more demanding.

 

The pattern is already visible at the individual level. Professionals who dismissed AI tools a few years ago often find that peers who experimented early have reorganized how they conduct research, prepare lectures, analyze data, and manage projects. The difference is not limited to marginal efficiency gains. It reflects changes in workflow, iteration speed, and analytical depth. When such shifts scale across institutions and economies, divergence becomes structural.

 

The labor market concerns raised in the sequencing argument are understandable. Automation can displace certain categories of routine work, particularly in service sectors. Yet many African economies have not developed large-scale industrial employment bases comparable to those that powered earlier development trajectories elsewhere. Informality remains widespread, and productivity gaps persist. In this context, the more pressing risk may not be premature deindustrialization but the failure to cultivate high-productivity knowledge and service sectors capable of absorbing a growing youth population.

 

Artificial intelligence should therefore be viewed not only as an automation technology but also as a productivity-enhancing instrument. It can strengthen agricultural advisory systems, support diagnostic processes in health care, enhance educational personalization, improve logistics coordination, and assist public administration. In environments where documentation remains paper-based and data fragmented, AI-assisted digitization and analysis can accelerate institutional modernization. In that sense, AI can contribute to building the very foundations that sequencing advocates consider prerequisites.

 

The concern about digital dependency is historically grounded. Africa’s experience with extractive development shows how exporting raw inputs while importing high-value outputs can entrench structural imbalances. A digital parallel could emerge if data is generated locally while algorithms, platforms, and standards are designed and controlled elsewhere.

 

However, dependency does not arise solely from early adoption. It can also result from disengagement. Global AI platforms will continue to expand regardless of cautious national strategies. Data ecosystems will evolve. Technical standards will consolidate. Countries that actively cultivate domestic competence are better positioned to negotiate terms, influence governance frameworks, and adapt systems to local realities. Sovereignty in the digital age depends not only on regulation but also on participation and expertise.

 

The labor dimension is equally nuanced. The relevant comparison is not between African workers and machines in isolation, but between workers who use AI effectively and those who do not. In global service markets, AI literacy is rapidly becoming a baseline expectation. Youth who master these tools strengthen their competitiveness in remote work, digital entrepreneurship, research support, and creative industries. Delayed exposure risks widening skill gaps that become increasingly difficult to close.

 

None of this diminishes the importance of governance, infrastructure, and regulatory design. Data protection regimes, interoperability standards, and digital public infrastructure remain essential pillars of a sustainable AI ecosystem. The question is whether these frameworks must be fully consolidated before meaningful adoption begins, or whether they can evolve alongside practical engagement. Institutional learning is often iterative. Policymakers refine regulatory approaches through exposure to real-world applications and emerging risks.

 

The strategic issue, then, is not whether Africa should move early or late. It is whether it will build the capacity to shape how AI is integrated into its economies and institutions. Artificial intelligence functions as a general-purpose technology that reshapes the production of knowledge and decision-making. Countries that embed it thoughtfully in education systems, research environments, entrepreneurial ecosystems, and public administration may realize productivity gains that conventional development models underestimate.

 

The debate should not be reduced to speed versus sequencing. It should focus on whether Africa approaches AI as a passive consumer or as an active capability builder. Postponement may appear prudent, but in a rapidly evolving technological landscape it carries opportunity costs that accumulate quietly yet persistently.

 

In this context, delay is not merely caution. It is a strategic position whose consequences deserve careful reflection.

Saturday, 21 February 2026

 

The Architect, Not the Builder: Preserving Scholarly Judgment in the Age of AI

By Sebaggala Richard (PhD)

Last week I spoke to a group of researchers and PhD students about artificial intelligence in scholarly writing and literature review. The mood in the room was not defensive; most participants accepted that AI tools are already reshaping academic work, and the discussion was marked by curiosity and cautious optimism. Beneath that enthusiasm, however, lay a quieter concern that went beyond plagiarism or hallucinations. What unsettled many was a more fundamental question: in embracing AI, might we gradually outsource the habits of thinking that define scholarship?

 

This concern deserves serious attention because the central risk is not misconduct but the slow erosion of intellectual ownership. For doctoral students and early-career researchers, research is not simply the production of text; it is the development of judgment. It requires working through ambiguity, weighing competing explanations, and refining arguments until they can withstand scrutiny. Large language models make many parts of this process faster by summarizing articles, suggesting theoretical connections, and interpreting statistical output with impressive fluency. The results often look polished, yet polish should not be confused with understanding.

 

During the training, I demonstrated how AI can assist with drafting search strings, organizing literature into themes, suggesting model specifications, and clarifying the presentation of regression results. The tools proved useful, but throughout the session I emphasized that acceleration does not alter the underlying logic of research. A literature review still begins with a clearly defined question and proceeds through a transparent search strategy, systematic screening, careful comparison of findings, and verification of sources. While AI can help structure these steps, it cannot determine what counts as relevant evidence or where the conceptual gap lies. Those judgments remain the responsibility of the researcher.

 

The same boundary becomes even more important in empirical work. In our example using survey data, AI was permitted to suggest possible dependent and independent variables, outline potential models, and draft statistical syntax. It could recommend robustness checks and help structure the results section. It did not, however, choose the identification strategy, justify causal claims, test assumptions, or determine the substantive meaning of the findings in context. Model choice requires theoretical grounding, causal inference demands methodological reasoning, and interpretation depends on domain knowledge. Delegating these decisions would weaken the integrity of the research.

 

Responsible use therefore begins with clarity about where assistance ends and authorship begins. Before turning to AI, researchers would do well to ask not whether its use is permitted but whether it enhances their reasoning or replaces it. There is a meaningful difference between asking AI to critique a draft and asking it to write the draft itself, just as there is a difference between using it to uncover blind spots and using it to construct an argument from scratch. Although these approaches may appear similar from the outside, they cultivate very different intellectual habits.

 

The discussion also revealed a broader cultural dimension, particularly relevant in many African academic settings where struggle is often equated with learning and difficulty is treated as evidence of seriousness. When processes become faster or more efficient, suspicion sometimes follows, as if reduced effort necessarily implies reduced rigor. AI unsettles this assumption. The ability to map literature more efficiently or clarify statistical syntax quickly does not automatically diminish depth or weaken econometric understanding. Hardship is not a prerequisite for rigor.

 

Struggle has value when it produces insight, but it adds little when it is purely mechanical. Manually formatting references does not deepen theoretical reasoning, nor does repeating routine coding steps automatically strengthen econometric judgment. Spending hours constructing search strings does not guarantee conceptual clarity. Some forms of difficulty are intellectually formative, while others persist simply because they have long been part of academic practice. The aim is not to preserve difficulty for its own sake but to preserve active and disciplined thinking.

 

In practice, thoughtful use of AI can strengthen learning. During the workshop, once some mechanical aspects of literature searching were streamlined, participants were able to devote more attention to substantive questions, such as why findings differed across contexts, where theoretical tensions remained unresolved, and how to sharpen the articulation of their research gaps. Automation, in this sense, freed cognitive space for higher-level analysis. A similar pattern emerged in empirical writing, where AI’s suggestions about alternative specifications or potential weaknesses created room to focus more carefully on identification, assumptions, and interpretation, leaving the intellectual core of the exercise intact.

 

A constructive approach is therefore to think independently first by framing the research problem, interpreting results on one’s own, and sketching the structure of the argument without assistance. AI can then be used to expand and test that thinking by identifying weaknesses, proposing alternative explanations, or improving clarity. The final step requires taking full responsibility for the work by verifying every citation, checking every claim, and ensuring that the argument reflects genuine understanding. A simple test helps clarify ownership: if AI were unavailable, could you defend your research question, theoretical framework, model specification, identification strategy, and interpretation of findings? If the answer is yes, automation has supported the work without undermining it; if not, further reflection is required before it can be considered truly your own.

 

A doctoral degree is not a document production exercise but a process of intellectual formation. AI can make writing more efficient, yet it cannot substitute for judgment. History provides perspective: calculators, statistical software, and digital databases were all met with resistance when first introduced, each innovation reducing effort in certain tasks and prompting concerns about declining standards. Research did not deteriorate; it evolved, shaped less by the technology itself than by the norms governing its use.

 

AI does not eliminate the need for careful thinking; it reduces some of the mechanical burdens that surround it. Whether scholarship becomes more superficial or more sophisticated in this environment will depend less on the capability of AI and more on the discipline of those who use it. Before generating text, it is worth pausing to ask whether the tool is being used to deepen reasoning or to bypass it. Responsible use is not about preserving hardship but about preserving judgment, and judgment remains, as it always has been, a human responsibility.

Saturday, 7 February 2026

 

The Next Resource Curse Will Not Come from the Ground, but from AI

By Sebaggala Richard (PhD)

Anyone who has studied economics or political economy has encountered the idea of the resource curse. It is one of those concepts that, once learned, becomes difficult to ignore. The basic insight is not that natural resources are harmful, but that their effects depend on timing and institutions. When countries discover oil after building strong systems of governance, education, and accountability, the resource can support development. When oil arrives before those foundations are in place, it often distorts incentives, weakens institutions, and entrenches inequality.

Africa’s experience with oil illustrates this lesson clearly. In many countries, oil was discovered before institutions capable of managing it had matured. Rather than financing broad-based development, oil revenues reshaped political and economic behaviour. Governments became less dependent on taxation, weakening the relationship between citizens and the state. Political competition shifted toward control of resource rents, while long-term investments in human capital, skills formation, and institutional learning were crowded out by short-term extraction. The problem was never oil itself, but the institutional environment into which it arrived.

This pattern has repeated itself across regions. In Nigeria, oil wealth reduced pressure to build a diversified tax base and contributed to persistent governance challenges. In Angola, decades of oil exports coexisted with limited human capital development and fragile public institutions. Beyond Africa, Venezuela shows how even a country with relatively strong early indicators can succumb to the same dynamics when resource dependence undermines institutional discipline. Across these cases, corruption and leadership failures mattered, but they were symptoms rather than the root cause. At its core, the oil curse was a sequencing problem: a powerful resource arrived before societies had built the institutions needed to govern it.

What is less often recognised is that this logic applies far beyond natural resources. The same political-economy dynamics emerge whenever a powerful, economy-shaping input arrives before societies are ready to manage it. Today, artificial intelligence fits that description with unsettling precision.

AI is a general-purpose force, much like oil or electricity. It reshapes production, labour markets, and governance, not gradually but at speed. Yet AI does not create skills, judgment, or institutions on its own. It amplifies what already exists. Where educational systems are strong, where professional formation is deliberate, and where organisations are capable of learning, AI raises productivity and improves decision-making. Where those foundations are weak or uneven, the same technology magnifies fragility.

This makes the question of institutional timing unavoidable. In many developing countries, AI is spreading into economies where education systems remain oriented toward content delivery rather than competence formation, where labour markets offer limited structured learning pathways, and where public institutions struggle with capacity and coordination. Under such conditions, AI is unlikely to broaden opportunity. Instead, it risks reinforcing advantage among those who already possess skills, credentials, and institutional access.

The speed of this process adds to the risk. The oil curse unfolded slowly, often over decades. AI-driven divides can harden much faster. Once firms, universities, and public agencies reorganise around AI-intensive systems, late institutional adjustment becomes costly and politically difficult. Education systems, in particular, risk becoming sites where inequality is quietly reproduced rather than corrected.

This concern becomes clearer when we observe how AI is already reshaping outcomes at the individual level in advanced economies. A recent debate in Canada highlights a growing divide between early-career and experienced workers. Professionals with established expertise use AI as a productivity multiplier. It accelerates analysis, improves output quality, and extends their reach. For younger workers, however, AI is eliminating many of the entry-level tasks that once served as informal apprenticeships, the work through which they built judgment, intuition, and professional confidence.

The underlying mechanism mirrors the macro story. AI amplifies skill; it does not generate it. Experienced workers know how to frame problems, evaluate outputs, and integrate partial results into coherent decisions. Early-career workers acquire these capabilities through practice, often by doing imperfect, routine, and time-consuming tasks. As those tasks disappear, the pathway from novice to expert narrows. What appears to be a labour-market disruption is, at its core, a learning and institutional problem.

What is happening within firms and careers therefore reflects the same logic that once operated at the level of entire economies. Just as oil rewarded countries that already had strong institutions, AI rewards individuals who already possess deep knowledge and judgment. And just as oil undermined development where governance capacity was weak, AI threatens to erode career ladders and national development trajectories where foundational skills and institutions remain underdeveloped.

Seen in this light, the Canadian experience is not an anomaly but an early signal. A widely discussed essay by Tony Frost and Christian Dippel in The Globe and Mail shows how artificial intelligence is widening gaps between early-career and experienced workers by displacing the very tasks through which judgment and expertise are traditionally developed. Although that discussion is grounded in a high-income country with relatively strong institutions, it offers a useful preview of dynamics that are likely to be more pronounced where institutional foundations are weaker. At the national level, African countries face similar risks. Without sustained investment in education, AI is likely to concentrate opportunity among a narrow elite. Without capable public institutions, algorithmic systems may be imported and relied upon without meaningful oversight. And without clear data governance, countries risk exporting raw data while importing finished intelligence, reproducing extractive relationships in digital form.

Higher education sits at the centre of this challenge. Universities are the primary institutions through which societies translate new technologies into widely shared capability. When they adapt slowly or defensively, technological change tends to benefit those who already have advantage.

In Uganda, this tension is increasingly visible. The National Council for Higher Education has pushed universities toward competence-based education, recognising that traditional content-heavy models are poorly aligned with labour-market realities. Curriculum reviews are underway across institutions, and there is growing agreement that graduates must demonstrate applied skills, judgment, and problem-solving ability rather than mastery of content alone.

Yet within these reforms, the role of artificial intelligence remains largely unresolved. Much of the discussion treats AI primarily as a threat to academic integrity or as a tool to be controlled. Far less attention has been given to how AI reshapes what competence itself means, or how it can be integrated into teaching, assessment, and supervision to strengthen reasoning rather than replace it. Even less effort has gone into preparing academic staff to work confidently and critically with AI, or into helping students learn how to use AI as a cognitive aid rather than a shortcut.

This gap matters. Competence-based education without AI risks becoming backward-looking, while AI adoption without competence-based thinking risks becoming extractive. If universities revise learning outcomes and assessment formats but ignore how AI changes the production of knowledge and skill, they may unintentionally widen inequality. Students with prior exposure, stronger educational backgrounds, or informal access to AI tools will benefit disproportionately, while others fall further behind.

From a development perspective, this is precisely how an AI curse would emerge. Not through dramatic technological failure, but through institutional lag. Universities would continue producing graduates formally certified as competent, yet unevenly prepared to think, judge, and integrate knowledge in an AI-rich environment. Academic staff would be pushed into a policing role rather than a pedagogical one. Over time, the gap between those who can work meaningfully with AI and those who merely coexist with it would widen.

Avoiding this outcome requires treating AI as a central feature of institutional reform rather than an afterthought. Preparing graduates for an AI-intensive economy means rethinking how competence is taught and assessed, how academic staff are trained, and how learning tasks are designed. It means embedding AI literacy, ethical reasoning, and applied judgment into curricula, rather than addressing AI only through restrictions and warnings.

Africa’s greatest risk, therefore, is not being left behind by AI. It is being integrated into the global AI economy in ways that lock in inequality and dependence, much as oil once did. The oil curse was recognised only after it had already reshaped political economies. With AI, there is still a narrow window to act differently. If that window closes, AI-driven inequality is likely to be faster, deeper, and harder to reverse than anything oil ever produced.

The lesson from development economics is sobering but clear. Resources do not curse societies. Institutions do. AI will not curse Africa on its own. But without deliberate institutional preparation, particularly within education systems, it risks becoming the most sophisticated version of an old and costly mistake.

Saturday, 31 January 2026

 

Talent and Luck Matter, but Divine Favor Completes Kahneman’s Equation

 

By Richard Sebaggala (PhD)

 

I recently read an article in The Economic Times reflecting on a deceptively simple idea from Daniel Kahneman. The quote was familiar and quietly unsettling in its honesty: success is a combination of talent and luck, while great success requires only a little more talent but a lot more luck.
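Kahneman's line can be made concrete with a toy simulation. This is my own illustrative sketch, not Kahneman's model: assume, purely for illustration, that each person's outcome is the sum of a talent score and a luck score, and that luck varies more widely than talent at the extremes. Under those assumptions, the people at the very top turn out to be only modestly above average in talent but far above average in luck:

```python
import random

random.seed(1)

# Toy model of Kahneman's remark, not his actual analysis:
# outcome = talent + luck, with luck assumed (for illustration only)
# to be more variable than talent.
N = 100_000
people = [(random.gauss(0, 1), random.gauss(0, 2)) for _ in range(N)]  # (talent, luck)
ranked = sorted(people, key=lambda p: p[0] + p[1], reverse=True)

def mean(xs):
    return sum(xs) / len(xs)

top = ranked[:100]                       # "great success": the top 0.1%
mid = ranked[N // 2 - 50: N // 2 + 50]   # ordinary outcomes around the median

print(f"great success: talent {mean([t for t, _ in top]):+.2f}, luck {mean([l for _, l in top]):+.2f}")
print(f"ordinary:      talent {mean([t for t, _ in mid]):+.2f}, luck {mean([l for _, l in mid]):+.2f}")
```

In this toy setup the top 0.1% are somewhat more talented than average, but their luck scores are several times larger, which is exactly the shape of Kahneman's remark: a little more talent, a lot more luck. The result depends on the assumed distributions, of course; the sketch only shows that the claim is internally coherent, not that it is empirically settled.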

 

As I read the piece, I agreed with much of it. The argument was clear, persuasive, and consistent with Kahneman’s long-standing warning against overstating skill and understating chance. Still, something stayed with me. The insights were sound, yet they felt incomplete when viewed from our context. There was a missing link, one that could widen the argument and make it speak more directly to fragile economies like Uganda and much of Africa.

 

For readers who may not be familiar with him, Kahneman is widely regarded as the father of behavioral economics. His work challenged the assumption that humans are consistently rational decision-makers. By showing how judgment is shaped by bias, heuristics, and randomness, he forced economics to take psychology seriously. His Nobel Prize recognized a simple but uncomfortable truth: markets and life outcomes are far messier than tidy models suggest.

 

That background matters because Kahneman’s wealth quote is not casual pessimism. It is a disciplined conclusion drawn from decades of studying how people misunderstand success. We prefer stories where intelligence, effort, and discipline explain outcomes neatly. Behavioral economics shows otherwise. Timing, networks, institutional gatekeepers, accidents, and macroeconomic shifts often matter just as much, and sometimes more. In today’s volatile economy, shaped by AI disruption, fragile labor markets, and political uncertainty, this insight feels especially relevant.

 

Believing that success is fully earned creates two problems. It breeds quiet arrogance among those who succeed, and it leaves those who struggle thinking their failure is entirely personal. Kahneman’s point unsettles both assumptions.

 

In fragile economies, this reality is not abstract. By fragile economies, I mean settings where institutions are thin, risks are personal, and the link between effort and outcome is unreliable. Talent matters, but it operates in environments where opportunities are uneven and pathways rarely linear. Two people with similar ability can end up in very different places because one met the right person at the right time, accessed capital when it was available, avoided a health or family shock, or simply arrived before a door closed. Hard work is necessary, but it is often not enough.

 

This is where context reshapes interpretation.

 

What economists describe as “luck” is rarely experienced here as blind randomness. In deeply religious societies, luck is commonly understood as God’s grace and favor. People speak of doors opening, protection appearing, and timing aligning in ways they did not plan or control. These experiences are not dismissed as coincidence. They are understood as outcomes shaped beyond individual effort.

 

Kahneman does not frame luck in theological terms, and that is consistent with his scientific approach. But acknowledging randomness does not rule out faith-based interpretations. It simply operates at a different level of explanation. The external factors that behavioral economics points to, such as health, timing, networks, and shocks, are what faith communities often describe as divine ordering. Both perspectives point to the same limitation: individuals do not control the full set of conditions that shape outcomes.

 

This distinction matters because belief systems shape behavior. In settings where people distrust God but fear witchcraft or small gods, luck becomes something to manipulate or fear. In settings where people trust God, luck is reframed as grace, something not coerced, but sought through humility, integrity, and right living.

That is why the biblical instruction to seek first God, and the rest will be added, resonates so strongly. It is not an argument against effort or skill development. It is an argument about order and limits. Capability alone does not guarantee outcomes. Effort alone does not control timing. Talent alone does not protect anyone from shocks.

 

Talent without God often drifts into pride.
Effort without grace often turns into exhaustion.
Skill without humility quietly becomes entitlement.

 

Seen this way, Kahneman’s equation is not wrong. It is incomplete. Completing it for fragile economies requires recognizing that success is not a mechanical outcome of inputs alone. It reflects a relationship between human agency and forces beyond it. Capability opens possibilities, but grace shapes which possibilities become real.

 

In a volatile economy, this perspective is grounding. It encourages serious investment in skills while remaining honest about limits. It protects those who are struggling from concluding that they are failures. It also reminds the successful that their position is not proof of superiority, but evidence of fortunate timing.

 

Perhaps the most realistic lesson is this: we should work as if effort matters deeply, and trust as if outcomes are not fully ours to command. Kahneman helps us see the limits of meritocracy. Faith helps us live wisely within those limits.

 

Tuesday, 13 January 2026

 

When Campaign Rallies Become Economic Lessons: How Deep Cognitive Bias Runs

By Richard Sebaggala (PhD)

 

 

For many years, I have followed Uganda’s presidential and parliamentary campaigns closely. Across election cycles, one pattern has remained remarkably consistent. Beyond the speeches and manifestos, campaign periods are marked by familiar scenes on the roads and in trading centres: convoys moving at high speed, boda boda riders escorting candidates recklessly, traffic rules ignored, and ordinary judgment seemingly suspended. Each election reinforces the same question—why do these behaviours repeat themselves so predictably?

What is often described as political excitement or youthful enthusiasm is, on closer inspection, something deeper. These rallies offer a revealing window into how authority, risk, and scarcity shape decision-making in our society. When viewed through an economic lens, they become more than political events. They become lessons in how cognitive bias operates at scale.

Young people, especially boda boda riders, trail candidates at high speed, ride against traffic, and take risks that would normally be avoided. In some instances, a simple remark or instruction from a candidate—whether sensible or not—is acted upon immediately, without hesitation or reflection. The behaviour is not random. It follows a pattern shaped by authority bias, where the presence of a leader overrides individual judgment and personal safety.

The candidate moves, and the convoy moves. Rules that apply on ordinary days suddenly feel optional. Safety becomes secondary. Riders follow not because it is rational or necessary, but because an authority figure is present and in motion. Judgment is effectively outsourced upward, while the costs of risk are borne individually.

This pattern extends far beyond campaign convoys. It reflects a broader tendency in which personalities override systems. Where authority easily replaces rules, institutions struggle to take root. Compliance becomes conditional on who is watching rather than on shared norms. Over time, this erodes accountability and weakens the very institutions needed for economic coordination and growth.

Alongside authority bias sits optimism bias—the belief that negative outcomes are more likely to happen to others than to oneself. Every rider who speeds through a crowded junction in a convoy assumes, often unconsciously, that nothing will go wrong for them. Accidents are abstract possibilities, not personal risks. The same mindset appears elsewhere in the economy, in low insurance uptake, weak safety practices, and limited preparation for shocks. When optimism bias dominates, risk is normalised and vulnerability accumulates quietly.

It is also important to understand why so many young people are drawn into these behaviours. Most are not acting out of ignorance or recklessness. They are responding to incentives shaped by scarcity. When income is unstable and opportunities are limited, the future feels uncertain and distant. Under such conditions, short-term benefits—small payments, fuel, food, recognition, or proximity to power—carry immediate value. Behaviour that appears irrational from a distance often makes sense in the moment.

This is where the development challenge becomes clearer. Scarcity does not only limit material choices; it narrows time horizons. When large segments of the population are locked into short-term thinking, investment in skills, safety, and long-term productivity becomes difficult. Growth requires patience, yet patience is costly when survival is uncertain.

More troubling still is how easily questionable statements or instructions from candidates are accepted and amplified during rallies. Remarks that are clearly impractical or economically unrealistic are often received with applause rather than scrutiny. Here, authority bias blends with confirmation bias. Ideas are accepted not because they are workable, but because they come from a trusted figure. Evidence and feasibility give way to allegiance.

In such an environment, public debate weakens. Elections risk becoming contests of belief rather than judgment. Promises replace plans, and enthusiasm substitutes for evaluation. From a development perspective, this matters deeply. Economies grow when citizens can question leaders, demand credible proposals, and distinguish aspiration from implementation.

The issue, then, is not simply about politics or which candidate wins. It is about how people relate to authority, risk, and incentives. Countries do not develop merely by holding elections. They develop when rules apply consistently, leadership is constrained by institutions, and individuals retain the capacity to think independently, even in the presence of power.

Campaign periods bring these dynamics into sharp focus. They act as large-scale behavioural tests, revealing how people respond to opportunity, uncertainty, and authority when emotions are high and incentives are visible. If we ignore what these moments reveal, we will continue to misdiagnose Uganda’s challenges as purely political or institutional. Some of the most binding constraints lie deeper, in the cognitive habits shaped by scarcity, obedience, and short-term survival.

The rallies will end. The noise will fade. But the patterns they expose do not disappear with the campaign season. They persist in how businesses are run, how policies are evaluated, and how risks are taken in everyday economic life.

That is why campaign rallies deserve attention not just from political analysts, but from anyone concerned with Uganda’s long-term development. They are not only about votes. They are economic lessons, played out in public, revealing how deep cognitive bias runs—and why addressing it is central to any serious conversation about growth.

As we head into the polls on 15th January 2026, whoever emerges victorious would do well to reflect on what these campaigns have revealed about our society. The cognitive biases on display—authority bias, optimism bias, short-termism driven by scarcity—are not marginal issues. They are central to how policies are received, how institutions function, and how citizens respond to reform. Ignoring them comes at a cost. Well-designed reforms and public interventions, when introduced into a population shaped by these biases, will struggle to gain traction or deliver results. If Uganda is to change its development narrative in a meaningful way, addressing cognitive bias must be treated as seriously as infrastructure, budgets, and laws. Without that attention, progress will remain fragile, and growth will continue to fall short of its promise.