Saturday, 7 February 2026

 

The Next Resource Curse Will Not Come from the Ground, but from AI

By Richard Sebaggala (PhD)

Anyone who has studied economics or political economy has encountered the idea of the resource curse. It is one of those concepts that, once learned, becomes difficult to ignore. The basic insight is not that natural resources are harmful, but that their effects depend on timing and institutions. When countries discover oil after building strong systems of governance, education, and accountability, the resource can support development. When oil arrives before those foundations are in place, it often distorts incentives, weakens institutions, and entrenches inequality.

Africa’s experience with oil illustrates this lesson clearly. In many countries, oil was discovered before institutions capable of managing it had matured. Rather than financing broad-based development, oil revenues reshaped political and economic behaviour. Governments became less dependent on taxation, weakening the relationship between citizens and the state. Political competition shifted toward control of resource rents, while long-term investments in human capital, skills formation, and institutional learning were crowded out by short-term extraction. The problem was never oil itself, but the institutional environment into which it arrived.

This pattern has repeated itself across regions. In Nigeria, oil wealth reduced pressure to build a diversified tax base and contributed to persistent governance challenges. In Angola, decades of oil exports coexisted with limited human capital development and fragile public institutions. Beyond Africa, Venezuela shows how even a country with relatively strong early indicators can succumb to the same dynamics when resource dependence undermines institutional discipline. Across these cases, corruption and leadership failures mattered, but they were symptoms rather than the root cause. At its core, the oil curse was a sequencing problem: a powerful resource arrived before societies had built the institutions needed to govern it.

What is less often recognised is that this logic applies far beyond natural resources. The same political-economy dynamics emerge whenever a powerful, economy-shaping input arrives before societies are ready to manage it. Today, artificial intelligence fits that description with unsettling precision.

AI is a general-purpose force, much like oil or electricity. It reshapes production, labour markets, and governance, not gradually but at speed. Yet AI does not create skills, judgment, or institutions on its own. It amplifies what already exists. Where educational systems are strong, where professional formation is deliberate, and where organisations are capable of learning, AI raises productivity and improves decision-making. Where those foundations are weak or uneven, the same technology magnifies fragility.

This makes the question of institutional timing unavoidable. In many developing countries, AI is spreading into economies where education systems remain oriented toward content delivery rather than competence formation, where labour markets offer limited structured learning pathways, and where public institutions struggle with capacity and coordination. Under such conditions, AI is unlikely to broaden opportunity. Instead, it risks reinforcing advantage among those who already possess skills, credentials, and institutional access.

The speed of this process adds to the risk. The oil curse unfolded slowly, often over decades. AI-driven divides can harden much faster. Once firms, universities, and public agencies reorganise around AI-intensive systems, late institutional adjustment becomes costly and politically difficult. Education systems, in particular, risk becoming sites where inequality is quietly reproduced rather than corrected.

This concern becomes clearer when we observe how AI is already reshaping outcomes at the individual level in advanced economies. A recent debate in Canada highlights a growing divide between early-career and experienced workers. Professionals with established expertise use AI as a productivity multiplier. It accelerates analysis, improves output quality, and extends their reach. For younger workers, however, AI is eliminating many of the entry-level tasks that once served as informal apprenticeships, through which they built judgment, intuition, and professional confidence.

The underlying mechanism mirrors the macro story. AI amplifies skill; it does not generate it. Experienced workers know how to frame problems, evaluate outputs, and integrate partial results into coherent decisions. Early-career workers acquire these capabilities through practice, often by doing imperfect, routine, and time-consuming tasks. As those tasks disappear, the pathway from novice to expert narrows. What appears to be a labour-market disruption is, at its core, a learning and institutional problem.

What is happening within firms and careers therefore reflects the same logic that once operated at the level of entire economies. Just as oil rewarded countries that already had strong institutions, AI rewards individuals who already possess deep knowledge and judgment. And just as oil undermined development where governance capacity was weak, AI threatens to erode career ladders and national development trajectories where foundational skills and institutions remain underdeveloped.

Seen in this light, the Canadian experience is not an anomaly but an early signal. Recent debates in Canada, including a widely discussed essay by Tony Frost and Christian Dippel in The Globe and Mail, show how artificial intelligence is widening gaps between early-career and experienced workers by displacing the very tasks through which judgment and expertise are traditionally developed. Although this discussion is grounded in a high-income country with relatively strong institutions, it offers a useful preview of dynamics that are likely to be more pronounced where institutional foundations are weaker. At the national level, African countries face similar risks. Without sustained investment in education, AI is likely to concentrate opportunity among a narrow elite. Without capable public institutions, algorithmic systems may be imported and relied upon without meaningful oversight. And without clear data governance, countries risk exporting raw data while importing finished intelligence, reproducing extractive relationships in digital form.

Higher education sits at the centre of this challenge. Universities are the primary institutions through which societies translate new technologies into widely shared capability. When they adapt slowly or defensively, technological change tends to benefit those who already have advantage.

In Uganda, this tension is increasingly visible. The National Council for Higher Education has pushed universities toward competence-based education, recognising that traditional content-heavy models are poorly aligned with labour-market realities. Curriculum reviews are underway across institutions, and there is growing agreement that graduates must demonstrate applied skills, judgment, and problem-solving ability rather than mastery of content alone.

Yet within these reforms, the role of artificial intelligence remains largely unresolved. Much of the discussion treats AI primarily as a threat to academic integrity or as a tool to be controlled. Far less attention has been given to how AI reshapes what competence itself means, or how it can be integrated into teaching, assessment, and supervision to strengthen reasoning rather than replace it. Even less effort has gone into preparing academic staff to work confidently and critically with AI, or into helping students learn how to use AI as a cognitive aid rather than a shortcut.

This gap matters. Competence-based education without AI risks becoming backward-looking, while AI adoption without competence-based thinking risks becoming extractive. If universities revise learning outcomes and assessment formats but ignore how AI changes the production of knowledge and skill, they may unintentionally widen inequality. Students with prior exposure, stronger educational backgrounds, or informal access to AI tools will benefit disproportionately, while others fall further behind.

From a development perspective, this is precisely how an AI curse would emerge. Not through dramatic technological failure, but through institutional lag. Universities would continue producing graduates formally certified as competent, yet unevenly prepared to think, judge, and integrate knowledge in an AI-rich environment. Academic staff would be pushed into a policing role rather than a pedagogical one. Over time, the gap between those who can work meaningfully with AI and those who merely coexist with it would widen.

Avoiding this outcome requires treating AI as a central feature of institutional reform rather than an afterthought. Preparing graduates for an AI-intensive economy means rethinking how competence is taught and assessed, how academic staff are trained, and how learning tasks are designed. It means embedding AI literacy, ethical reasoning, and applied judgment into curricula, rather than addressing AI only through restrictions and warnings.

Africa’s greatest risk, therefore, is not being left behind by AI. It is being integrated into the global AI economy in ways that lock in inequality and dependence, much as oil once did. The oil curse was recognised only after it had already reshaped political economies. With AI, there is still a narrow window to act differently. If that window closes, AI-driven inequality is likely to be faster, deeper, and harder to reverse than anything oil ever produced.

The lesson from development economics is sobering but clear. Resources do not curse societies. Institutions do. AI will not curse Africa on its own. But without deliberate institutional preparation, particularly within education systems, it risks becoming the most sophisticated version of an old and costly mistake.

Saturday, 31 January 2026

 

Talent and Luck Matter, but Divine Favor Completes Kahneman’s Equation

 

By Richard Sebaggala (PhD)

 

I recently read an article in The Economic Times reflecting on a deceptively simple idea from Daniel Kahneman. The quote was familiar and quietly unsettling in its honesty: success is a combination of talent and luck, while great success requires only a little more talent but a lot more luck.

 

As I read the piece, I agreed with much of it. The argument was clear, persuasive, and consistent with Kahneman’s long-standing warning against overstating skill and understating chance. Still, something stayed with me. The insights were sound, yet they felt incomplete when viewed from our context. There was a missing link, one that could widen the argument and make it speak more directly to fragile economies like Uganda and much of Africa.

 

For readers who may not be familiar with him, Kahneman is widely regarded as the father of behavioral economics. His work challenged the assumption that humans are consistently rational decision-makers. By showing how judgment is shaped by bias, heuristics, and randomness, he forced economics to take psychology seriously. His Nobel Prize recognized a simple but uncomfortable truth: markets and life outcomes are far messier than tidy models suggest.

 

That background matters because Kahneman’s wealth quote is not casual pessimism. It is a disciplined conclusion drawn from decades of studying how people misunderstand success. We prefer stories where intelligence, effort, and discipline explain outcomes neatly. Behavioral economics shows otherwise. Timing, networks, institutional gatekeepers, accidents, and macroeconomic shifts often matter just as much, and sometimes more. In today’s volatile economy, shaped by AI disruption, fragile labor markets, and political uncertainty, this insight feels especially relevant.

 

Believing that success is fully earned creates two problems. It breeds quiet arrogance among those who succeed, and it leaves those who struggle thinking their failure is entirely personal. Kahneman’s point unsettles both assumptions.

 

In fragile economies, this reality is not abstract. By fragile economies, I mean settings where institutions are thin, risks are personal, and the link between effort and outcome is unreliable. Talent matters, but it operates in environments where opportunities are uneven and pathways rarely linear. Two people with similar ability can end up in very different places because one met the right person at the right time, accessed capital when it was available, avoided a health or family shock, or simply arrived before a door closed. Hard work is necessary, but it is often not enough.

 

This is where context reshapes interpretation.

 

What economists describe as “luck” is rarely experienced here as blind randomness. In deeply religious societies, luck is commonly understood as God’s grace and favor. People speak of doors opening, protection appearing, and timing aligning in ways they did not plan or control. These experiences are not dismissed as coincidence. They are understood as outcomes shaped beyond individual effort.

 

Kahneman does not frame luck in theological terms, and that is consistent with his scientific approach. But acknowledging randomness does not rule out faith-based interpretations. It simply operates at a different level of explanation. What behavioral economics calls external factors such as health, timing, networks, and shocks, faith communities often describe as divine ordering. Both perspectives point to the same limitation: individuals do not control the full set of conditions that shape outcomes.

 

This distinction matters because belief systems shape behavior. In settings where people distrust God but fear witchcraft or small gods, luck becomes something to manipulate or fear. In settings where people trust God, luck is reframed as grace, something not coerced, but sought through humility, integrity, and right living.

That is why the biblical instruction to seek first God, and the rest will be added, resonates so strongly. It is not an argument against effort or skill development. It is an argument about order and limits. Capability alone does not guarantee outcomes. Effort alone does not control timing. Talent alone does not protect anyone from shocks.

 

Talent without God often drifts into pride.
Effort without grace often turns into exhaustion.
Skill without humility quietly becomes entitlement.

 

Seen this way, Kahneman’s equation is not wrong. It is incomplete. Completing it for fragile economies requires recognizing that success is not a mechanical outcome of inputs alone. It reflects a relationship between human agency and forces beyond it. Capability opens possibilities, but grace shapes which possibilities become real.

 

In a volatile economy, this perspective is grounding. It encourages serious investment in skills while remaining honest about limits. It protects those who are struggling from concluding that they are failures. It also reminds the successful that their position is not proof of superiority, but evidence of fortunate timing.

 

Perhaps the most realistic lesson is this: we should work as if effort matters deeply, and trust as if outcomes are not fully ours to command. Kahneman helps us see the limits of meritocracy. Faith helps us live wisely within those limits.

 

Tuesday, 13 January 2026

 

When Campaign Rallies Become Economic Lessons: How Deep Cognitive Bias Runs

By Richard Sebaggala (PhD)

 

 

For many years, I have followed Uganda’s presidential and parliamentary campaigns closely. Across election cycles, one pattern has remained remarkably consistent. Beyond the speeches and manifestos, campaign periods are marked by familiar scenes on the roads and in trading centres: convoys moving at high speed, boda boda riders escorting candidates recklessly, traffic rules ignored, and ordinary judgment seemingly suspended. Each election reinforces the same question—why do these behaviours repeat themselves so predictably?

What is often described as political excitement or youthful enthusiasm is, on closer inspection, something deeper. These rallies offer a revealing window into how authority, risk, and scarcity shape decision-making in our society. When viewed through an economic lens, they become more than political events. They become lessons in how cognitive bias operates at scale.

Young people, especially boda boda riders, trail candidates at high speed, ride against traffic, and take risks that would normally be avoided. In some instances, a simple remark or instruction from a candidate—whether sensible or not—is acted upon immediately, without hesitation or reflection. The behaviour is not random. It follows a pattern shaped by authority bias, where the presence of a leader overrides individual judgment and personal safety.

The candidate moves, and the convoy moves. Rules that apply on ordinary days suddenly feel optional. Safety becomes secondary. Riders follow not because it is rational or necessary, but because an authority figure is present and in motion. Judgment is effectively outsourced upward, while the costs of risk are borne individually.

This pattern extends far beyond campaign convoys. It reflects a broader tendency in which personalities override systems. Where authority easily replaces rules, institutions struggle to take root. Compliance becomes conditional on who is watching rather than on shared norms. Over time, this erodes accountability and weakens the very institutions needed for economic coordination and growth.

Alongside authority bias sits optimism bias—the belief that negative outcomes are more likely to happen to others than to oneself. Every rider who speeds through a crowded junction in a convoy assumes, often unconsciously, that nothing will go wrong for them. Accidents are abstract possibilities, not personal risks. The same mindset appears elsewhere in the economy, in low insurance uptake, weak safety practices, and limited preparation for shocks. When optimism bias dominates, risk is normalised and vulnerability accumulates quietly.

It is also important to understand why so many young people are drawn into these behaviours. Most are not acting out of ignorance or recklessness. They are responding to incentives shaped by scarcity. When income is unstable and opportunities are limited, the future feels uncertain and distant. Under such conditions, short-term benefits—small payments, fuel, food, recognition, or proximity to power—carry immediate value. Behaviour that appears irrational from a distance often makes sense in the moment.

This is where the development challenge becomes clearer. Scarcity does not only limit material choices; it narrows time horizons. When large segments of the population are locked into short-term thinking, investment in skills, safety, and long-term productivity becomes difficult. Growth requires patience, yet patience is costly when survival is uncertain.

More troubling still is how easily questionable statements or instructions from candidates are accepted and amplified during rallies. Remarks that are clearly impractical or economically unrealistic are often received with applause rather than scrutiny. Here, authority bias blends with confirmation bias. Ideas are accepted not because they are workable, but because they come from a trusted figure. Evidence and feasibility give way to allegiance.

In such an environment, public debate weakens. Elections risk becoming contests of belief rather than judgment. Promises replace plans, and enthusiasm substitutes for evaluation. From a development perspective, this matters deeply. Economies grow when citizens can question leaders, demand credible proposals, and distinguish aspiration from implementation.

The issue, then, is not simply about politics or which candidate wins. It is about how people relate to authority, risk, and incentives. Countries do not develop merely by holding elections. They develop when rules apply consistently, leadership is constrained by institutions, and individuals retain the capacity to think independently, even in the presence of power.

Campaign periods bring these dynamics into sharp focus. They act as large-scale behavioural tests, revealing how people respond to opportunity, uncertainty, and authority when emotions are high and incentives are visible. If we ignore what these moments reveal, we will continue to misdiagnose Uganda’s challenges as purely political or institutional. Some of the most binding constraints lie deeper, in the cognitive habits shaped by scarcity, obedience, and short-term survival.

The rallies will end. The noise will fade. But the patterns they expose do not disappear with the campaign season. They persist in how businesses are run, how policies are evaluated, and how risks are taken in everyday economic life.

That is why campaign rallies deserve attention not just from political analysts, but from anyone concerned with Uganda’s long-term development. They are not only about votes. They are economic lessons, played out in public, revealing how deep cognitive bias runs—and why addressing it is central to any serious conversation about growth.

As we head into the polls on 15 January 2026, whoever emerges victorious would do well to reflect on what these campaigns have revealed about our society. The cognitive biases on display—authority bias, optimism bias, short-termism driven by scarcity—are not marginal issues. They are central to how policies are received, how institutions function, and how citizens respond to reform. Ignoring them comes at a cost. Well-designed reforms and public interventions, when introduced into a population shaped by these biases, will struggle to gain traction or deliver results. If Uganda is to change its development narrative in a meaningful way, addressing cognitive bias must be treated as seriously as infrastructure, budgets, and laws. Without that attention, progress will remain fragile, and growth will continue to fall short of its promise.

Tuesday, 30 December 2025


 

The Eagle’s Economics: Rebuilding for Strength and Growth in the New Year

By Richard Sebaggala (PhD)

 

 Most people have seen the American eagle as a symbol of power—wide wings, sharp eyes, effortless command of the sky. Fewer people know the story often told about what happens when that same eagle grows old. It is said that after decades of dominance, the eagle’s body begins to betray him. His beak, once razor-sharp, grows dull. His talons lose their grip. His feathers thicken, grow heavy, and begin to weigh him down. Hunting becomes difficult. Flying becomes exhausting. What once made him strong slowly becomes what threatens his survival.

 At that point, the eagle is said to face a choice that determines everything. He can continue as he is, slowly declining until he can no longer feed or fly. Or he can do something far more painful and far more deliberate. He flies to a lonely cliff, far from safety and comfort, and begins to smash his beak against the rock until it breaks off completely. For weeks he waits, weak and exposed, until a new beak grows. When it does, he uses it to pluck out his old feathers one by one, enduring pain so that lighter, stronger feathers can replace them. Only after this long, brutal process does he rise again—renewed, not because time spared him, but because he chose to rebuild rather than fade.

 Whether taken as biology or metaphor, the power of this story lies in its economic clarity. As we stand at the beginning of a new year, it offers one of the most honest lessons about progress. Growth does not come from preserving everything we have accumulated. It comes from knowing what must be dismantled. Economists describe this process as creative destruction: the uncomfortable but necessary replacement of old, inefficient structures with new and more productive ones. What markets go through in cycles, individuals must sometimes go through in life.

 The difficulty is that people are rarely held back by a lack of effort. More often, they are held back by attachment. We cling to what once worked because we invested time, energy, and identity in it. In economic terms, these are sunk costs—past investments that cannot be recovered and should not influence future decisions. In human terms, they feel personal. A profession that once offered growth but now offers only routine. A lifestyle that once felt rewarding but now drains health and focus. Habits that once felt harmless but now quietly tax every productive hour. Like the eagle’s dull beak and heavy feathers, these things remain not because they still serve us, but because letting go feels like admitting loss.

 This is why so many people enter a new year talking about adding—new goals, new resolutions, new ambitions—without first confronting what must be removed. Yet the real constraint on progress is rarely the absence of new ideas. It is the presence of old weight. A poor saving culture is not simply a financial issue; it removes the flexibility to take risks and adapt. Chronic sleep deprivation is not just a health problem; it reduces cognitive capacity and decision quality. Staying in the wrong job is not merely about income; it carries an enormous opportunity cost—the person you could have become if your skills were redeployed elsewhere.

 The eagle’s decision to smash his beak is, at its core, a cost–benefit calculation. The short-term costs are severe: pain, vulnerability, hunger, uncertainty. The long-term benefit is survival and renewed strength. Humans face similar trade-offs, though less visibly dramatic. Leaving a stagnant role, saving instead of consuming, rebuilding discipline, or re-skilling later in life all feel like losses at first. They reduce comfort before they increase capacity. But avoiding these costs does not make them disappear; it simply replaces visible pain with invisible decline.

What makes the eagle’s story especially instructive is that renewal is not partial. He does not grow a new beak and keep the old feathers. The process is comprehensive. In human terms, meaningful change rarely comes from isolated adjustments. Professional reassessment without lifestyle discipline fails. Financial goals without behavioral change collapse. Ambition without rest and focus burns out. Renewal works when the whole system is addressed, patiently and deliberately.

There is also an overlooked phase in the story: waiting. After the beak breaks, after the feathers are removed, the eagle does not immediately soar. He endures a period of weakness. This is the phase most people misinterpret as failure. In reality, it is investment time—the uncomfortable interval where effort has been committed but results have not yet appeared. Growth compounds quietly before it becomes visible.

 As the year begins, the more useful question is not what you want to add to your life, but what you are still carrying that no longer belongs there. What habits, roles, routines, or assumptions have become heavy feathers? What familiar structures now act as dull tools? Progress, whether in economies or in lives, does not reward preservation for its own sake. It rewards those who recognize when rebuilding is no longer optional.

 The eagle rises again not because he avoided pain, but because he chose it deliberately. That choice—uncomfortable, disciplined, and forward-looking—is the real lesson. If there is a mandate for the new year, it is this: have the courage to break what is holding you down, so that what is meant to carry you forward can finally grow.

Saturday, 27 December 2025

 

The Economics of Small Choices: Lessons from the Power of Compounding 

By Richard Sebaggala (PhD)

 

 

The year’s end has a way of slowing us down. Christmas offers a pause, the New Year kindles expectations. In the space between reflection and anticipation, certain lessons become clearer than they do during the rush of ordinary days.

One such lesson comes from a simple thought experiment.

Imagine you are offered two options. You can take three million dollars in cash today, or you can take one cent that doubles in value every day for thirty-one days. Most people would take the three million without hesitation. It feels sensible, immediate, and certain.

Yet, that instinct exposes something important about how we think.

The mathematics behind the one-cent proposition is straightforward. On day one, you have $0.01. On day two, it becomes $0.02. On day three, $0.04. Each day, the amount is multiplied by two. In simple terms, the value on any given day is the starting amount multiplied by two raised to the power of the number of days that have passed.

After ten days, the total is only $5.12. After twenty days, it is approximately $5,243. Even by day twenty-nine, the amount is roughly $2.7 million—still below the three million offered at the start. It is only in the final two days that the growth becomes dramatic, reaching more than ten million dollars by day thirty-one.
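These figures are easy to verify. The short Python sketch below is an illustration added for clarity (the helper name `doubling_value` is my own); it computes the value on day n as $0.01 × 2^(n−1):

```python
def doubling_value(day: int, start: float = 0.01) -> float:
    """Value on a given day when an amount doubles once per day,
    starting from `start` dollars on day one."""
    return start * 2 ** (day - 1)

# Print the milestones mentioned in the text.
for day in (1, 10, 20, 29, 31):
    print(f"Day {day:2d}: ${doubling_value(day):,.2f}")
    # Day 10 prints about $5.12; day 31 exceeds $10 million.
```

Running it confirms the pattern: nothing special happens on the final days except that the same doubling rule is now applied to a much larger base.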

 

Nothing magical happens on the last day. The same rule applies throughout; what changes is scale. This is the nature of compounding.

This principle extends far beyond finance.

Over the years, I have consistently encouraged people to engage with new tools, especially artificial intelligence, long before it became mainstream. The message was always the same: do not wait to master everything, start using it in small ways. Read with it. Write with it. Use it to think more clearly, not to avoid thinking.

Many people were hesitant. Some dismissed it. A few quietly experimented.

Today, the difference is obvious. Those who started early, even imperfectly, are now confident and adaptive. They did not become experts overnight; they accumulated familiarity. Daily exposure replaced fear with understanding. Small, repeated application led to real transformation. Those who waited for the "perfect moment" are now trying to catch up, wondering how the gap became so wide.

That is how compounding works in real life. For a long time, progress looks unimpressive. At times, it looks like nothing is happening at all. The temptation to stop is strongest just before momentum becomes tangible.

This logic applies directly to postgraduate students, though it is rarely framed in these terms. For instance, many PhD students perceive progress as episodic: a breakthrough chapter, a long writing retreat, or a sudden moment of clarity. When these do not materialize, frustration sets in and confidence begins to erode. In practice, most PhDs are achieved through accumulation rather than intermittent intensity. Reading a small number of papers consistently, writing a few hundred words regularly, refining one argument at a time, or cleaning data in small batches may feel insignificant on any given day. Over months and years, these efforts compound into a coherent thesis.

 

The same logic applies to marriage, though we rarely describe it this way. Many people think marital happiness stems from big moments: grand gestures, expensive trips, or dramatic apologies. These matter, but they do not sustain a relationship. What sustains a marriage is a series of small, repeated actions: listening carefully, speaking kindly when tired, praying together (even briefly), addressing misunderstandings early, and expressing appreciation when it feels unnecessary.

Neglect, too, compounds. Small, dismissive comments; unresolved resentment; emotional withdrawal. Each seems minor in isolation. Over time, they accumulate. Most marriages do not collapse suddenly; they move gradually in one direction or the other.

As 2026 approaches, the temptation will be to strive for dramatic change. Many New Year resolutions fail not because they are misguided, but because they are too large and disconnected from daily practice. Compounding does not require intensity; it requires consistency.

The real question is not what major breakthrough you want next year. It is what small action you are willing to repeat every day, even when it feels insignificant. One page read; one honest conversation; one deliberate effort to improve your workflow; one act of patience; one choice to show care.

Like the one cent, these actions do not look impressive at the beginning. But life, like mathematics, respects accumulation.
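The arithmetic behind the one-cent illustration is easy to check. The sketch below is my own, assuming a 30-day horizon and a flat $100-a-day alternative purely for comparison; neither figure comes from the post itself.

```python
# Illustrative sketch: one cent doubled daily versus a flat daily amount.
# The 30-day horizon and the $100/day comparison are assumptions for illustration.

def doubled_cent(days: int) -> float:
    """Value of $0.01 after being doubled once per day for `days` days."""
    return 0.01 * (2 ** days)

def flat_daily(amount: float, days: int) -> float:
    """Total from receiving a fixed amount every day for `days` days."""
    return amount * days

for d in (10, 20, 30):
    print(f"Day {d:2d}: doubled cent = ${doubled_cent(d):>13,.2f}, "
          f"flat $100/day = ${flat_daily(100, d):>9,.2f}")
```

For most of the month the doubled cent looks trivial next to the flat payment; only in the final days does the curve overtake it, which is exactly the pattern the essay describes.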

Christmas reminds us that the greatest change entered the world quietly. The New Year invites us to live differently—not dramatically, but deliberately. If you commit to the logic of compounding in your work, your learning, your faith, and your relationships, you may not feel successful in January. But by December 2026, you will understand why small beginnings truly matter.

Saturday, 20 December 2025


Wise Generosity in Hard Times: Poverty and Mental Bandwidth

By Richard Sebaggala (PhD)

In the last weeks of December, I have deliberately chosen to pause my usual writing on the economics of AI and focus on a more fundamental issue: one that shapes everyday economic life for millions of African families. December is a season of celebration, reflection, generosity, travel, and social obligation. Yet, it is also the prelude to one of the most financially stressful periods of the year.

From December into January and February, households transition abruptly from festivity to financial strain. Savings are stretched by food, ceremonies, transport, gifts, and the unspoken obligations that come with being part of a family, community, and church. Almost immediately, schools reopen. Fees, uniforms, transport, and learning materials arrive when financial buffers are weakest. What may seem like poor planning is often a timing problem: heavy spending followed by unavoidable obligations stacked too closely together.


This coming cycle is likely to be even more demanding in Uganda. National general elections fall within this same period, adding further strain to the economy. Election seasons divert public spending, slow private investment, heighten uncertainty, and introduce psychological stress. Highly contested elections bring economic costs that extend beyond budgets and markets, reaching into households already under pressure.


While reflecting on this convergence (festivities, school fees, elections, and uncertainty), I was reminded of an Oxford-linked study among Indian farmers that explained something many Africans feel but rarely name: poverty and financial pressure quietly reduce our ability to think clearly.


The study followed the same farmers before harvest, when money was scarce, and after harvest, when income had arrived. The findings were striking. The same individuals performed significantly worse on cognitive tests when they were financially stressed. The gap was equivalent to losing up to thirteen IQ points. Their intelligence had not changed. What had changed was the mental burden of financial worry.


The conclusion was simple but profound: poverty drains not only income but also mental bandwidth.

Once this insight is understood, African economic life appears in a different light. Across our societies, financial stress is rarely private. African life is structured around openness (to family, kinship networks, community, and church). Even small signs of stability attract moral expectations to support others. These expectations are rooted in solidarity and shared survival, and they have sustained communities for generations.

Yet, they also carry a hidden cost.

Demands placed on individuals often exceed their income and capacity. Support given rarely satisfies expectations, not because people are ungrateful, but because need itself is deep and widespread. The result is a quiet but persistent form of cognitive taxation: when one individual becomes responsible for many problems they cannot realistically solve, the mind is left permanently managing emergencies.

Individually, it is very difficult to resist these pressures. Saying no feels unchristian. Setting limits appears selfish. Yet constant exposure to open-ended demands leaves little mental space for planning, growth, or peace.

This is where organised social groups matter in a way that is often misunderstood.  Groups such as Munno Daala are effective not only because they pool resources, but because they create structure. Membership defines contributions, expectations, and beneficiaries. It gives individuals a socially legitimate basis to say, "I am already committed elsewhere." In highly communal societies, this matters enormously.

Such groups reduce the cognitive burden imposed by limitless demands. They allow individuals to focus support within a small, defined circle where reciprocity is clearer, and assistance is assured in times of crisis. This does not mean abandoning family or community. It means preventing one household from being overwhelmed by unbounded expectations.

At this point, it is important to be clear: this argument is not against generosity. In fact, it is deeply aligned with the Christian understanding of giving.

Scripture teaches generosity that is intentional, structured, and sustainable. In Deuteronomy 24:17–22, God commands the Israelites not to harvest everything, but to leave what is missed for the foreigner, the fatherless, and the widow. This was not reckless giving; it was a carefully designed system. The harvest itself was secured, boundaries were clear, beneficiaries were specified, and dignity was preserved through work rather than dependence.

Biblical hospitality was never about limitless personal obligation. It was about creating social arrangements that protected both the vulnerable and the giver. The goal was not to exhaust households, but to reflect God’s generosity in ways that sustained the community over time.

Seen this way, organised groups like Munno Daala are not unchristian alternatives to generosity; they are modern expressions of biblical wisdom. They allow people to remain generous without collapsing under cognitive and financial overload. They preserve mental space, and mental space is necessary for discernment, compassion, and faithfulness.

Trusting God’s provision does not require abandoning wisdom. Scripture consistently links generosity with prudence, planning, and stewardship. A constantly overwhelmed mind struggles not only to plan economically but also to love well.

Reducing cognitive overload, therefore, is not selfish: it is responsible. It allows individuals to think clearly, strengthen their households, and ultimately increase their capacity to support others meaningfully.

The Oxford study leaves us with a lesson that resonates deeply with both economics and faith. Intelligence is not scarce in Africa; mental bandwidth is. That bandwidth is depleted not only by poverty itself, but by poorly timed obligations, stacked pressures, and repeated shocks without buffers.

If we want stronger families, wiser decisions, and more resilient communities, we must pay attention not only to income, but also to how expectations and responsibilities are structured. Progress will not come only from earning more, but from fewer emergencies, clearer boundaries, and stronger communal systems.

Poverty, it turns out, is not just about how little we have. It is also about how much of our mind it takes away, and how wisely we steward what remains.

Saturday, 13 December 2025


When AI Gets It Wrong and Why That Shouldn’t Scare Us

By Richard Sebaggala (PhD)

Stories about lawyers being “caught using AI wrongly” have become a familiar feature of professional headlines. One recent case in Australia illustrates the pattern. A King’s Counsel, together with junior counsel and their instructing solicitor, was referred to state disciplinary bodies after artificial intelligence–generated errors were discovered in court submissions. The documents contained fabricated or inaccurate legal references—so-called hallucinations—which were not identified before filing. When the court sought an explanation, the responses were unsatisfactory. Costs were awarded against the legal team, and responsibility for the errors became a matter for regulators.

The episode was widely reported, often with a tone of alarm. Artificial intelligence, the implication ran, had intruded into the courtroom with damaging consequences. The lesson appeared obvious: AI is unreliable and should be kept well away from serious professional work.

That conclusion, however, is too simple—and ultimately unhelpful. The problem in this case was not that artificial intelligence produced errors. It was that its output was treated as authoritative rather than provisional. What failed was not the technology itself, but the assumptions made about what the technology could do.

Hallucinations are not moral lapses, nor are they merely the result of careless users. They are a structural limitation of current large language models, arising from how these systems are built and trained. Even developers acknowledge that hallucinations have not been fully eliminated. To frame such incidents as scandals is to overlook a more productive question: how should AI be used, and where should it not be trusted?

A small experiment of my own makes the point more clearly. I recently asked ChatGPT to convert students’ group course marks into an Excel-style table, largely to avoid the tedium of manual data entry. The task involved nothing more than copying names, registration numbers, and marks into a clean, structured format. The result looked impeccable at first glance—neatly aligned, professionally presented, and entirely plausible. Yet closer inspection revealed several errors. Registration numbers had been swapped between students, and in some cases, marks were attributed to the wrong individuals, despite the original data being correct.

When I queried why such mistakes had occurred, given the simplicity of the task, the explanation lay in how AI systems operate. These models do not “see” data as humans do. They do not inherently track identity, ownership, or factual relationships unless those constraints are explicitly imposed. Instead, they generate text by predicting what is most likely to come next, based on patterns absorbed during training.

When faced with structured material—tables, grades, legal citations, or names linked to numbers—the system tends to prioritise surface coherence over factual precision. The output looks right, but there is no internal mechanism verifying consistency or truth. This is the same dynamic that produced fabricated case citations in the King’s Counsel matter, and it is why hallucinations also appear in academic references, medical summaries, and financial reports.

Language models are not databases, nor are they calculators. They generate language probabilistically. When asked to reproduce or reorganise factual information, they may quietly reshape it, smoothing entries or rearranging details in ways that make linguistic sense but undermine accuracy. The problem is compounded by the absence of an internal truth-checking function. Unless an AI system is deliberately connected to verified external sources—databases, spreadsheets, citation tools—it has no reliable way of knowing when it is wrong. Confidence, in this context, is meaningless.

The risk increases further when many similar elements appear together. Names, numbers, and references can blur, particularly in long or complex prompts. That is what happened in my grading exercise and what appears to have happened in the legal case. Add to this the way such systems are trained—rewarded for producing answers rather than declining to respond—and the persistence of hallucinations becomes easier to understand. Faced with uncertainty, the model will usually generate something rather than admit ignorance.

This is why the lawyers involved did not err simply by using AI. They erred by relying on its output without independent verification. The same risk confronts lecturers, accountants, doctors, policy analysts, and researchers. In all these fields, responsibility does not shift to the machine. It remains with the professional.
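Independent verification of this kind need not be laborious. The sketch below shows one way to check an AI-transcribed marks table against the original records programmatically; all names, registration numbers, and marks are invented examples, not data from the grading exercise described above.

```python
# Hypothetical sketch: never accept an AI-reformatted table without checking it
# row by row against the source. All data below is invented for illustration.

original = {
    "REG001": ("Aine M.", 72),
    "REG002": ("Okello J.", 65),
    "REG003": ("Nansubuga R.", 80),
}

# Suppose this is what the AI returned: one mark has been silently swapped.
ai_output = [
    ("REG001", "Aine M.", 72),
    ("REG002", "Okello J.", 80),   # wrong: the source says 65
    ("REG003", "Nansubuga R.", 80),
]

def find_mismatches(original, ai_rows):
    """Return rows where the AI transcription disagrees with the source records."""
    errors = []
    for reg, name, mark in ai_rows:
        src = original.get(reg)
        if src is None or src != (name, mark):
            errors.append((reg, src, (name, mark)))
    return errors

mismatches = find_mismatches(original, ai_output)
for reg, expected, got in mismatches:
    print(f"{reg}: source has {expected}, AI produced {got}")
```

A check like this takes seconds to run, and it is precisely the step whose omission turned a convenience into a liability in the cases above.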

Used properly, artificial intelligence is a powerful tool. It excels at drafting, organising ideas, summarising material, and reducing the burden of repetitive work. It can free time for deeper thinking and better judgment. Where it remains weak is in factual custody, precise attribution, and tasks where small errors carry serious consequences. Confusing these roles is what turns a useful assistant into a liability.

The lesson to draw from recent headlines is therefore not that AI should be avoided. It is that its limits must be understood. AI can work alongside human judgment, but it cannot replace it. When that boundary is respected, the technology becomes a collaborator rather than a shortcut—an amplifier of human reasoning rather than a substitute for it.

Fear, in this context, is the wrong response. What is needed instead is literacy: a clear-eyed understanding of what AI can do well, what it does poorly, and where human oversight is indispensable. The gains on offer—in productivity, creativity, and learning—are too substantial to be dismissed on the basis of misunderstood failures.