Saturday, 20 December 2025

 

Wise Generosity in Hard Times: Poverty and Mental Bandwidth

By Richard Sebaggala (PhD)

 In the last weeks of December, I have deliberately chosen to pause my usual writing on the economics of AI and focus on a more fundamental issue: one that shapes everyday economic life for millions of African families. December is a season of celebration, reflection, generosity, travel, and social obligation. Yet, it is also the prelude to one of the most financially stressful periods of the year.

From December into January and February, households transition abruptly from festivity to financial strain. Savings are stretched by food, ceremonies, transport, gifts, and the unspoken obligations that come with being part of a family, community, and church. Almost immediately, schools reopen. Fees, uniforms, transport, and learning materials arrive when financial buffers are weakest. What may seem like poor planning is often a timing problem: heavy spending followed by unavoidable obligations stacked too closely together.

 

This coming cycle is likely to be even more demanding in Uganda. National general elections fall within this same period, adding further strain to the economy. Election seasons divert public spending, slow private investment, heighten uncertainty, and introduce psychological stress. Highly contested elections bring economic costs that extend beyond budgets and markets, reaching into households already under pressure.

 

While reflecting on this convergence (festivities, school fees, elections, and uncertainty), I was reminded of an Oxford-linked study among Indian farmers that explained something many Africans feel but rarely name: poverty and financial pressure quietly reduce our ability to think clearly.

 

The study followed the same farmers before harvest, when money was scarce, and after harvest, when income had arrived. The findings were striking. The same individuals performed significantly worse on cognitive tests when they were financially stressed. The gap was equivalent to losing up to thirteen IQ points. Their intelligence had not changed. What had changed was the mental burden of financial worry.

 

The conclusion was simple but profound: poverty drains not only income but also mental bandwidth.

Once this insight is understood, African economic life appears in a different light. Across our societies, financial stress is rarely private. African life is structured around openness (to family, kinship networks, community, and church). Even small signs of stability attract moral expectations to support others. These expectations are rooted in solidarity and shared survival, and they have sustained communities for generations.

Yet, they also carry a hidden cost.

Demands placed on individuals often exceed their income and capacity. Support given rarely satisfies expectations, not because people are ungrateful, but because need itself is deep and widespread. The result is a quiet but persistent form of cognitive taxation. When one individual becomes responsible for many problems they cannot realistically solve, the mind is left permanently managing emergencies.

Individually, it is very difficult to resist these pressures. Saying no feels unchristian. Setting limits appears selfish. Yet constant exposure to open-ended demands leaves little mental space for planning, growth, or peace.

This is where organised social groups matter in a way that is often misunderstood.  Groups such as Munno Daala are effective not only because they pool resources, but because they create structure. Membership defines contributions, expectations, and beneficiaries. It gives individuals a socially legitimate basis to say, "I am already committed elsewhere." In highly communal societies, this matters enormously.

Such groups reduce the cognitive burden imposed by limitless demands. They allow individuals to focus support within a small, defined circle where reciprocity is clearer, and assistance is assured in times of crisis. This does not mean abandoning family or community. It means preventing one household from being overwhelmed by unbounded expectations.

At this point, it is important to be clear: this argument is not against generosity. In fact, it is deeply aligned with the Christian understanding of giving.

Scripture teaches generosity that is intentional, structured, and sustainable. In Deuteronomy 24:17–22, God commands the Israelites not to harvest everything, but to leave what is missed for the foreigner, the fatherless, and the widow. This was not reckless giving; it was a carefully designed system. The harvest itself was secured, boundaries were clear, beneficiaries were specified, and dignity was preserved through work rather than dependence.

Biblical hospitality was never about limitless personal obligation. It was about creating social arrangements that protected both the vulnerable and the giver. The goal was not to exhaust households, but to reflect God’s generosity in ways that sustained the community over time.

Seen this way, organised groups like Munno Daala are not unchristian alternatives to generosity; they are modern expressions of biblical wisdom. They allow people to remain generous without collapsing under cognitive and financial overload. They preserve mental space, and mental space is necessary for discernment, compassion, and faithfulness.

Trusting God’s provision does not require abandoning wisdom. Scripture consistently links generosity with prudence, planning, and stewardship. A constantly overwhelmed mind struggles not only to plan economically but also to love well.

Reducing cognitive overload, therefore, is not selfish: it is responsible. It allows individuals to think clearly, strengthen their households, and ultimately increase their capacity to support others meaningfully.

The Oxford study leaves us with a lesson that resonates deeply with both economics and faith. Intelligence is not scarce in Africa; mental bandwidth is. That bandwidth is depleted not only by poverty itself, but by poorly timed obligations, stacked pressures, and repeated shocks without buffers.

If we want stronger families, wiser decisions, and more resilient communities, we must pay attention not only to income, but also to how expectations and responsibilities are structured. Progress will not come only from earning more, but from fewer emergencies, clearer boundaries, and stronger communal systems.

Poverty, it turns out, is not just about how little we have. It is also about how much of our mind it takes away, and how wisely we steward what remains.

Saturday, 13 December 2025

 

When AI Gets It Wrong and Why That Shouldn’t Scare Us

By Richard Sebaggala (PhD)

Stories about lawyers being “caught using AI wrongly” have become a familiar feature of professional headlines. One recent case in Australia illustrates the pattern. A King’s Counsel, together with junior counsel and their instructing solicitor, was referred to state disciplinary bodies after artificial intelligence–generated errors were discovered in court submissions. The documents contained fabricated or inaccurate legal references—so-called hallucinations—which were not identified before filing. When the court sought an explanation, the responses were unsatisfactory. Costs were awarded against the legal team, and responsibility for the errors became a matter for regulators.

The episode was widely reported, often with a tone of alarm. Artificial intelligence, the implication ran, had intruded into the courtroom with damaging consequences. The lesson appeared obvious: AI is unreliable and should be kept well away from serious professional work.

That conclusion, however, is too simple—and ultimately unhelpful. The problem in this case was not that artificial intelligence produced errors. It was that its output was treated as authoritative rather than provisional. What failed was not the technology itself, but the assumptions made about what the technology could do.

Hallucinations are not moral lapses, nor are they merely the result of careless users. They are a structural limitation of current large language models, arising from how these systems are built and trained. Even developers acknowledge that hallucinations have not been fully eliminated. To frame such incidents as scandals is to overlook a more productive question: how should AI be used, and where should it not be trusted?

A small experiment of my own makes the point more clearly. I recently asked ChatGPT to convert students’ group course marks into an Excel-style table, largely to avoid the tedium of manual data entry. The task involved nothing more than copying names, registration numbers, and marks into a clean, structured format. The result looked impeccable at first glance—neatly aligned, professionally presented, and entirely plausible. Yet closer inspection revealed several errors. Registration numbers had been swapped between students, and in some cases, marks were attributed to the wrong individuals, despite the original data being correct.
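
Errors of this kind are easy to miss by eye but trivial to catch with a deterministic cross-check. Below is a minimal sketch of the idea in Python, assuming the original gradebook and the AI-restructured table have been saved as CSV files; the file names and column names (name, reg_no, mark) are placeholders, not the actual class records.

```python
import pandas as pd

# Hypothetical files: the original gradebook and the AI-restructured table.
original = pd.read_csv("original_marks.csv")      # columns: name, reg_no, mark
ai_table = pd.read_csv("ai_generated_table.csv")  # same columns, produced by the model

# Join on registration number and compare the fields the model was supposed to copy verbatim.
merged = original.merge(
    ai_table, on="reg_no", suffixes=("_orig", "_ai"), how="outer", indicator=True
)

problems = merged[
    (merged["_merge"] != "both")                    # registration numbers missing or invented
    | (merged["name_orig"] != merged["name_ai"])    # names attached to the wrong number
    | (merged["mark_orig"] != merged["mark_ai"])    # marks attributed to the wrong student
]

if problems.empty:
    print("AI table matches the source records.")
else:
    print(f"{len(problems)} discrepancies found:")
    print(problems[["reg_no", "name_orig", "name_ai", "mark_orig", "mark_ai"]])
```

The script is not sophisticated, and that is the point: verification of structured data should be done by something that compares records exactly, not by the same probabilistic system that produced them.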

When I queried why such mistakes had occurred, given the simplicity of the task, the answer lay in how AI systems operate. These models do not “see” data as humans do. They do not inherently track identity, ownership, or factual relationships unless those constraints are explicitly imposed. Instead, they generate text by predicting what is most likely to come next, based on patterns absorbed during training.

When faced with structured material—tables, grades, legal citations, or names linked to numbers—the system tends to prioritise surface coherence over factual precision. The output looks right, but there is no internal mechanism verifying consistency or truth. This is the same dynamic that produced fabricated case citations in the King’s Counsel matter, and it is why hallucinations also appear in academic references, medical summaries, and financial reports.

Language models are not databases, nor are they calculators. They generate language probabilistically. When asked to reproduce or reorganise factual information, they may quietly reshape it, smoothing entries or rearranging details in ways that make linguistic sense but undermine accuracy. The problem is compounded by the absence of an internal truth-checking function. Unless an AI system is deliberately connected to verified external sources—databases, spreadsheets, citation tools—it has no reliable way of knowing when it is wrong. Confidence, in this context, is meaningless.

The risk increases further when many similar elements appear together. Names, numbers, and references can blur, particularly in long or complex prompts. That is what happened in my grading exercise and what appears to have happened in the legal case. Add to this the way such systems are trained—rewarded for producing answers rather than declining to respond—and the persistence of hallucinations becomes easier to understand. Faced with uncertainty, the model will usually generate something rather than admit ignorance.

This is why the lawyers involved did not err simply by using AI. They erred by relying on its output without independent verification. The same risk confronts lecturers, accountants, doctors, policy analysts, and researchers. In all these fields, responsibility does not shift to the machine. It remains with the professional.

Used properly, artificial intelligence is a powerful tool. It excels at drafting, organising ideas, summarising material, and reducing the burden of repetitive work. It can free time for deeper thinking and better judgment. Where it remains weak is in factual custody, precise attribution, and tasks where small errors carry serious consequences. Confusing these roles is what turns a useful assistant into a liability.

The lesson to draw from recent headlines is therefore not that AI should be avoided. It is that its limits must be understood. AI can work alongside human judgment, but it cannot replace it. When that boundary is respected, the technology becomes a collaborator rather than a shortcut—an amplifier of human reasoning rather than a substitute for it.

Fear, in this context, is the wrong response. What is needed instead is literacy: a clear-eyed understanding of what AI can do well, what it does poorly, and where human oversight is indispensable. The gains on offer—in productivity, creativity, and learning—are too substantial to be dismissed on the basis of misunderstood failures.

Sunday, 7 December 2025

 

Marriage Isn’t Dying in Africa—It’s Being Constrained

By Richard Sebaggala (PhD)

 

Today, while preparing to publish my weekly article on the economics of artificial intelligence, I came across an article titled “What’s Killing Marriage—Unmarriageable Men or Liberal Women?” by Maria Baer and Brad Wilcox. The article draws on new US survey evidence showing a sharp decline in women’s confidence in marriage. According to recent Pew data, the share of 12th-grade girls in the United States who say they expect to get married one day has fallen from 83 percent in 1993 to 61 percent in 2023—a drop of more than twenty percentage points in just three decades. Over the same period, young men’s desire for marriage has remained relatively stable at around 75 percent.

 

These figures are striking. They suggest that marriage in the United States is increasingly being questioned not because men have abandoned it, but because many women no longer believe it will improve their lives. Among liberal women in particular, marriage and childbearing now rank low compared to priorities such as career fulfillment, financial security, and emotional well-being. This shift has generated intense debate in the West about whether the decline of marriage reflects male economic decline, feminist ideology, cultural change, or the corrosive influence of digital technology.

 

Reading this, I found myself asking a different question. If the decline of marriage is described as “alarming” in a high-income country like the United States, with relatively strong institutions and social safety nets, what does the marriage story look like in Africa?

 

After stepping back and considering both data and lived realities across African societies, it becomes clear that Africa is not experiencing the same phenomenon. The weakening of marriage in Africa does not stem from a loss of faith in the institution itself. In fact, most African women still express strong aspirations for marriage and family life. Marriage remains central to social identity, respectability, and life meaning across cultures, religions, and income groups. Yet marriage is increasingly delayed, informal, or avoided in practice. This apparent contradiction points to a different underlying problem.

 

What Africa is facing is not ideological rejection, but what can best be described as aspirational frustration. Many women want marriage, but they do not want poor-quality marriages. Expectations around economic stability, emotional maturity, mutual respect, and security have risen, while the conditions necessary to meet those expectations have deteriorated. As a result, marriage is postponed, approached cautiously, or entered into only under strict conditions. It is not rejected in principle, but deferred in practice.

 

A central driver of this frustration lies in the political economy of male economic vulnerability. Much like in the United States, African public discourse often speaks of a shortage of “marriageable men.” But in Africa, this is less a story of cultural malaise and more one of structural exclusion. High youth unemployment, widespread informality, unstable incomes, and delayed economic independence make it difficult for many men to meet longstanding expectations of provision and responsibility. Since marriage in many African societies remains closely tied to economic readiness, this instability increases the risks associated with formal unions, especially for women who disproportionately bear the long-term costs of household failure and childrearing.

 

Digital technology is also reshaping expectations, though in a different way from the West. Rather than fueling explicit ideological opposition to marriage, social media in Africa rapidly globalizes lifestyles, aspirations, and relationship ideals. Exposure to highly curated images of success, romance, and consumption raises expectations faster than incomes grow and institutions adapt. Traditional responsibilities remain in place, modern aspirations multiply, and economic capacity lags behind both. The result is growing dissatisfaction not with marriage itself, but with the likelihood of achieving a version of marriage that feels stable and dignified.

 

Importantly, marriage in Africa still correlates strongly with well-being when it works. Evidence from household surveys and well-being studies consistently shows higher life satisfaction among those in stable unions, especially where economic stress and conflict are limited. However, as marital quality deteriorates under economic and social strain, the benefits of marriage weaken. For many women, delaying marriage becomes a rational strategy to avoid long-term vulnerability rather than a rejection of family life.

 

This means that marriage in Africa is not dying; it is being constrained. It is gradually shifting from an expected life stage to a high-stakes decision, from a collective institution to an individual risk calculation. If current trends continue without meaningful economic and institutional reform, the likely outcomes are continued delays in formal marriage, growth of informal and unstable unions, and increasing single parenthood driven not by ideology, but by constrained choices.

 

The contrast with the United States is therefore crucial. While many women in the US are losing faith in marriage as an institution—clearly reflected in the sharp decline in stated desire to marry—many women in Africa still believe in marriage but cannot find the conditions that make it viable. Africa’s marriage challenge is not primarily about values or belief. It is about economics, employment, and the widening gap between aspirations and lived realities.

 

Conditions, unlike beliefs, can be changed. But doing so requires moving beyond moral panic and imported culture wars, and instead treating marriage as part of Africa’s broader social and economic infrastructure. If we are willing to confront the structural roots of aspirational frustration, marriage in Africa remains a resilient institution—not because people are clinging to it blindly, but because they are still waiting for it to work.

Thursday, 27 November 2025

 

AI’s Misallocation Paradox: High Adoption, Low Impact

 By Richard Sebaggala (PhD)

 

When the Washington Post recently analysed 47,000 publicly shared ChatGPT conversations, the findings revealed something both intriguing and troubling from an economic standpoint. Despite hundreds of millions of weekly users, most interactions with AI remain small, superficial, and low stakes. People turn to it for casual fact-checking, emotional support, relationship advice, minor drafting tasks, and personal reflections. What stands out is how little of this activity involves the kinds of tasks where AI is genuinely transformative—research, modelling, academic writing, teaching preparation, data analysis, supervision, and professional decision-making.

 

For economists, this pattern immediately recalls the familiar distinction between having technology and using it productively. The issue is not that AI lacks capability; it is that society is allocating this new form of cognitive capital to low-return activities. In classic economic terms, this is a misallocation problem. A technology designed to augment reasoning, accelerate knowledge production, and expand human capability is being deployed primarily for conversations and conveniences that generate almost no measurable productivity gains.

 

This conclusion is not only supported by the Washington Post’s dataset; it is something I encounter repeatedly in practice. Over the past two years, as I have conducted AI-literacy workshops, supervised research, and written about AI’s role in higher education, I have often been struck by the kinds of questions people ask. They tend to revolve around the most basic aspects of AI: What is AI? Will it replace teachers? How can I eliminate AI from my work? These questions do not reflect curiosity about using AI for complex professional or analytical work; instead, they reveal uncertainty about where to even begin. Many participants—professionals and academics included—have never attempted to use AI for deep reasoning, data analysis, literature synthesis, curriculum design, or research supervision. When I think about how transformative AI can be in teaching, research, and analytical work, I am often frustrated because it feels as though we are sitting on an intellectual gold mine, yet many people do not realise that the gold is there.

 

This personal experience is fully consistent with the Washington Post findings. Fewer than one in ten of the sampled conversations involved anything resembling technical reasoning or serious academic engagement. Data analysis was almost entirely absent. Interactions that could have strengthened research, teaching, policymaking, or organisational performance were overshadowed by uses that, while understandable on a human level, contribute little to economic or educational transformation. The bottleneck here is not technological capacity but human imagination and institutional readiness.

 

Several factors help explain why this misallocation persists. Many users simply lack the literacy to see AI as anything more than a conversational tool. Habits shaped in a pre-AI world also remain dominant: students still search manually, write from scratch, and labour through tasks that AI could meaningfully accelerate. Institutions are even slower to adapt. Universities, schools, government agencies, and workplaces continue to operate with old structures, old workflows, and outdated expectations, even as they claim to “adopt” AI. When technology evolves faster than institutional culture, capability inevitably sits idle.

 

Economists have long demonstrated that new technology produces productivity gains only when complementary capabilities are in place. Skills must evolve, organisational routines must adapt, and incentives must shift. Without these complements, even the most powerful general-purpose technologies generate only modest results. AI today fits this pattern almost perfectly. It has been adopted widely but absorbed shallowly.

 

This gap between potential and practice is especially relevant for Africa. The continent stands to benefit enormously from disciplined, high-value use of AI—particularly in strengthening research output, expanding supervision capacity, enhancing data-driven policymaking, improving public-sector performance, and enriching teaching and curriculum design. Many of Africa’s longstanding constraints—limited supervision capacity, slow research processes, weak analytical infrastructure—are precisely the areas where AI can make the most difference. Yet the prevailing pattern mirrors global trends: high adoption for low-value tasks and minimal use in areas that matter most for development.

 

Ultimately, the impact of AI will depend less on the technology itself and more on how societies choose to integrate it into their high-value activities. The real opportunity lies in shifting AI from consumption to production—from a tool of conversation to a tool of analysis, reasoning, modelling, and knowledge creation. This requires deliberate investment in AI literacy, institutional redesign, and a cultural shift in how we think about teaching, research, and professional work.

 

The paradox is clear: adoption is high, yet impact remains low because the technology is misallocated. The task ahead is not to wait for “more advanced” AI, but to use the AI we already have for the work that truly matters. Only then will its economic and educational potential be realised.

Thursday, 20 November 2025

 

When Intelligence Stops Mattering: The Economics of Attention in the AI Era

 By Richard Sebaggala (PhD)

 

 If you spend enough time teaching university students or supervising research in Uganda, you begin to notice something that contradicts the story we were all raised on. The brightest people do not always win. Some of the most intellectually gifted students drift into ordinary outcomes, while those labelled as average quietly build remarkable lives. It is one of those puzzles that economists enjoy because it challenges the assumption that intelligence is destiny. The truth is more uncomfortable: beyond a certain level, intelligence stops being the thing that separates people.

 

This idea is not new. In 1921, psychologist Lewis Terman selected more than 1,500 exceptionally intelligent children, convinced they would become the Einsteins, Picassos, and Da Vincis of modern America. Today, his famous “Termites” study reads almost like a cautionary tale. These children had extraordinary IQ scores, superior schooling, and strong early promise. Yet most lived ordinary, respectable lives. A few became professionals, but none went on to reshape the world in the way their intelligence suggested they might. The outcome was not what Terman expected. It revealed a principle that economists immediately recognise: a factor that is no longer scarce loses its power to generate outsized results. In this case, intelligence reached a point of diminishing marginal returns. After a moderate threshold, more IQ did not produce more achievement.

 

Threshold Theory emerged from this insight. It suggests that once someone has “enough” intelligence to understand and navigate the world, their long-term success depends far more on consistency, deliberate practice, and attention to detail. In other words, it is the boring habits that win, not the brilliance. You can see this in the lives of people like Isaac Asimov, who published more than 500 books not because he had superhuman intelligence but because he wrote every day. Picasso, often celebrated as a natural genius, produced an estimated 20,000 works, and that relentless productivity was responsible for his influence far more than any single stroke of innate talent.

 

These patterns appear clearly in our context as well. In my teaching and supervision, the student who simply shows up, writes a little every day, reflects regularly, and keeps refining their work eventually surpasses the student who delivers occasional bursts of brilliance but lacks rhythm. It is the slow, steady accumulation of effort that compounds over time. It is difficult to accept this truth because dedication feels less glamorous than talent, yet it explains far more about real outcomes.

 

This brings us to the present era where artificial intelligence has rewritten the economics of human capability. A century after Terman, we live in a world where tools like ChatGPT and Claude have made cognitive ability widely accessible. An undergraduate in Gulu can now generate summaries, explanations, models, and arguments that once required years of academic experience. AI has lifted almost everyone above the old intelligence threshold. The scarcity has shifted. Intelligence is no longer the differentiator. The new constraint is attention.

 

Attention is fast becoming a rare commodity. While knowledge is infinite, the real challenge isn't access, but sustained focus. In my online classes, students are often managing dozens of tabs, buzzing phones, and multiple background conversations, leading to fragmented concentration. They skim rather than read, and jump between tasks without reflection. The deepest poverty of our generation is no longer information poverty, but attentional poverty. In economic terms, focus is emerging as the new source of comparative advantage.

 

This phenomenon matters even more for African learners and institutions. The continent does not suffer from a shortage of intelligent people. What we struggle with are the habits that make intelligence useful: sustained concentration, deliberate practice, refinement, and a culture that values slow thinking as much as quick recall. Our education systems often reward memorisation, not reasoning. Our learners tend to fear discomfort instead of embracing it as part of growth. And when AI enters such an environment, it does not fix these gaps. It magnifies them. A distracted student given AI becomes even more distracted, because the illusion of shortcuts becomes stronger. But a focused student who pairs AI with discipline suddenly becomes incredibly productive.

 

This is where Threshold Theory becomes deeply relevant for the AI age. If intelligence is widespread and cheap, and AI has lifted everyone above the threshold, then the difference between people will increasingly come from their habits. The human work now is to protect attention, practise something meaningful every day, use AI to expand thinking rather than avoid effort, build routines that compound, and stay curious long after others settle into laziness. AI can assist with reasoning, but it cannot replace judgment, contextual understanding, ethical interpretation, or the capacity to sustain deep effort. These remain profoundly human strengths.

 

In the end, genius is slowly shifting from something you are born with to something you practise. AI gives everyone the same starting point. Discipline and attention determine the destination. The real question for each of us, especially in Africa where the opportunity is enormous but unevenly captured, is simple: what will you do with your attention?

Wednesday, 12 November 2025

 

Seeing the Whole System: How Economists Should Think About AI 

By Richard Sebaggala (PhD)

Recently, while reading an article from The Economist debating whether economists or technologists are right about artificial intelligence, I found myself uneasy with how both camps framed the issue. Economists, true to their discipline, approached AI with caution. Erik Brynjolfsson of Stanford has long argued that “technology alone is never enough,” reminding us that productivity gains arise only when organizations redesign workflows, invest in skills, and realign incentives. Daron Acemoglu at MIT makes a similar point when he notes that “there is nothing automatic about new technologies bringing shared prosperity.” These warnings echo a familiar historical pattern: earlier general-purpose technologies, whether electricity, computers, or the internet, took decades before their full impact on productivity materialized.

Technologists, on the other hand, describe AI as a decisive break from that past. Sam Altman, CEO of OpenAI, has called AI “the most important technology humanity has ever developed,” emphasizing the speed and magnitude of socioeconomic disruption that may follow. Jensen Huang of NVIDIA goes even further, claiming we are “at the beginning of a new industrial revolution” driven by accelerated computing and machine intelligence. For thinkers in this camp, AI is not merely another digital tool, but a system endowed with reasoning capabilities that can automate cognitive functions once reserved for humans.

 

Both perspectives carry important truths, yet each misses a critical dimension. The debate often assumes AI is a self-contained phenomenon, detached from the digital infrastructure on which it actually operates. In reality, AI does not replace the computer or the internet; it builds on them. It exists because of them. The more productive question, therefore, is not how powerful AI is in isolation, but what happens when the billions of people who already use computers and the internet begin to work, learn, and think with AI assistance embedded in their daily routines.

From a pragmatic perspective, this framing changes everything. Pragmatism, unlike optimism or scepticism, asks what works, for whom, and under what conditions. It is concerned less with prediction and more with functionality. A pragmatic economist sees technology as capital whose productivity depends on how it is organized and incentivized within an institutional system. A pragmatic technologist, in turn, recognises that adoption depends on human adaptation—habits, trust, and training. The convergence of these two sensibilities produces a more grounded understanding of AI: not as a revolutionary force that will automatically transform society, but as an evolutionary layer that extends the power of existing digital infrastructures.

In my own thinking, the most useful way to understand AI is to see it as “computer plus internet plus intelligence.” This perspective recognizes that every technological breakthrough builds on the foundations laid by earlier digital layers. Computers automated calculation and data processing. The internet automated connectivity and access. AI now automates reasoning, prediction, and creation. Seen this way, AI is not an isolated revolution but the next evolutionary layer in a long digital continuum. The computer and internet revolution required complementary investments in education, governance, and organizational design before their full economic effects could materialize. When computers became widespread, firms had to reorganize workflows and hire IT specialists. When the internet emerged, they had to create digital marketing, cybersecurity, and logistics functions. The same will hold for AI: productivity gains will depend not merely on access to algorithms but on how societies redesign work, education, and decision-making to make intelligent tools genuinely useful.

 

This logic holds particular relevance for Africa. The continent’s technological progress has always been characterised by pragmatic adaptation rather than linear imitation. The success of mobile money, for instance, emerged not from cutting-edge infrastructure but from creatively reconfiguring existing resources to solve pressing coordination problems. In the same way, the potential of AI in African contexts may depend less on hardware and more on cognitive integration—how intelligently people and institutions use the tools already within reach. A university lecturer with a laptop, stable internet, and access to ChatGPT represents a new kind of productivity unit: a human–AI partnership capable of reimagining teaching, research, and supervision. But this transformation will not occur automatically; it requires investment in AI literacy, ethical awareness, and institutional readiness.

The economists are correct that such transformations take time. Every general-purpose technology has exhibited a lag between invention and impact, as economies struggle through a reorganization phase before productivity surges. But the technologists are equally right about the scope of change. Unlike earlier digital tools that mechanized physical or transactional processes, AI extends automation into the cognitive realm. It can assist in writing, designing, diagnosing, predicting, and problem-solving. It changes not only the speed of work but its very composition. The synthesis of both positions yields a pragmatic insight: AI’s short-term effects are often overestimated, but its long-term restructuring power is profoundly underestimated. The path to productivity follows a J-curve, with initial disruption followed by enduring dividends.

 

To think like an economist in the age of AI is to resist both technological euphoria and excessive caution. It is to examine incentives, complementarities, and institutional conditions rather than merely forecasting growth or disruption. The central question is not whether AI will replace human labour but how humans will reorganize around intelligence. The transformative potential of AI lies not in replacing human reasoning but in amplifying it, turning disciplined thought into augmented creativity.

This perspective is especially vital for developing regions where digital infrastructure already exists but underperforms. The challenge is to build the human and institutional complements that convert computational power into social and economic value. As teachers, researchers, and policymakers, the task is not to wait for AI to be perfected elsewhere but to make it work within our realities—to make AI ready for us. That is what it means to think pragmatically and, indeed, to think like an economist.

Tuesday, 4 November 2025

Beyond the Turing Test: Where Human Curiosity Meets AI Creation

 

By Richard Sebaggala (PhD)

A few weeks ago, while attending a validation workshop, I had an engaging conversation with an officer from Uganda’s Ministry of Local Government. She described a persistent puzzle they have observed for years: why do some local governments in Uganda perform exceptionally well in local revenue collection while others, operating under the same laws and using the same digital systems, remain stagnant? It was not a new question, but the way she framed it revealed both urgency and frustration. Despite years of administrative reforms and data-driven initiatives, no one had found a clear explanation for the variation.

The question stayed with me long after the workshop ended. As a researcher and supervisor of graduate students, I have been working closely with one of my students who is studying the relationship between technology adoption and revenue performance. We recently obtained data from the Integrated Revenue Administration System (IRAS) and other public sources that could potentially answer this very question. On my journey to Mbarara, I decided to explore it further. I opened my laptop on the bus and began a conversation with an AI model to see how far it could help me think through the problem. What happened next became a lesson in how human curiosity and artificial intelligence can work together to deepen understanding.

The exchange reminded me of an ongoing debate that has been rekindled in recent months around the legacy of the Turing test. In 1950, the British mathematician Alan Turing proposed what he called the “imitation game”, an experiment to determine whether a computer could imitate human conversation so convincingly that a judge could not tell whether they were speaking to a person or a machine. For decades, this thought experiment has shaped how we think about machine intelligence. Yet, as several scientists recently argued at a Royal Society conference in London marking the 75th anniversary of Turing’s paper, the test has outlived its purpose.

 

At the meeting, researchers such as Anil Seth of the University of Sussex and Gary Marcus of New York University challenged the assumption that imitation is equivalent to intelligence. Seth urged that instead of measuring how human-like machines can appear, we should ask what kinds of systems society actually needs and how to evaluate their usefulness and safety. Marcus added that the pursuit of so-called “artificial general intelligence” may be misplaced, given that some of the most powerful AI systems (like DeepMind’s AlphaFold) are effective precisely because they focus on specific, well-defined tasks rather than trying to mimic human thought. The discussion, attended by scholars, artists, and public figures such as musician Peter Gabriel and actor Laurence Fishburne, represented a turning point in how we think about the relationship between human and artificial cognition.

Patterning and Parallax Cognition

It was against this backdrop that I found myself conducting an experiment of my own. When I asked ChatGPT why certain districts in Uganda outperform others in local revenue collection, the system responded not with answers, but with structure. It organised the problem into measurable domains: performance indicators such as revenue growth and taxpayer expansion; institutional adaptability reflected in IRAS adoption, audit responsiveness, and staff capacity; and governance context including political alignment and leadership stability. It even suggested how these could be investigated through a combination of quantitative techniques (panel data models, difference-in-differences estimation, and instrumental variables) and qualitative approaches like process tracing or comparative case analysis.

 

What the AI provided was not knowledge in itself but an architectural framework for inquiry. It revealed patterns that a researcher might take days or weeks to discern through manual brainstorming. Within a few minutes, I could see clear analytical pathways: which variables could be measured, how they might interact, and which data sources could be triangulated. It was a vivid demonstration of what John Nosta has called parallax cognition—the idea that when human insight and machine computation intersect, they produce cognitive depth similar to how two eyes create depth of vision. What one eye sees is never exactly what the other perceives, and it is their combination that produces true perspective. I am beginning to think that, in work-related terms, many of us have been operating for years with only one eye (limited by time, inadequate training, knowledge gaps, weak analytical grounding, and sometimes by poor writing and grammatical skills). Artificial intelligence may well be the second eye, enabling us to see problems and possibilities in fuller dimension. This should not be taken lightly, as it changes not only how knowledge is produced but also how human potential is developed and expressed.

The Human Contribution: Depth and Judgement

However, seeing with two eyes is only the beginning; what follows is the act of making sense of what is seen. Patterns alone do not create meaning, and once the scaffolding is in place, it becomes the researcher’s task to interpret and refine it. I examined the proposed research ideas and variables, assessing which reflected genuine institutional learning and which were merely bureaucratic outputs. For example, staff training frequency reveals more about adaptive capacity than the mere number of reports filed. I also adjusted the proposed econometric models to suit Uganda’s data realities, preferring fixed-effects estimation over pooled OLS to account for unobserved heterogeneity among districts. Each decision required contextual knowledge and an appreciation of the political dynamics, administrative cultures, and data constraints that shape local government operations.
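
To make that choice concrete, here is a minimal sketch of the comparison in Python using the linearmodels package; the file name and variable names (district, year, log revenue, IRAS adoption, and so on) are hypothetical placeholders rather than the actual study variables.

```python
import pandas as pd
import statsmodels.api as sm
from linearmodels.panel import PooledOLS, PanelOLS

# Hypothetical district-year panel assembled from IRAS and other public sources.
df = pd.read_csv("district_revenue_panel.csv")
df = df.set_index(["district", "year"])  # linearmodels expects an entity-time MultiIndex

y = df["log_local_revenue"]                                            # assumed outcome
X = df[["iras_adoption", "staff_trainings", "audit_responsiveness"]]   # assumed covariates

# Pooled OLS treats all district-years as interchangeable observations.
pooled = PooledOLS(y, sm.add_constant(X)).fit(cov_type="clustered", cluster_entity=True)

# District fixed effects absorb time-invariant, unobserved heterogeneity
# (location, historical tax culture, administrative legacy).
fixed = PanelOLS(y, X, entity_effects=True).fit(cov_type="clustered", cluster_entity=True)

print(pooled.summary)
print(fixed.summary)
```

If the fixed-effects estimates differ markedly from the pooled ones, that gap is itself evidence that time-invariant district characteristics, and not only the observable policy variables, are driving revenue performance.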

 

This is where the collaboration between human and machine became intellectually productive. The AI contributed breadth (its ability to draw quickly from a vast array of statistical and conceptual possibilities). The human side provided depth (the judgement needed to determine what was relevant, credible, and ethically grounded). The process did not replace thinking; it accelerated and disciplined it. It transformed a loosely defined curiosity into a structured, methodologically sound research design within the space of a single journey.

The Future of Human–Machine Interaction

Reflecting on this experience later, I realised how it paralleled the arguments made at the Royal Society event. The real value of AI lies not in its capacity to imitate human reasoning, but in its ability to extend it. When aligned with human purpose, AI becomes an amplifier of curiosity rather than a substitute for it. This partnership invites a new kind of research practice (one that moves beyond competition between human and machine and towards complementarity).

For researchers, especially those in data-rich but resource-constrained environments, this shift carries significant implications. AI can help reveal relationships and structures that are easily overlooked when working alone. But it cannot determine what matters or why. Those judgements remain uniquely human, grounded in theory, experience, and ethical responsibility. In this sense, AI functions as a mirror, reflecting our intellectual choices back to us, allowing us to refine and clarify them.

The experience also challenged me to reconsider how we define intelligence itself. The Turing test, for all its historical importance, measures imitation; parallax cognition measures collaboration. The former asks whether a machine can fool us; the latter asks whether a machine can help us. In a world where AI tools increasingly populate academic, policy, and professional work, this distinction may determine whether technology deepens understanding or simply accelerates superficiality.

My brief encounter with AI on a bus to Mbarara became more than an experiment in convenience; it became a lesson in the epistemology of research. The system identified what was invisible; I supplied what was indispensable. Together, we achieved a kind of cognitive depth that neither could reach alone. This is the real future of human–machine interaction: not imitation, but illumination; not rivalry, but partnership.

If the death of the Turing test marks the end of one era, it also signals the beginning of another. The new measure of intelligence will not be how convincingly machines can pretend to be human, but how effectively they can collaborate with humans to generate insight, solve problems, and expand the boundaries of knowledge. The task before us, as researchers and educators, is to embrace this partnership thoughtfully, to ensure that in gaining computational power, we do not lose intellectual purpose.

Sunday, 19 October 2025

 

Don’t Blame the Hammer: How Poor Use of AI Tools Reveals Deeper Competence Gaps

By Richard Sebaggala (PhD)

When Deloitte was recently exposed for submitting a government report in Australia filled with fabricated citations produced by Azure OpenAI's GPT-4o, the headlines quickly became accusatory. Commentators framed it as yet another failure of artificial intelligence: a cautionary tale about machines gone wrong. However, that interpretation misses the essence of what happened. AI did not fail; people did. The Deloitte incident is not evidence that the technology is unreliable, but that its users lacked the skill to use it responsibly. Like every tool humanity has invented, artificial intelligence merely amplifies the quality of its handler. It does not make one lazy, careless, or mediocre; it only exposes those qualities if they are already present.

 

Generative AI is a mirror. It reflects the discipline, understanding, and ethics of the person behind the keyboard. The Deloitte report was not the first time this mirror has revealed uncomfortable truths about modern knowledge work. Many professionals, consultants, and even academics have quietly adopted AI tools to draft documents, write proposals, or summarise literature, yet few invest time in learning the principles of proper prompting, verification, and validation. When errors emerge, the reflex is to blame the tool rather than acknowledge the absence of rigour. But blaming the hammer when the carpenter misses the nail has never built a better house.

The fear surrounding hallucinations (AI’s tendency to produce plausible but false information) has become the favourite defence of those unwilling to adapt. Yes, hallucination remains a legitimate limitation of large language models. These systems predict language patterns rather than verify factual accuracy, and early versions of ChatGPT frequently produced citations that did not exist. Yet the scale of that problem has fallen sharply. Current models generate fewer hallucinations, and most can be avoided through simple measures: specifying reliable data sources, directing the model to cite only verifiable material, and performing a quick cross-check on references. The issue is not that AI cannot think; it is that many users do not.
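
Part of that cross-check can even be scripted. As one illustration, and using the public Crossref API rather than Google Scholar, a few lines of Python can flag DOIs that do not resolve to any registered work; the DOIs below are placeholders to be replaced with the references from one's own draft.

```python
import requests

# Placeholder DOIs; replace with the references extracted from your own draft.
dois = [
    "10.1000/example.one",
    "10.1000/example.two",
]

for doi in dois:
    # Crossref returns 200 with work metadata for registered DOIs, 404 otherwise.
    resp = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if resp.status_code == 200:
        title = resp.json()["message"].get("title", ["(no title recorded)"])[0]
        print(f"OK     {doi}: {title}")
    else:
        print(f"CHECK  {doi}: not found in Crossref; verify manually before citing")
```

A lookup like this catches only one class of error, namely DOIs that do not exist; matching titles, authors, and page numbers to the claims they are meant to support still requires a human reader.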

 

I saw this firsthand while revising a research manuscript with a co-author. Several references generated in earlier drafts (while we were expanding our theoretical background) turned out to be incomplete or fabricated. Instead of blaming the tool, we treated the discovery as a test of our own academic discipline: cross-verifying every citation through Google Scholar and refining our theoretical background until it met publication standards. The experience reinforced a simple truth: AI is not a substitute for scholarly rigour; it is a magnifier of it.

In the process, I also discovered that when one identifies gaps or errors in AI-generated outputs and explicitly alerts the system, it responds with greater caution and precision. It not only corrects itself but often proposes credible alternative sources through targeted search. Over time, I have learned to instruct AI during prompting to be careful, critical, and to verify every fact or reference it cites. This practice has consistently improved the quality and reliability of its responses, turning AI from a speculative assistant into a more disciplined research partner.
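
Those standing instructions can be written directly into the prompt itself. The sketch below shows one way to do it with the OpenAI Python SDK; the model name and the exact wording are my own assumptions, and the same idea carries over to any chat interface.

```python
from openai import OpenAI

client = OpenAI()  # assumes an OPENAI_API_KEY environment variable is set

# A hypothetical system instruction that bakes in the verification habit described above.
system_msg = (
    "You are a careful research assistant. Cite only sources you can identify with "
    "full bibliographic details. If you cannot verify a claim or a reference, label "
    "it clearly as UNVERIFIED rather than presenting it as fact."
)

user_msg = (
    "Draft a short overview of what is known about hallucination in large language "
    "models, and list your sources separately at the end."
)

response = client.chat.completions.create(
    model="gpt-4o",  # assumed model name; use whichever model is available to you
    messages=[
        {"role": "system", "content": system_msg},
        {"role": "user", "content": user_msg},
    ],
)

print(response.choices[0].message.content)
```

Such instructions reduce, but do not remove, the need for the verification steps described above; they make the model more cautious, not infallible.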

This pattern is not new. Each technological leap in the history of work has produced the same anxiety and the same blame. When calculators arrived, some accountants abandoned mental arithmetic. When Excel spread through offices, others stopped understanding the logic of their formulas. When search engines became ubiquitous, some students stopped reading beyond the first page of results. Every generation confronts a moment when a tool reveals that the real deficit lies not in technology but in human effort. Artificial intelligence is simply the latest example.

The responsible use of AI therefore depends on three habits that serious professionals already practise: clarity, verification, and complementarity. Clarity means knowing exactly what one is asking for: just as an economist designs a model with clear variables and assumptions, a user must frame a prompt with precision and boundaries. Verification requires treating every AI output as a hypothesis, not a conclusion, and testing it against credible data or literature. Complementarity is the understanding that AI is a collaborator, not a substitute. The most capable researchers use it to draft, refine, and challenge their thinking, while maintaining ownership of judgement and interpretation. Those who surrender that judgement end up automating their own ignorance.

Refusing to learn how to work with AI will not preserve professional integrity; it will only ensure obsolescence. Every major innovation, from the printing press to the spreadsheet, initially appeared to threaten expertise but ultimately expanded it for those who embraced it. What AI changes is the return on competence. It increases the productivity of skilled workers far more than that of the unskilled, widening the gap between the thoughtful and the thoughtless. In economic terms, it shifts the production function upward for those who know how to use it and flattens it for those who do not.

This has important implications for universities, firms, and public institutions. Rather than issuing blanket bans on AI, the real task is to integrate AI literacy into education, professional training, and policy practice. Students must learn to interrogate information generated by machines. Analysts must learn to audit AI-assisted reports before submission. Organisations must cultivate a culture where transparency about AI use is encouraged rather than concealed. Using AI is not unethical; misusing it is.

The Deloitte episode will not be the last. Other institutions will repeat it because they see AI as a shortcut rather than an instrument of discipline. Yet the lesson remains clear: AI is not a threat to competence; it is a test of it. The technology does not replace understanding; it exposes whether understanding exists in the first place. Those who master it will multiply their insight and efficiency; those who misuse it will multiply their mistakes.

In truth, artificial intelligence has simply revived an old economic principle: productivity gains follow learning. The faster we acquire the skills to use these tools well, the more valuable our human judgement becomes. Blaming the hammer for the bent nail may feel satisfying, but it changes nothing. The problem is not the hammer; it is the hand that never learned how to swing it. Every correction, every verified reference, every disciplined prompt is part of that learning curve. Each moment of alertness – when we question an output, verify a citation, or refine an instruction – makes both the user and the tool more intelligent. The technology will keep improving, but whether knowledge improves with it depends entirely on us.

Tuesday, 14 October 2025

 

When Schools Hold the Cards: How Information Asymmetry Is Hurting Our Children

By Richard Sebaggala (PhD)

Today, I have decided not to write about artificial intelligence, as I usually do in my weekly reflections on the economics of technology. This change of topic was prompted by watching the evening news on NTV. The story featured students in Mukono who fainted and cried after discovering they could not sit their UCE examinations because their school had not registered them, even though their parents had already paid the required fees. It was painful to watch young people who had worked hard for four years, now stranded on the day that was supposed to mark a major step in their education.

Unfortunately, this is not a new problem. Every examination season, similar stories emerge from different parts of the country. Some schools collect registration fees but fail to remit them to the Uganda National Examinations Board (UNEB). When the examinations begin, students find their names missing from the register. In many cases, the head teachers or directors responsible are later arrested, but that does little to help the students. By then, the exams are already underway, and the victims have lost an entire academic year. Parents lose their savings, and the education system loses public trust.

 

What is most troubling is how easily this could be prevented. Uganda has made progress in using technology to deliver public services. UNEB already allows students to check their examination results through a simple SMS system. If the same technology can instantly display a student’s grades after the exams, why can it not confirm a student’s registration before the exams? Imagine if every candidate could send an SMS reading “UCE STATUS INDEX NUMBER” to a UNEB shortcode and receive a message showing whether they are registered, their centre name, and the date their payment was received. If registration was missing, the student would be alerted early enough to follow up.
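
The server-side logic behind such a lookup would be simple. The following is a minimal sketch in Python; the message format, the index-number style, and the in-memory records are all hypothetical, and a real deployment would query UNEB's live registration database through an SMS gateway.

```python
# Hypothetical registration records keyed by candidate index number.
REGISTER = {
    "U0001/001": {"centre": "Example High School, Mukono", "paid_on": "2025-08-14"},
}

def handle_sms(message: str) -> str:
    """Respond to an incoming SMS of the form 'UCE STATUS <INDEX NUMBER>'."""
    parts = message.strip().upper().split(maxsplit=2)
    if len(parts) != 3 or parts[0] != "UCE" or parts[1] != "STATUS":
        return "Format: UCE STATUS <INDEX NUMBER>"

    record = REGISTER.get(parts[2])
    if record is None:
        return ("NOT REGISTERED. No UCE registration found for this index number. "
                "Contact your school or UNEB immediately.")
    return (f"REGISTERED at {record['centre']}. "
            f"Payment received on {record['paid_on']}.")

# Example exchanges:
print(handle_sms("UCE STATUS U0001/001"))
print(handle_sms("UCE STATUS U9999/999"))
```

The difficult part is not the code but the plumbing: connecting the shortcode to the registration database and keeping that database current as schools submit candidates and payments.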

Such a system would protect thousands of students from unnecessary loss and reduce the incentives for dishonest school administrators to exploit their informational advantage. In economic terms, this situation reflects a classic case of information asymmetry, where one party (the school) possesses critical information that the other parties (the parents and students) do not. This imbalance distorts decision-making and accountability, creating room for opportunistic behaviour and moral hazard. The most effective remedy is to restore information symmetry through transparency and timely access to verifiable data, enabling parents and students to make informed choices and hold institutions accountable.

The Ministry of Education and UNEB already have the basic tools to make this work. The registration database is digital, and the SMS platform for results is already in use. A simple update could link the two. The cost would be small compared to the harm caused each year by fraudulent registration practices. This would shift the system from reacting after harm has occurred to preventing harm before it happens.

Other institutions in Uganda have shown that such solutions work. The National Identification and Registration Authority allows citizens to check the status of their national ID applications by SMS. During the COVID-19 pandemic, the Ministry of Health used mobile phones to share health updates and collect data. Even savings groups and telecom companies send instant confirmations for every transaction. If a mobile money user can confirm a payment in seconds, surely a student should be able to confirm their examination registration.

 

This issue goes beyond technology. It is about governance and trust. When public institutions act only after problems have occurred, citizens lose confidence in them. But when they act early and give people access to information, trust begins to grow. An SMS registration system would be a simple but powerful way to show that the Ministry of Education and UNEB care about transparency and fairness as much as they care about performance. It would protect families from unnecessary loss and strengthen public confidence in the examination process.

As I watched those students in Mukono crying at their school gate, I kept thinking how easily their situation could have been avoided. A single text message could have told them months in advance that their registration had not been completed. They could have taken action, sought help, or transferred to another school. Instead, they found out when it was already too late.

Uganda does not need a new commission or an expensive reform to solve this. It only needs a small, practical innovation that gives students and parents control over information that directly affects their lives. Such steps would make the education system more transparent, efficient, and fair.

Although this article is not about artificial intelligence, it conveys a familiar lesson. Technology has little value without imagination and accountability. If we can use digital tools to issue results and manage national exams, we can also use them to ensure that every student who deserves to sit those exams has the opportunity to do so. True accountability begins before the exams, not after.