Thursday, 27 November 2025

 

AI’s Misallocation Paradox: High Adoption, Low Impact

 By Richard Sebaggala (PhD)

 

When the Washington Post recently analysed 47,000 publicly shared ChatGPT conversations, the findings revealed something both intriguing and troubling from an economic standpoint. Despite hundreds of millions of weekly users, most interactions with AI remain small, superficial, and low stakes. People turn to it for casual fact-checking, emotional support, relationship advice, minor drafting tasks, and personal reflections. What stands out is how little of this activity involves the kinds of tasks where AI is genuinely transformative—research, modelling, academic writing, teaching preparation, data analysis, supervision, and professional decision-making.

 

For economists, this pattern immediately recalls the familiar distinction between having technology and using it productively. The issue is not that AI lacks capability; it is that society is allocating this new form of cognitive capital to low-return activities. In classic economic terms, this is a misallocation problem. A technology designed to augment reasoning, accelerate knowledge production, and expand human capability is being deployed primarily for conversations and conveniences that generate almost no measurable productivity gains.

 

This conclusion is not only supported by the Washington Post’s dataset; it is something I encounter repeatedly in practice. Over the past two years, conducting AI-literacy workshops, supervising research, and writing about AI’s role in higher education, I have often been struck by the kinds of questions people ask. They tend to revolve around the most basic aspects of AI: What is AI? Will it replace teachers? How can I eliminate AI from my work? These questions do not reflect curiosity about using AI for complex professional or analytical work; instead, they reveal uncertainty about where to even begin. Many participants—professionals and academics included—have never attempted to use AI for deep reasoning, data analysis, literature synthesis, curriculum design, or research supervision. When I think about how transformative AI can be in teaching, research, and analytical work, I am often frustrated because it feels as though we are sitting on an intellectual gold mine, yet many people do not realise that the gold is there.

 

This personal experience is fully consistent with the Washington Post findings. Fewer than one in ten of the sampled conversations involved anything resembling technical reasoning or serious academic engagement. Data analysis was almost entirely absent. Interactions that could have strengthened research, teaching, policymaking, or organisational performance were overshadowed by uses that, while understandable on a human level, contribute little to economic or educational transformation. The bottleneck here is not technological capacity but human imagination and institutional readiness.

 

Several factors help explain why this misallocation persists. Many users simply lack the literacy to see AI as anything more than a conversational tool. Habits shaped in a pre-AI world also remain dominant: students still search manually, write from scratch, and labour through tasks that AI could meaningfully accelerate. Institutions are even slower to adapt. Universities, schools, government agencies, and workplaces continue to operate with old structures, old workflows, and outdated expectations, even as they claim to “adopt” AI. When technology evolves faster than institutional culture, capability inevitably sits idle.

 

Economists have long demonstrated that new technology produces productivity gains only when complementary capabilities are in place. Skills must evolve, organisational routines must adapt, and incentives must shift. Without these complements, even the most powerful general-purpose technologies generate only modest results. AI today fits this pattern almost perfectly. It has been adopted widely but absorbed shallowly.

 

This gap between potential and practice is especially relevant for Africa. The continent stands to benefit enormously from disciplined, high-value use of AI—particularly in strengthening research output, expanding supervision capacity, enhancing data-driven policymaking, improving public-sector performance, and enriching teaching and curriculum design. Many of Africa’s longstanding constraints—limited supervision capacity, slow research processes, weak analytical infrastructure—are precisely the areas where AI can make the most difference. Yet the prevailing pattern mirrors global trends: high adoption for low-value tasks and minimal use in areas that matter most for development.

 

Ultimately, the impact of AI will depend less on the technology itself and more on how societies choose to integrate it into their high-value activities. The real opportunity lies in shifting AI from consumption to production—from a tool of conversation to a tool of analysis, reasoning, modelling, and knowledge creation. This requires deliberate investment in AI literacy, institutional redesign, and a cultural shift in how we think about teaching, research, and professional work.

 

The paradox is clear: adoption is high, yet impact remains low because the technology is misallocated. The task ahead is not to wait for “more advanced” AI, but to use the AI we already have for the work that truly matters. Only then will its economic and educational potential be realised.

Thursday, 20 November 2025

 

When Intelligence Stops Mattering: The Economics of Attention in the AI Era

 By Richard Sebaggala (PhD)

 

 If you spend enough time teaching university students or supervising research in Uganda, you begin to notice something that contradicts the story we were all raised on. The brightest people do not always win. Some of the most intellectually gifted students drift into ordinary outcomes, while those labelled as average quietly build remarkable lives. It is one of those puzzles that economists enjoy because it challenges the assumption that intelligence is destiny. The truth is more uncomfortable: beyond a certain level, intelligence stops being the thing that separates people.

 

This idea is not new. In 1921, psychologist Lewis Terman selected more than 1,500 exceptionally intelligent children, convinced they would become the Einsteins, Picassos, and Da Vincis of modern America. Today, his famous “Termites” study reads almost like a cautionary tale. These children had extraordinary IQ scores, superior schooling, and strong early promise. Yet most lived ordinary, respectable lives. A few became professionals, but none went on to reshape the world in the way their intelligence suggested they might. The outcome was not what Terman expected. It revealed a principle that economists immediately recognise: a factor that is no longer scarce loses its power to generate outsized results. In this case, intelligence reached a point of diminishing marginal returns. After a moderate threshold, more IQ did not produce more achievement.

 

Threshold Theory emerged from this insight. It suggests that once someone has “enough” intelligence to understand and navigate the world, their long-term success depends far more on consistency, deliberate practice, and attention to detail. In other words, it is the boring habits that win, not the brilliance. You can see this in the lives of people like Isaac Asimov, who published more than 500 books not because he had superhuman intelligence but because he wrote every day. Picasso, often celebrated as a natural genius, produced an estimated 20,000 works, and that relentless productivity was responsible for his influence far more than any single stroke of innate talent.

 

These patterns appear clearly in our context as well. In my teaching and supervision, the student who simply shows up, writes a little every day, reflects regularly, and keeps refining their work eventually surpasses the student who delivers occasional bursts of brilliance but lacks rhythm. It is the slow, steady accumulation of effort that compounds over time. It is difficult to accept this truth because dedication feels less glamorous than talent, yet it explains far more about real outcomes.

 

This brings us to the present era where artificial intelligence has rewritten the economics of human capability. A century after Terman, we live in a world where tools like ChatGPT and Claude have made cognitive ability widely accessible. An undergraduate in Gulu can now generate summaries, explanations, models, and arguments that once required years of academic experience. AI has lifted almost everyone above the old intelligence threshold. The scarcity has shifted. Intelligence is no longer the differentiator. The new constraint is attention.

 

Attention is fast becoming a rare commodity. Knowledge is now abundant; the real challenge is not access but sustained focus. In my online classes, students are often managing dozens of tabs, buzzing phones, and multiple background conversations, leading to fragmented concentration. They skim rather than read, and jump between tasks without reflection. The deepest poverty of our generation is no longer information poverty, but attentional poverty. In economic terms, focus is emerging as the new source of comparative advantage.

 

This phenomenon matters even more for African learners and institutions. The continent does not suffer from a shortage of intelligent people. What we struggle with are the habits that make intelligence useful: sustained concentration, deliberate practice, refinement, and a culture that values slow thinking as much as quick recall. Our education systems often reward memorisation, not reasoning. Our learners tend to fear discomfort instead of embracing it as part of growth. And when AI enters such an environment, it does not fix these gaps. It magnifies them. A distracted student given AI becomes even more distracted, because the illusion of shortcuts becomes stronger. But a focused student who pairs AI with discipline suddenly becomes incredibly productive.

 

This is where Threshold Theory becomes deeply relevant for the AI age. If intelligence is widespread and cheap, and AI has lifted everyone above the threshold, then the difference between people will increasingly come from their habits. The human work now is to protect attention, practise something meaningful every day, use AI to expand thinking rather than avoid effort, build routines that compound, and stay curious long after others settle into laziness. AI can assist with reasoning, but it cannot replace judgment, contextual understanding, ethical interpretation, or the capacity to sustain deep effort. These remain profoundly human strengths.

 

In the end, genius is slowly shifting from something you are born with to something you practise. AI gives everyone the same starting point. Discipline and attention determine the destination. The real question for each of us, especially in Africa where the opportunity is enormous but unevenly captured, is simple: what will you do with your attention?

Wednesday, 12 November 2025

 

Seeing the Whole System: How Economists Should Think About AI 

By Richard Sebaggala (PhD)

Recently, while reading an article from The Economist debating whether economists or technologists are right about artificial intelligence, I found myself uneasy with how both camps framed the issue. Economists, true to their discipline, approached AI with caution. Erik Brynjolfsson of Stanford has long argued that “technology alone is never enough,” reminding us that productivity gains arise only when organizations redesign workflows, invest in skills, and realign incentives. Daron Acemoglu at MIT makes a similar point when he notes that “there is nothing automatic about new technologies bringing shared prosperity.” These warnings echo a familiar historical pattern: earlier general-purpose technologies, whether electricity, computers, or the internet, took decades before their full impact on productivity materialized.

Technologists, on the other hand, describe AI as a decisive break from that past. Sam Altman, CEO of OpenAI, has called AI “the most important technology humanity has ever developed,” emphasizing the speed and magnitude of socioeconomic disruption that may follow. Jensen Huang of NVIDIA goes even further, claiming we are “at the beginning of a new industrial revolution” driven by accelerated computing and machine intelligence. For thinkers in this camp, AI is not merely another digital tool, but a system endowed with reasoning capabilities that can automate cognitive functions once reserved for humans.

 

Both perspectives carry important truths, yet each misses a critical dimension. The debate often assumes AI is a self-contained phenomenon, detached from the digital infrastructure on which it actually operates. In reality, AI does not replace the computer or the internet; it builds on them. It exists because of them. The more productive question, therefore, is not how powerful AI is in isolation, but what happens when the billions of people who already use computers and the internet begin to work, learn, and think with AI assistance embedded in their daily routines.

From a pragmatic perspective, this framing changes everything. Pragmatism, unlike optimism or scepticism, asks what works, for whom, and under what conditions. It is concerned less with prediction and more with functionality. A pragmatic economist sees technology as capital whose productivity depends on how it is organized and incentivized within an institutional system. A pragmatic technologist, in turn, recognises that adoption depends on human adaptation—habits, trust, and training. The convergence of these two sensibilities produces a more grounded understanding of AI: not as a revolutionary force that will automatically transform society, but as an evolutionary layer that extends the power of existing digital infrastructures.

In my own thinking, the most useful way to understand AI is to see it as “computer plus internet plus intelligence.” This perspective recognizes that every technological breakthrough builds on the foundations laid by earlier digital layers. Computers automated calculation and data processing. The internet automated connectivity and access. AI now automates reasoning, prediction, and creation. Seen this way, AI is not an isolated revolution but the next evolutionary layer in a long digital continuum. The computer and internet revolution required complementary investments in education, governance, and organizational design before their full economic effects could materialize. When computers became widespread, firms had to reorganize workflows and hire IT specialists. When the internet emerged, they had to create digital marketing, cybersecurity, and logistics functions. The same will hold for AI: productivity gains will depend not merely on access to algorithms but on how societies redesign work, education, and decision-making to make intelligent tools genuinely useful.

 

This logic holds particular relevance for Africa. The continent’s technological progress has always been characterised by pragmatic adaptation rather than linear imitation. The success of mobile money, for instance, emerged not from cutting-edge infrastructure but from creatively reconfiguring existing resources to solve pressing coordination problems. In the same way, the potential of AI in African contexts may depend less on hardware and more on cognitive integration—how intelligently people and institutions use the tools already within reach. A university lecturer with a laptop, stable internet, and access to ChatGPT represents a new kind of productivity unit: a human–AI partnership capable of reimagining teaching, research, and supervision. But this transformation will not occur automatically; it requires investment in AI literacy, ethical awareness, and institutional readiness.

The economists are correct that such transformations take time. Every general-purpose technology has exhibited a lag between invention and impact, as economies struggle through a reorganization phase before productivity surges. But the technologists are equally right about the scope of change. Unlike earlier digital tools that mechanized physical or transactional processes, AI extends automation into the cognitive realm. It can assist in writing, designing, diagnosing, predicting, and problem-solving. It changes not only the speed of work but its very composition. The synthesis of both positions yields a pragmatic insight: AI’s short-term effects are often overestimated, but its long-term restructuring power is profoundly underestimated. The path to productivity follows a J-curve, with initial disruption followed by enduring dividends.

 

To think like an economist in the age of AI is to resist both technological euphoria and excessive caution. It is to examine incentives, complementarities, and institutional conditions rather than merely forecasting growth or disruption. The central question is not whether AI will replace human labour but how humans will reorganize around intelligence. The transformative potential of AI lies not in replacing human reasoning but in amplifying it, turning disciplined thought into augmented creativity.

This perspective is especially vital for developing regions where digital infrastructure already exists but underperforms. The challenge is to build the human and institutional complements that convert computational power into social and economic value. As teachers, researchers, and policymakers, the task is not to wait for AI to be perfected elsewhere but to make it work within our realities—to make AI ready for us. That is what it means to think pragmatically and, indeed, to think like an economist.

Tuesday, 4 November 2025


 

Beyond the Turing Test: Where Human Curiosity Meets AI Creation

By Richard Sebaggala (PhD)

A few weeks ago, while attending a validation workshop, I had an engaging conversation with an officer from Uganda’s Ministry of Local Government. She described a persistent puzzle they have observed for years: why do some local governments in Uganda perform exceptionally well in local revenue collection while others, operating under the same laws and using the same digital systems, remain stagnant? It was not a new question, but the way she framed it revealed both urgency and frustration. Despite years of administrative reforms and data-driven initiatives, no one had found a clear explanation for the variation.

The question stayed with me long after the workshop ended. As a researcher and supervisor of graduate students, I have been working closely with one of my students who is studying the relationship between technology adoption and revenue performance. We recently obtained data from the Integrated Revenue Administration System (IRAS) and other public sources that could potentially answer this very question. On my journey to Mbarara, I decided to explore it further. I opened my laptop on the bus and began a conversation with an AI model to see how far it could help me think through the problem. What happened next became a lesson in how human curiosity and artificial intelligence can work together to deepen understanding.

The exchange reminded me of an ongoing debate that has been rekindled in recent months around the legacy of the Turing test. In 1950, the British mathematician Alan Turing proposed what he called the “imitation game”, an experiment to determine whether a computer could imitate human conversation so convincingly that a judge could not tell whether they were speaking to a person or a machine. For decades, this thought experiment has shaped how we think about machine intelligence. Yet, as several scientists recently argued at a Royal Society conference in London marking the 75th anniversary of Turing’s paper, the test has outlived its purpose.

 

At the meeting, researchers such as Anil Seth of the University of Sussex and Gary Marcus of New York University challenged the assumption that imitation is equivalent to intelligence. Seth urged that instead of measuring how human-like machines can appear, we should ask what kinds of systems society actually needs and how to evaluate their usefulness and safety. Marcus added that the pursuit of so-called “artificial general intelligence” may be misplaced, given that some of the most powerful AI systems (like DeepMind’s AlphaFold) are effective precisely because they focus on specific, well-defined tasks rather than trying to mimic human thought. The discussion, attended by scholars, artists, and public figures such as musician Peter Gabriel and actor Laurence Fishburne, represented a turning point in how we think about the relationship between human and artificial cognition.

Patterning and Parallax Cognition

It was against this backdrop that I found myself conducting an experiment of my own. When I asked ChatGPT why certain districts in Uganda outperform others in local revenue collection, the system responded not with answers, but with structure. It organised the problem into measurable domains: performance indicators such as revenue growth and taxpayer expansion; institutional adaptability reflected in IRAS adoption, audit responsiveness, and staff capacity; and governance context including political alignment and leadership stability. It even suggested how these could be investigated through a combination of quantitative techniques (panel data models, difference-in-differences estimation, and instrumental variables) and qualitative approaches like process tracing or comparative case analysis.
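To make the quantitative side of that scaffolding concrete, here is a rough sketch of the kind of difference-in-differences specification the conversation pointed towards; the variable names are illustrative, not the ones in my student’s dataset:

\ln(\text{Revenue}_{dt}) = \alpha_d + \gamma_t + \beta\,(\text{IRAS}_d \times \text{Post}_t) + \delta' X_{dt} + \varepsilon_{dt}

Here \alpha_d and \gamma_t are district and year effects, \text{IRAS}_d \times \text{Post}_t flags district-years after a district begins using the system, X_{dt} collects controls such as staff capacity, and \beta captures the adoption effect—credible only if non-adopting districts offer a reasonable counterfactual, the familiar parallel-trends assumption.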

 

What the AI provided was not knowledge in itself but an architectural framework for inquiry. It revealed patterns that a researcher might take days or weeks to discern through manual brainstorming. Within a few minutes, I could see clear analytical pathways: which variables could be measured, how they might interact, and which data sources could be triangulated. It was a vivid demonstration of what John Nosta has called parallax cognition—the idea that when human insight and machine computation intersect, they produce cognitive depth similar to how two eyes create depth of vision. What one eye sees is never exactly what the other perceives, and it is their combination that produces true perspective. I am beginning to think that, in work-related terms, many of us have been operating for years with only one eye (limited by time, inadequate training, knowledge gaps, weak analytical grounding, and sometimes by poor writing and grammatical skills). Artificial intelligence may well be the second eye, enabling us to see problems and possibilities in fuller dimension. This should not be taken lightly, as it changes not only how knowledge is produced but also how human potential is developed and expressed.

The Human Contribution: Depth and Judgement

However, seeing with two eyes is only the beginning; what follows is the act of making sense of what is seen. Patterns alone do not create meaning, and once the scaffolding is in place, it becomes the researcher’s task to interpret and refine it. I examined the proposed research ideas and variables, assessing which reflected genuine institutional learning and which were merely bureaucratic outputs. For example, staff training frequency reveals more about adaptive capacity than the mere number of reports filed. I also adjusted the proposed econometric models to suit Uganda’s data realities, preferring fixed-effects estimation over pooled OLS to account for unobserved heterogeneity among districts. Each decision required contextual knowledge and an appreciation of the political dynamics, administrative cultures, and data constraints that shape local government operations.
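To see why that modelling choice matters, consider an illustrative pair of specifications (the notation is mine, not the final model we settled on):

\text{Pooled OLS: } y_{dt} = \alpha + \beta' x_{dt} + u_{dt} \qquad \text{Fixed effects: } y_{dt} = \alpha_d + \beta' x_{dt} + u_{dt}

The district-specific intercepts \alpha_d absorb everything about a district that does not change over time, such as an entrenched administrative culture or its political history. If those unobserved traits are correlated with the regressors in x_{dt}, pooled OLS attributes their influence to the observed variables and biases \beta, whereas the within (fixed-effects) estimator identifies \beta from variation inside each district over time.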

 

This is where the collaboration between human and machine became intellectually productive. The AI contributed breadth (its ability to draw quickly from a vast array of statistical and conceptual possibilities). The human side provided depth (the judgement needed to determine what was relevant, credible, and ethically grounded). The process did not replace thinking; it accelerated and disciplined it. It transformed a loosely defined curiosity into a structured, methodologically sound research design within the space of a single journey.

The Future of Human–Machine Interaction

Reflecting on this experience later, I realised how it paralleled the arguments made at the Royal Society event. The real value of AI lies not in its capacity to imitate human reasoning, but in its ability to extend it. When aligned with human purpose, AI becomes an amplifier of curiosity rather than a substitute for it. This partnership invites a new kind of research practice (one that moves beyond competition between human and machine and towards complementarity).

For researchers, especially those in data-rich but resource-constrained environments, this shift carries significant implications. AI can help reveal relationships and structures that are easily overlooked when working alone. But it cannot determine what matters or why. Those judgements remain uniquely human, grounded in theory, experience, and ethical responsibility. In this sense, AI functions as a mirror, reflecting our intellectual choices back to us, allowing us to refine and clarify them.

The experience also challenged me to reconsider how we define intelligence itself. The Turing test, for all its historical importance, measures imitation; parallax cognition measures collaboration. The former asks whether a machine can fool us; the latter asks whether a machine can help us. In a world where AI tools increasingly populate academic, policy, and professional work, this distinction may determine whether technology deepens understanding or simply accelerates superficiality.

My brief encounter with AI on a bus to Mbarara became more than an experiment in convenience; it became a lesson in the epistemology of research. The system identified what was invisible; I supplied what was indispensable. Together, we achieved a kind of cognitive depth that neither could reach alone. This is the real future of human–machine interaction: not imitation, but illumination; not rivalry, but partnership.

If the death of the Turing test marks the end of one era, it also signals the beginning of another. The new measure of intelligence will not be how convincingly machines can pretend to be human, but how effectively they can collaborate with humans to generate insight, solve problems, and expand the boundaries of knowledge. The task before us, as researchers and educators, is to embrace this partnership thoughtfully, to ensure that in gaining computational power, we do not lose intellectual purpose.