Tuesday, 14 October 2025


When Schools Hold the Cards: How Information Asymmetry Is Hurting Our Children

By Richard Sebaggala (PhD)

Today, I have decided not to write about artificial intelligence, as I usually do in my weekly reflections on the economics of technology. This change of topic was prompted by watching the evening news on NTV. The story featured students in Mukono who fainted and cried after discovering they could not sit their UCE examinations because their school had not registered them, even though their parents had already paid the required fees. It was painful to watch young people who had worked hard for four years, now stranded on the day that was supposed to mark a major step in their education.

Unfortunately, this is not a new problem. Every examination season, similar stories emerge from different parts of the country. Some schools collect registration fees but fail to remit them to the Uganda National Examinations Board (UNEB). When the examinations begin, students find their names missing from the register. In many cases, the head teachers or directors responsible are later arrested, but that does little to help the students. By then, the exams are already underway, and the victims have lost an entire academic year. Parents lose their savings, and the education system loses public trust.


What is most troubling is how easily this could be prevented. Uganda has made progress in using technology to deliver public services. UNEB already allows students to check their examination results through a simple SMS system. If the same technology can instantly display a student’s grades after the exams, why can it not confirm a student’s registration before them? Imagine if every candidate could send an SMS reading “UCE STATUS <INDEX NUMBER>” to a UNEB shortcode and receive a message showing whether they are registered, their centre name, and the date their payment was received. If registration were missing, the student would be alerted early enough to follow up.
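To make the idea concrete, here is a minimal sketch of how such a query could be answered. Everything in it is an assumption for illustration: the message format, the field names, and the sample record are invented, and a real system would query UNEB's registration database rather than an in-memory dictionary.

```python
# Hypothetical sketch of an SMS handler for a "UCE STATUS <index number>"
# query. The registration "database" below is illustrative dummy data,
# not UNEB's actual schema.

REGISTRATIONS = {
    # index number -> registration record (invented example)
    "U0001/001": {"centre": "Mukono Example SS", "paid_on": "2025-03-14"},
}

def handle_sms(message: str) -> str:
    """Parse an incoming SMS and return the reply text."""
    parts = message.strip().split(maxsplit=2)
    if len(parts) != 3 or parts[0].upper() != "UCE" or parts[1].upper() != "STATUS":
        return "Invalid format. Send: UCE STATUS <INDEX NUMBER>"
    index_number = parts[2].strip().upper()
    record = REGISTRATIONS.get(index_number)
    if record is None:
        # The early warning is the whole point: the student learns
        # months before the exams, not on the morning they begin.
        return (f"{index_number} is NOT registered for UCE. "
                "Contact your school or UNEB immediately.")
    return (f"{index_number} is registered at {record['centre']}. "
            f"Payment received on {record['paid_on']}.")
```

The logic is deliberately trivial; the value lies not in the code but in exposing information the school already holds to the people it affects.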

Such a system would protect thousands of students from unnecessary loss and reduce the incentives for dishonest school administrators to exploit their informational advantage. In economic terms, this situation reflects a classic case of information asymmetry, where one party (the school) possesses critical information that the other parties (the parents and students) do not. This imbalance distorts decision-making and accountability, creating room for opportunistic behaviour and moral hazard. The most effective remedy is to restore information symmetry through transparency and timely access to verifiable data, enabling parents and students to make informed choices and hold institutions accountable.

The Ministry of Education and UNEB already have the basic tools to make this work. The registration database is digital, and the SMS platform for results is already in use. A simple update could link the two. The cost would be small compared to the harm caused each year by fraudulent registration practices. This would shift the system from reacting after harm has occurred to preventing harm before it happens.

Other institutions in Uganda have shown that such solutions work. The National Identification and Registration Authority allows citizens to check the status of their national ID applications by SMS. During the COVID-19 pandemic, the Ministry of Health used mobile phones to share health updates and collect data. Even savings groups and telecom companies send instant confirmations for every transaction. If a mobile money user can confirm a payment in seconds, surely a student should be able to confirm their examination registration.


This issue goes beyond technology. It is about governance and trust. When public institutions act only after problems have occurred, citizens lose confidence in them. But when they act early and give people access to information, trust begins to grow. An SMS registration system would be a simple but powerful way to show that the Ministry of Education and UNEB care about transparency and fairness as much as they care about performance. It would protect families from unnecessary loss and strengthen public confidence in the examination process.

As I watched those students in Mukono crying at their school gate, I kept thinking how easily their situation could have been avoided. A single text message could have told them months in advance that their registration had not been completed. They could have taken action, sought help, or transferred to another school. Instead, they found out when it was already too late.

Uganda does not need a new commission or an expensive reform to solve this. It only needs a small, practical innovation that gives students and parents control over information that directly affects their lives. Such steps would make the education system more transparent, efficient, and fair.

Although this article is not about artificial intelligence, it conveys a familiar lesson. Technology has little value without imagination and accountability. If we can use digital tools to issue results and manage national exams, we can also use them to ensure that every student who deserves to sit those exams has the opportunity to do so. True accountability begins before the exams, not after.

Saturday, 4 October 2025


Head-in-the-Sand vs. Pragmatic Economics: Which Way Should We Face the AI Storm?

By Richard Sebaggala (PhD)

When societies encounter uncertainty, two habitual responses emerge. One is to deny or downplay change, hoping the future will resemble the past. This is what we might call the head-in-the-sand approach. The other is to accept uncertainty as inevitable and act: experiment, adapt, and build resilience. With Artificial Intelligence advancing rapidly, we once again stand at that crossroads.

AI is no longer speculative; it is already reshaping research, education, healthcare, industry, and governance. Yet its long-term impact remains ambiguous. Some predict modest disruption; others foresee transformation on the scale of the industrial revolution. What is certain is that AI is progressing faster than educational systems, regulatory frameworks, and labour markets can adapt. That widening gap is precisely where choice matters.

The head-in-the-sand approach treats AI as if it were just another incremental upgrade. We see this in universities that ban ChatGPT instead of teaching students to use it critically and responsibly. The message is: ignore it, hope it goes away. Graduates then enter the workforce without AI literacy, unprepared for an economy where such skills are increasingly essential. Governments that adopt this posture often relegate AI to ICT departments, focusing on broadband rollouts or cloud adoption while avoiding tougher economic questions: Who benefits when cognitive labour becomes abundant? How do we tax new forms of value? How do we prevent data monopolies? Countries that take this route risk becoming passive importers of AI technologies, unable to influence their trajectory or capture their benefits. When shocks come, they will feel them most acutely.

Pragmatism looks very different. It does not claim to know exactly how AI will unfold, but it acts as if preparation matters. Singapore, for instance, has committed more than S$1 billion (about US$778 million) over five years to AI compute, talent, and industrial development. Its AI research spending, relative to GDP, is estimated to be eighteen times higher than comparable US public investments. Nearly a third of Singaporean businesses now allocate more than US$1 million annually to AI initiatives, higher than the share in the UK or US. Yet even there, progress is uneven: only about 14% of firms have managed to scale AI enterprise-wide. The lesson is clear: investment is essential, but assimilation, governance, and skills are equally critical.

South Korea offers another example of pragmatism. The AI boom there has fuelled record semiconductor exports, with chip sales rising 22% year-on-year in September 2025, driven in part by global demand for AI infrastructure. This underscores how embedding in the AI supply chain allows a country not merely to consume imported systems but to capture significant value from their production.

Africa presents a contrasting picture. A Cisco–Carnegie Mellon white paper stresses the importance of building lifelong learning ecosystems that embed AI into vocational training, promote micro-credentials, and offer offline access in local languages. The World Economic Forum’s Future of Jobs 2025 report similarly highlights AI and ICT as major drivers of labour-market change, making reskilling strategies urgent. Yet most governments on the continent are still moving slowly. The danger of head-in-the-sand thinking is stark: Africa could remain a peripheral consumer of AI, locked out of influence and value capture. But the opportunity is also real: with pragmatic strategies, such as integrating AI into education, governance, health, agriculture, and finance, African economies could leapfrog, turning disruption into transformation.

Organisations face similar choices. Aon finds that 75% of firms now demand AI-related skills in their workforce, yet only 31% have adopted a coherent company-wide AI strategy. Meanwhile, Salesforce reports that more than four in five HR leaders are already planning or implementing AI reskilling programmes. The private sector feels the pressure: denial is no longer an option.

The difference between denial and pragmatism can be illustrated with a simple thought experiment. Imagine two countries facing the same AI storm. Country A bans AI in schools, neglects retraining, and ignores data governance. Five years later, its graduates are unemployable in AI-augmented sectors, its industries depend entirely on foreign systems, and inequality deepens. Country B, by contrast, integrates AI literacy into curricula, retrains workers, and builds regulatory frameworks. Five years on, its workforce is more adaptable, its firms capture value from AI, and it helps shape global rules. Both faced uncertainty, but only one built resilience.

The stakes are high. Economists Erik Brynjolfsson, Anton Korinek, and Ajay Agrawal have identified nine “grand challenges” for transformative AI: growth, innovation, income distribution, power concentration, geoeconomics, knowledge and information, safety and alignment, well-being, and transition dynamics. None of these challenges can be addressed by denial. Each requires pragmatic experimentation in policy, governance, and institutional adaptation.

The AI storm is already here. We do not know if it will hit like a hurricane or come slowly like steady rain, but we do know that failing to prepare is dangerous. Hiding from change may feel safe for a while, but it leaves us weak. A practical approach takes effort, patience, and resources, yet it gives us the strength to adjust, to find new chances, and to survive shocks. Think of two farmers who see dark clouds. One covers his eyes and hopes the rain will pass. The other repairs the roof and stores extra food. When the storm arrives, only the prepared farmer is left standing.

In the age of AI, pragmatism, not denial, is the path that leads to survival, and perhaps to thriving. History will not be kind to the ostrich. Time and again, the head-in-the-sand approach has proven disastrous. Industrial revolutions have always punished the complacent. Nations that dismissed early mechanisation in the nineteenth century fell behind those that industrialised. Companies that ignored the digital revolution of the 1990s (Kodak is the famous example) lost their dominance when they refused to adapt to digital photography. Even at the national level, countries that underestimated globalisation or financial innovation found themselves playing catch-up after crises had already swept through. In each of these cases, denial did not slow the storm; it only increased the damage when inevitable change arrived.

That is why I have personally chosen the pragmatic path in facing AI. As a researcher, I have found that AI transforms my work by accelerating data analysis, enabling new forms of literature synthesis, and freeing time for deeper conceptual thinking. Rather than fearing it, I experiment with it daily, testing its strengths and identifying its limits. As a teacher, I refuse to banish AI from the classroom. Instead, I encourage students to engage with it critically, to learn how to use it responsibly, and to see it not as a substitute for human thought but as a tool for augmenting it. My conviction is simple: by embracing AI pragmatically, I can prepare my students not just to survive in an AI-shaped economy, but to lead within it.

The ostrich buries its head when danger approaches. The builder, by contrast, looks at the storm clouds and reinforces the roof. History has shown which one endures. The choice before us is no different today.