Tuesday, 4 November 2025

 

Beyond the Turing Test: Where Human Curiosity Meets AI Creation

By Richard Sebaggala (PhD)

A few weeks ago, while attending a validation workshop, I had an engaging conversation with an officer from Uganda’s Ministry of Local Government. She described a persistent puzzle they have observed for years: why do some local governments in Uganda perform exceptionally well in local revenue collection while others, operating under the same laws and using the same digital systems, remain stagnant? It was not a new question, but the way she framed it revealed both urgency and frustration. Despite years of administrative reforms and data-driven initiatives, no one had found a clear explanation for the variation.

The question stayed with me long after the workshop ended. As a researcher and supervisor of graduate students, I have been working closely with one of my students who is studying the relationship between technology adoption and revenue performance. We recently obtained data from the Integrated Revenue Administration System (IRAS) and other public sources that could potentially answer this very question. On my journey to Mbarara, I decided to explore it further. I opened my laptop on the bus and began a conversation with an AI model to see how far it could help me think through the problem. What happened next became a lesson in how human curiosity and artificial intelligence can work together to deepen understanding.

The exchange reminded me of an ongoing debate that has been rekindled in recent months around the legacy of the Turing test. In 1950, the British mathematician Alan Turing proposed what he called the “imitation game”, an experiment to determine whether a computer could imitate human conversation so convincingly that a judge could not tell whether they were speaking to a person or a machine. For decades, this thought experiment has shaped how we think about machine intelligence. Yet, as several scientists recently argued at a Royal Society conference in London marking the 75th anniversary of Turing’s paper, the test has outlived its purpose.

 

At the meeting, researchers such as Anil Seth of the University of Sussex and Gary Marcus of New York University challenged the assumption that imitation is equivalent to intelligence. Seth urged that instead of measuring how human-like machines can appear, we should ask what kinds of systems society actually needs and how to evaluate their usefulness and safety. Marcus added that the pursuit of so-called “artificial general intelligence” may be misplaced, given that some of the most powerful AI systems (like DeepMind’s AlphaFold) are effective precisely because they focus on specific, well-defined tasks rather than trying to mimic human thought. The discussion, attended by scholars, artists, and public figures such as musician Peter Gabriel and actor Laurence Fishburne, represented a turning point in how we think about the relationship between human and artificial cognition.

Patterning and Parallax Cognition

It was against this backdrop that I found myself conducting an experiment of my own. When I asked ChatGPT why certain districts in Uganda outperform others in local revenue collection, the system responded not with answers, but with structure. It organised the problem into measurable domains: performance indicators such as revenue growth and taxpayer expansion; institutional adaptability reflected in IRAS adoption, audit responsiveness, and staff capacity; and governance context including political alignment and leadership stability. It even suggested how these could be investigated through a combination of quantitative techniques (panel data models, difference-in-differences estimation, and instrumental variables) and qualitative approaches like process tracing or comparative case analysis.

 

What the AI provided was not knowledge in itself but an architectural framework for inquiry. It revealed patterns that a researcher might take days or weeks to discern through manual brainstorming. Within a few minutes, I could see clear analytical pathways: which variables could be measured, how they might interact, and which data sources could be triangulated. It was a vivid demonstration of what John Nosta has called parallax cognition—the idea that when human insight and machine computation intersect, they produce cognitive depth similar to how two eyes create depth of vision. What one eye sees is never exactly what the other perceives, and it is their combination that produces true perspective. I am beginning to think that, in work-related terms, many of us have been operating for years with only one eye (limited by time, inadequate training, knowledge gaps, weak analytical grounding, and sometimes by poor writing and grammatical skills). Artificial intelligence may well be the second eye, enabling us to see problems and possibilities in fuller dimension. This should not be taken lightly, as it changes not only how knowledge is produced but also how human potential is developed and expressed.

The Human Contribution: Depth and Judgement

However, seeing with two eyes is only the beginning; what follows is the act of making sense of what is seen. Patterns alone do not create meaning, and once the scaffolding is in place, it becomes the researcher’s task to interpret and refine it. I examined the proposed research ideas and variables, assessing which reflected genuine institutional learning and which were merely bureaucratic outputs. For example, staff training frequency reveals more about adaptive capacity than the mere number of reports filed. I also adjusted the proposed econometric models to suit Uganda’s data realities, preferring fixed-effects estimation over pooled OLS to account for unobserved heterogeneity among districts. Each decision required contextual knowledge and an appreciation of the political dynamics, administrative cultures, and data constraints that shape local government operations.
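
To make that modelling choice concrete, here is a minimal sketch in Python of the two specifications side by side. It is an illustration under assumed names only: the file and the variables (revenue_growth, iras_adoption, staff_training, district, year) are placeholders, not the actual IRAS data.

# A minimal sketch, not the actual analysis: pooled OLS versus district fixed
# effects using statsmodels; all file and column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("district_panel.csv")  # assumed district-year panel

# Pooled OLS: ignores stable, unobserved differences between districts
pooled = smf.ols("revenue_growth ~ iras_adoption + staff_training", data=df).fit()

# Fixed effects: district (and year) dummies absorb time-invariant heterogeneity
# such as administrative culture or location, which pooled OLS would ignore
fixed = smf.ols(
    "revenue_growth ~ iras_adoption + staff_training + C(district) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["district"]})

print(pooled.params[["iras_adoption", "staff_training"]])
print(fixed.params[["iras_adoption", "staff_training"]])

If the two sets of coefficients diverge sharply, that is usually a sign that unobserved district characteristics are doing some of the work that pooled OLS attributes to the regressors, which is precisely the concern raised above.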

 

This is where the collaboration between human and machine became intellectually productive. The AI contributed breadth (its ability to draw quickly from a vast array of statistical and conceptual possibilities). The human side provided depth (the judgement needed to determine what was relevant, credible, and ethically grounded). The process did not replace thinking; it accelerated and disciplined it. It transformed a loosely defined curiosity into a structured, methodologically sound research design within the space of a single journey.

The Future of Human–Machine Interaction

Reflecting on this experience later, I realised how it paralleled the arguments made at the Royal Society event. The real value of AI lies not in its capacity to imitate human reasoning, but in its ability to extend it. When aligned with human purpose, AI becomes an amplifier of curiosity rather than a substitute for it. This partnership invites a new kind of research practice (one that moves beyond competition between human and machine and towards complementarity).

For researchers, especially those in data-rich but resource-constrained environments, this shift carries significant implications. AI can help reveal relationships and structures that are easily overlooked when working alone. But it cannot determine what matters or why. Those judgements remain uniquely human, grounded in theory, experience, and ethical responsibility. In this sense, AI functions as a mirror, reflecting our intellectual choices back to us, allowing us to refine and clarify them.

The experience also challenged me to reconsider how we define intelligence itself. The Turing test, for all its historical importance, measures imitation; parallax cognition measures collaboration. The former asks whether a machine can fool us; the latter asks whether a machine can help us. In a world where AI tools increasingly populate academic, policy, and professional work, this distinction may determine whether technology deepens understanding or simply accelerates superficiality.

My brief encounter with AI on a bus to Mbarara became more than an experiment in convenience; it became a lesson in the epistemology of research. The system identified what was invisible; I supplied what was indispensable. Together, we achieved a kind of cognitive depth that neither could reach alone. This is the real future of human–machine interaction: not imitation, but illumination; not rivalry, but partnership.

If the death of the Turing test marks the end of one era, it also signals the beginning of another. The new measure of intelligence will not be how convincingly machines can pretend to be human, but how effectively they can collaborate with humans to generate insight, solve problems, and expand the boundaries of knowledge. The task before us, as researchers and educators, is to embrace this partnership thoughtfully, to ensure that in gaining computational power, we do not lose intellectual purpose.

Sunday, 19 October 2025

 

Don’t Blame the Hammer: How Poor Use of AI Tools Reveals Deeper Competence Gaps

By Richard Sebaggala (PhD)

When Deloitte was recently exposed for submitting a government report in Australia filled with fabricated citations produced by Azure OpenAI's GPT-4o, the headlines quickly became accusatory. Commentators framed it as yet another failure of artificial intelligence: a cautionary tale about machines gone wrong. However, that interpretation misses the essence of what happened. AI did not fail; people did. The Deloitte incident is not evidence that the technology is unreliable, but that its users lacked the skill to use it responsibly. Like every tool humanity has invented, artificial intelligence merely amplifies the quality of its handler. It does not make one lazy, careless, or mediocre; it only exposes those qualities if they are already present.

 

Generative AI is a mirror. It reflects the discipline, understanding, and ethics of the person behind the keyboard. The Deloitte report was not the first time this mirror has revealed uncomfortable truths about modern knowledge work. Many professionals, consultants, and even academics have quietly adopted AI tools to draft documents, write proposals, or summarise literature, yet few invest time in learning the principles of proper prompting, verification, and validation. When errors emerge, the reflex is to blame the tool rather than acknowledge the absence of rigour. But blaming the hammer when the carpenter misses the nail has never built a better house.

The fear surrounding hallucinations (AI’s tendency to produce plausible but false information) has become the favourite defence of those unwilling to adapt. Yes, hallucination remains a legitimate limitation of large language models. These systems predict language patterns rather than verify factual accuracy, and early versions of ChatGPT frequently produced citations that did not exist. Yet the scale of that problem has fallen sharply. Current models generate fewer hallucinations, and most can be avoided through simple measures: specifying reliable data sources, directing the model to cite only verifiable material, and performing a quick cross-check on references. The issue is not that AI cannot think; it is that many users do not.

 

I saw this firsthand while revising a research manuscript with a co-author. Several references generated in earlier drafts (while we were expanding our theoretical background) turned out to be incomplete or fabricated. Instead of blaming the tool, we treated the discovery as a test of our own academic discipline: cross-verifying every citation through Google Scholar and refining our theoretical background until it met publication standards. The experience reinforced a simple truth: AI is not a substitute for scholarly rigour; it is a magnifier of it.

In the process, I also discovered that when one identifies gaps or errors in AI-generated outputs and explicitly alerts the system, it responds with greater caution and precision. It not only corrects itself but often proposes credible alternative sources through targeted search. Over time, I have learned to instruct AI during prompting to be careful, critical, and to verify every fact or reference it cites. This practice has consistently improved the quality and reliability of its responses, turning AI from a speculative assistant into a more disciplined research partner.

This pattern is not new. Each technological leap in the history of work has produced the same anxiety and the same blame. When calculators arrived, some accountants abandoned mental arithmetic. When Excel spread through offices, others stopped understanding the logic of their formulas. When search engines became ubiquitous, some students stopped reading beyond the first page of results. Every generation confronts a moment when a tool reveals that the real deficit lies not in technology but in human effort. Artificial intelligence is simply the latest example.

The responsible use of AI therefore depends on three habits that serious professionals already practise: clarity, verification, and complementarity. Clarity means knowing exactly what one is asking for: just as an economist designs a model with clear variables and assumptions, a user must frame a prompt with precision and boundaries. Verification requires treating every AI output as a hypothesis, not a conclusion, and testing it against credible data or literature. Complementarity is the understanding that AI is a collaborator, not a substitute. The most capable researchers use it to draft, refine, and challenge their thinking, while maintaining ownership of judgement and interpretation. Those who surrender that judgement end up automating their own ignorance.

Refusing to learn how to work with AI will not preserve professional integrity; it will only ensure obsolescence. Every major innovation, from the printing press to the spreadsheet, initially appeared to threaten expertise but ultimately expanded it for those who embraced it. What AI changes is the return on competence. It increases the productivity of skilled workers far more than that of the unskilled, widening the gap between the thoughtful and the thoughtless. In economic terms, it shifts the production function upward for those who know how to use it and flattens it for those who do not.

This has important implications for universities, firms, and public institutions. Rather than issuing blanket bans on AI, the real task is to integrate AI literacy into education, professional training, and policy practice. Students must learn to interrogate information generated by machines. Analysts must learn to audit AI-assisted reports before submission. Organisations must cultivate a culture where transparency about AI use is encouraged rather than concealed. Using AI is not unethical; misusing it is.

The Deloitte episode will not be the last. Other institutions will repeat it because they see AI as a shortcut rather than an instrument of discipline. Yet the lesson remains clear: AI is not a threat to competence; it is a test of it. The technology does not replace understanding; it exposes whether understanding exists in the first place. Those who master it will multiply their insight and efficiency; those who misuse it will multiply their mistakes.

In truth, artificial intelligence has simply revived an old economic principle: productivity gains follow learning. The faster we acquire the skills to use these tools well, the more valuable our human judgement becomes. Blaming the hammer for the bent nail may feel satisfying, but it changes nothing. The problem is not the hammer; it is the hand that never learned how to swing it. Every correction, every verified reference, every disciplined prompt is part of that learning curve. Each moment of alertness – when we question an output, verify a citation, or refine an instruction – makes both the user and the tool more intelligent. The technology will keep improving, but whether knowledge improves with it depends entirely on us.

Tuesday, 14 October 2025

 

When Schools Hold the Cards: How Information Asymmetry Is Hurting Our Children

By Richard Sebaggala (PhD)

Today, I have decided not to write about artificial intelligence, as I usually do in my weekly reflections on the economics of technology. This change of topic was prompted by watching the evening news on NTV. The story featured students in Mukono who fainted and cried after discovering they could not sit their UCE examinations because their school had not registered them, even though their parents had already paid the required fees. It was painful to watch young people who had worked hard for four years, now stranded on the day that was supposed to mark a major step in their education.

Unfortunately, this is not a new problem. Every examination season, similar stories emerge from different parts of the country. Some schools collect registration fees but fail to remit them to the Uganda National Examinations Board (UNEB). When the examinations begin, students find their names missing from the register. In many cases, the head teachers or directors responsible are later arrested, but that does little to help the students. By then, the exams are already underway, and the victims have lost an entire academic year. Parents lose their savings, and the education system loses public trust.

 

What is most troubling is how easily this could be prevented. Uganda has made progress in using technology to deliver public services. UNEB already allows students to check their examination results through a simple SMS system. If the same technology can instantly display a student’s grades after the exams, why can it not confirm a student’s registration before the exams? Imagine if every candidate could send an SMS reading “UCE STATUS INDEX NUMBER” to a UNEB shortcode and receive a message showing whether they are registered, their centre name, and the date their payment was received. If registration was missing, the student would be alerted early enough to follow up.
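
To show how little machinery such a lookup would need, here is a small illustrative sketch in Python. Everything in it is hypothetical: the message format, the registry fields and the reply wording are my assumptions, not an actual UNEB or telecom interface.

# Hypothetical sketch of the SMS status lookup proposed above; the message
# format, registry fields and reply text are assumptions for illustration only.

REGISTRY = {
    # index number -> (centre name, date payment was received); made-up records
    "U0001/501": ("Mukono Example High School", "2025-03-14"),
}

def handle_sms(message: str) -> str:
    """Parse a message like 'UCE STATUS U0001/501' and build a reply."""
    parts = message.strip().split()
    if len(parts) != 3 or parts[0].upper() != "UCE" or parts[1].upper() != "STATUS":
        return "Format: UCE STATUS <index number>"
    index_number = parts[2].upper()
    record = REGISTRY.get(index_number)
    if record is None:
        return (index_number + " is NOT registered for UCE. "
                "Please contact your school or UNEB immediately.")
    centre, paid_on = record
    return (index_number + " is registered at " + centre +
            ". Payment received on " + paid_on + ".")

print(handle_sms("UCE STATUS U0001/501"))
print(handle_sms("UCE STATUS U9999/000"))

The point of the sketch is not the code itself but how short it is: the hard part is institutional, namely keeping the registration database current and exposing it to students before the exams rather than after.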

Such a system would protect thousands of students from unnecessary loss and reduce the incentives for dishonest school administrators to exploit their informational advantage. In economic terms, this situation reflects a classic case of information asymmetry, where one party (the school) possesses critical information that the other parties (the parents and students) do not. This imbalance distorts decision-making and accountability, creating room for opportunistic behaviour and moral hazard. The most effective remedy is to restore information symmetry through transparency and timely access to verifiable data, enabling parents and students to make informed choices and hold institutions accountable.

The Ministry of Education and UNEB already have the basic tools to make this work. The registration database is digital, and the SMS platform for results is already in use. A simple update could link the two. The cost would be small compared to the harm caused each year by fraudulent registration practices. This would shift the system from reacting after harm has occurred to preventing harm before it happens.

Other institutions in Uganda have shown that such solutions work. The National Identification and Registration Authority allows citizens to check the status of their national ID applications by SMS. During the COVID-19 pandemic, the Ministry of Health used mobile phones to share health updates and collect data. Even savings groups and telecom companies send instant confirmations for every transaction. If a mobile money user can confirm a payment in seconds, surely a student should be able to confirm their examination registration.

 

This issue goes beyond technology. It is about governance and trust. When public institutions act only after problems have occurred, citizens lose confidence in them. But when they act early and give people access to information, trust begins to grow. An SMS registration system would be a simple but powerful way to show that the Ministry of Education and UNEB care about transparency and fairness as much as they care about performance. It would protect families from unnecessary loss and strengthen public confidence in the examination process.

As I watched those students in Mukono crying at their school gate, I kept thinking how easily their situation could have been avoided. A single text message could have told them months in advance that their registration had not been completed. They could have taken action, sought help, or transferred to another school. Instead, they found out when it was already too late.

Uganda does not need a new commission or an expensive reform to solve this. It only needs a small, practical innovation that gives students and parents control over information that directly affects their lives. Such steps would make the education system more transparent, efficient, and fair.

Although this article is not about artificial intelligence, it conveys a familiar lesson. Technology has little value without imagination and accountability. If we can use digital tools to issue results and manage national exams, we can also use them to ensure that every student who deserves to sit those exams has the opportunity to do so. True accountability begins before the exams, not after.

Saturday, 4 October 2025

 

Head in the Sand vs. Pragmatism Economics: Which Way Should We Face the AI Storm?

By Richard Sebaggala (PhD)

When societies encounter uncertainty, two habitual responses emerge. One is to deny or downplay change, hoping the future will resemble the past. This is what we might call the head-in-the-sand approach. The other is to accept uncertainty as inevitable and act: experiment, adapt, and build resilience. With Artificial Intelligence advancing rapidly, we once again stand at that crossroads.

AI is no longer speculative; it is already reshaping research, education, healthcare, industry, and governance. Yet its long-term impact remains ambiguous. Some predict modest disruption; others foresee transformation on the scale of the industrial revolution. What is certain is that AI is progressing faster than educational systems, regulatory frameworks, and labour markets can adapt. That widening gap is precisely where choice matters.

The head-in-the-sand approach treats AI as if it were just another incremental upgrade. We see this in universities that ban ChatGPT instead of teaching students to use it critically and responsibly. The message is: ignore it, hope it goes away. Graduates then enter the workforce without AI literacy, unprepared for an economy where such skills are increasingly essential. Governments that adopt this posture often relegate AI to ICT departments, focusing on broadband rollouts or cloud adoption while avoiding tougher economic questions: Who benefits when cognitive labour becomes abundant? How do we tax new forms of value? How do we prevent data monopolies? Countries that take this route risk becoming passive importers of AI technologies, unable to influence their trajectory or capture their benefits. When shocks come, they will feel them most acutely.

Pragmatism looks very different. It does not claim to know exactly how AI will unfold, but it acts as if preparation matters. Singapore, for instance, has committed more than S$1 billion (about US$778 million) over five years to AI compute, talent, and industrial development. Its AI research spending, relative to GDP, is estimated to be eighteen times higher than comparable US public investments. Nearly a third of Singaporean businesses now allocate more than US$1 million annually to AI initiatives, higher than the share in the UK or US. Yet even there, progress is uneven: only about 14% of firms have managed to scale AI enterprise-wide. The lesson is clear: investment is essential, but assimilation, governance, and skills are equally critical.

South Korea offers another example of pragmatism. The AI boom there has fuelled record semiconductor exports, with chip sales rising 22% year-on-year in September 2025, driven in part by global demand for AI infrastructure. This underscores how embedding in the AI supply chain allows a country not merely to consume imported systems but to capture significant value from their production.

Africa presents a contrasting picture. A Cisco–Carnegie Mellon white paper stresses the importance of building lifelong learning ecosystems that embed AI into vocational training, promote micro-credentials, and offer offline access in local languages. The World Economic Forum’s Future of Jobs 2025 report similarly highlights AI and ICT as major drivers of labour-market change, making reskilling strategies urgent. Yet most governments on the continent are still moving slowly. The danger of head-in-the-sand thinking is stark: Africa could remain a peripheral consumer of AI, locked out of influence and value capture. But the opportunity is also real: with pragmatic strategies, such as integrating AI into education, governance, health, agriculture, and finance, African economies could leapfrog, turning disruption into transformation.

Organisations face similar choices. Aon finds that 75% of firms now demand AI-related skills in their workforce, yet only 31% have adopted a coherent company-wide AI strategy. Meanwhile, Salesforce reports that more than four in five HR leaders are already planning or implementing AI reskilling programmes. The private sector feels the pressure: denial is no longer an option.

The difference between denial and pragmatism can be illustrated with a simple thought experiment. Imagine two countries facing the same AI storm. Country A bans AI in schools, neglects retraining, and ignores data governance. Five years later, its graduates are unemployable in AI-augmented sectors, its industries depend entirely on foreign systems, and inequality deepens. Country B, by contrast, integrates AI literacy into curricula, retrains workers, and builds regulatory frameworks. Five years on, its workforce is more adaptable, its firms capture value from AI, and it helps shape global rules. Both faced uncertainty, but only one built resilience.

The stakes are high. Economists Erik Brynjolfsson, Anton Korinek, and Ajay Agrawal have identified nine “grand challenges” for transformative AI: growth, innovation, income distribution, power concentration, geoeconomics, knowledge and information, safety and alignment, well-being, and transition dynamics. None of these challenges can be addressed by denial. Each requires pragmatic experimentation in policy, governance, and institutional adaptation.

The AI storm is already here. We do not know if it will hit like a hurricane or come slowly like steady rain, but we do know that failing to prepare is dangerous. Hiding from change may feel safe for a while, but it leaves us weak. A practical approach takes effort, patience, and resources, yet it gives us the strength to adjust, to find new chances, and to survive shocks. Think of two farmers who see dark clouds. One covers his eyes and hopes the rain will pass. The other repairs the roof and stores extra food. When the storm arrives, only the prepared farmer is left standing.

In the age of AI, pragmatism, not denial, is the path that leads to survival, and perhaps to thriving. History will not be kind to the ostrich. Time and again, the head-in-the-sand approach has proven disastrous. Industrial revolutions have always punished the complacent. Nations that dismissed early mechanisation in the nineteenth century fell behind those that industrialised. Companies that ignored the digital revolution of the 1990s, Kodak being the famous example, lost their dominance when they refused to adapt to digital photography. Even at the national level, countries that underestimated globalisation or financial innovation found themselves playing catch-up after crises had already swept through. In each of these cases, denial did not slow the storm; it only increased the damage when inevitable change arrived.

That is why I have personally chosen the pragmatic path in facing AI. As a researcher, I have watched AI transform my work by accelerating data analysis, enabling new forms of literature synthesis, and freeing time for deeper conceptual thinking. Rather than fearing it, I experiment with it daily, testing its strengths and identifying its limits. As a teacher, I refuse to banish AI from the classroom. Instead, I encourage students to engage with it critically, to learn how to use it responsibly, and to see it not as a substitute for human thought but as a tool for augmenting it. My conviction is simple: by embracing AI pragmatically, I can prepare my students not just to survive in an AI-shaped economy, but to lead within it.

The ostrich buries its head when danger approaches. The builder, by contrast, looks at the storm clouds and reinforces the roof. History has shown which one endures. The choice before us is no different today.

Sunday, 28 September 2025

 

The market for lemons in higher education: What fake bibliographies reveal about AI and credibility

 

By Richard Sebaggala (PhD)

 

In economics, one of the most enduring insights is that markets collapse when information asymmetries exist. George Akerlof’s “Market for Lemons” showed how, when buyers cannot distinguish good used cars from bad ones, they come to distrust the market as a whole. The credibility of the seller becomes crucial. Once trust has been eroded, assurances alone cannot restore it. The seller must show, in word and deed, that they know more than the buyer and use that knowledge responsibly. Education is also a market, even if it is not always seen in this light. Professors sell specialised knowledge, and students are the consumers. The same problem of information asymmetry now arises with the use of artificial intelligence in teaching.

A recent case at the National University of Singapore illustrates this problem. A professor assigned his literature students a reading list of six scholarly works. The students tried to locate the references but realised that none of them existed. The list had been compiled from a machine-generated bibliography. When confronted, the professor admitted that he “suspected” some of the material came from an AI tool. At first glance, the incident seemed insignificant, as no grades were affected and the exercise was labelled as “optional”. However, from an economic perspective, the consequences were serious. The relationship of trust between professor and student was weakened. Students realised that even those who set the rules for the use of AI did not always know how to use the technology responsibly.

The irony is clear. Professors often warn students against outsourcing their learning to AI, citing the danger of hallucinations, fake citations or shallow thinking. But the professor who published a reading list of non-existent works made the same mistake. When the gatekeeper is unable to distinguish fact from fiction in his own assignments, students rightly question his authority to penalise them for similar transgressions. The situation is similar to that of a car dealer who asks buyers to trust his inspections but fails to recognise defective vehicles. In the long term, such failures undermine the credibility of the entire market, in this case higher education itself.

Economists also speak of signalling. People and institutions send out signals to create credibility. A degree signals competence; a guarantee signals trust in a product. Professors signal expertise through carefully designed syllabi, well-constructed reading lists, and rigorous assessments. When students discover that a reading list is nothing more than an unchecked AI output, the signal is reversed. What should have conveyed care and competence instead conveys carelessness and over-reliance on poorly understood tools. The signal spreads: when a professor makes such a mistake, students will wonder how many others also rely on AI without educating themselves about it. If the experts appear confused, why should the rules they set be legitimate?

The economics of education depends on credibility. Students cannot directly test the quality of teaching the way they can test the durability of a chair or the performance of a phone. They have to trust their teachers. The value of their tuition, time and intellectual effort rests on the assumption that professors know what they are doing. This assumption is a fragile contract. When AI is abused, the contract comes under pressure. The information asymmetry is no longer just between professors and students, but also between the people and the technology that both groups are trying to control. If professors are unable to demonstrate their expertise, their advantage dwindles. The mentor runs the risk of becoming a middleman who could be displaced by the tools he or she does not know how to use.

This is why the debate about AI at universities cannot be reduced to prohibiting or controlling its use by students. The future will require AI skills and universities should recognise this. Professors have a responsibility not only to set rules, but also to model responsible use. This requires checking sources, cross-checking results, disclosing the use of AI and explaining its limitations and strengths. Just as central banks maintain market confidence by consistently demonstrating expertise, professors support the learning market by showing that they can use these tools with care and transparency.

The episode at NUS is more than just a minor embarrassment. It shows that the teaching profession risks losing credibility when those who are supposed to guide students appear unsure, careless or inconsistent in their use of technology. Students notice the double standard. They see that their own use of AI is strictly regulated while professors experiment without consequence. They hear over and over that critical thinking is important but are given assignments based on untested material. They are told that integrity is essential, yet they see the lines blurring. Economics tells us what happens as a result: trust declines and the value of exchanges between teachers and learners diminishes.

To avoid this outcome, universities need to advocate for AI literacy rather than bans. Professors should lead by example and signal through their practice that they can guide students responsibly. This is not just a technical issue, but one of institutional credibility. Without it, the education market risks the same loss of trust that afflicted Akerlof’s used-car market. Students may begin to question why they should trust their teachers at all when the signals are inconsistent and the asymmetry so obvious. When that happens, the value of higher education itself is diminished in a way that is far more damaging than a single incorrect reading list.

To think like an economist, one must shed illusions about authority and examine the incentives and signals at work. Professors cannot warn their students about AI while abusing it themselves. They need to understand that credibility is a currency in the marketplace of learning. Once squandered, it is very difficult to regain.

Friday, 19 September 2025

 

When more is not better: Rethinking rationality in the age of AI

By Richard Sebaggala (PhD)

Economists love simple assumptions, and one of the most enduring is the idea that more is better, or the non-satiation principle. More income, more production, more consumption: in our economics textbooks, a rational actor never rejects an additional unit of utility. By and large, this principle has proven to be reliable. Who would turn down more wealth, food or opportunity? However, there are exceptions. In monogamous marriages, “more” is rarely better and certainly not rational. Such humorous caveats aside, this assumption has informed much of our understanding of economic behaviour.

 

Economists refer to this principle as the monotonicity assumption, i.e. the idea that consumers always prefer more of a good over less. As Shon (2008) explains, monotonicity underpins key findings of microeconomics: utility maximisation takes individuals to the limit of their budget, and indifference curves cannot intersect. Even Gary Becker, who argued that monotonicity need not be explicitly assumed, concluded that rational agents behave as if “more is better” because they adjust their labour and consumption up to that point. In short, the discipline has long assumed that “more” is a safe rule of thumb for rational decision-making.

 

Artificial intelligence poses a challenge to this axiom. While most people recognise its potential, many are quick to emphasise the risks of overreliance, focusing on the negative impacts and overlooking the benefits that come from deeper engagement. My own experience is different. The more I use AI, the better I get at applying it to complex problems that once seemed unsolvable. It sharpens my thinking, increases my productivity and reveals patterns that were previously difficult to recognise. However, the critics are often louder. A recent essay in the Harvard Crimson warned that students use ChatGPT in ways that weaken human relationships: they look for recipes there instead of calling their mothers, they consult ChatGPT to complete assignments instead of going to office hours, and they even lean on ChatGPT to find companionship. For the author, any additional use of AI diminishes the richness of human interaction.

 

This view highlights a paradox. A technology that clearly creates abundance also creates hesitation. Economics offers a few explanations. One of them is diminishing marginal utility. The first experience with AI can be liberating as it saves time and provides new insights. However, with repeated use, there is a risk that the benefits will diminish if users accept the results uncritically. Another problem is that of externalities. For an individual, using ChatGPT for a task seems rational: faster and more efficient. However, if every student bypasses discussions with fellow students or avoids professors’ office hours, the community loses the opportunity for dialogue and deeper learning. The private benefit comes with a public price.

 

There is also the nature of the goods that are displaced. Economists often assume that goods are interchangeable, but AI shows the limits of this logic. It can reproduce an explanation or a recipe, but it cannot replace friendship, mentorship or the warmth of a shared conversation. These are relational goods whose value depends on their human origin. Finally, there is the issue of bounded rationality. Humans strive for more than efficiency; they seek belonging, trust and reflection. If students accept AI’s answers unquestioningly, what seems efficient in the short term undermines their judgement in the long term.

 

It is important to recognise these concerns, but it is equally important not to let them obscure the other side of the story. My own practice shows that the regular, deliberate use of AI does not lead to dependency, but to competence. The more I engage with it, the better I get at formulating questions, interpreting results and applying them to real-world problems. The time previously spent on routine work is freed up for higher-order thinking. In this sense, the increased use does not make me less thoughtful but allows me to focus my thoughts where they are most important. So, the paradox is not that more AI is harmful. The problem is unthinking use, which can crowd out the relational and cognitive goods we value. The solution lies in balance: using AI sufficiently to build capabilities while protecting spaces for human relationships and critical engagement.

 

The implications are far-reaching. If AI undermines reflection, we weaken human capital. If it suppresses interaction, we weaken social capital. Both are essential for long-term growth and social cohesion. However, if we use AI as a complement rather than a substitute, it can strengthen both. This is important not only at elite universities, but also in African classrooms where I teach. Here, AI could help close resource gaps and expand access to knowledge. But if students only see it as a shortcut, they will miss out on the deeper learning that builds resilience. Used wisely, however, AI can help unlock skills that our education systems have struggled to cultivate.

 

For this reason, I characterise my perspective as pragmatic. I do not ignore the risks, nor do I believe that technology alone guarantees progress. Instead, I recognise both sides: the fears of those who see AI undermining relationships, and the reality that regular, deliberate use has made me better at solving problems. The challenge for economists is to clarify what we mean by rationality. It is no longer enough to say that more is always better. Rationality in the age of AI requires attention to quality, depth and sustainability. We need to measure not only the efficiency of obtaining answers, but also the strength of the human and social capital we build in the process.

 

So yes, more is better, until it isn't. The most sensible decision today may be to put the machine aside and reach out to a colleague, a mentor or a friend. And when it's time to return to the machine, do so with sharper questions and clearer judgement. In this way, we can preserve the human while embracing the transformative. That, I believe, is how to think like an economist in the age of AI.

Sunday, 7 September 2025

 

Humans, Nature, and Machines: Will AI Create More Jobs Than It Replaces?

By Richard Sebaggala (PhD)

Economists have long debated whether new technologies create more jobs than they destroy. Each industrial revolution, from steam engines to electricity, sparked fears of mass unemployment, only for new industries and occupations to emerge. Artificial intelligence, however, feels different. It does not only automate physical tasks; it reaches into the cognitive space once thought uniquely human (Brynjolfsson & McAfee, 2014).

So far, the evidence suggests AI is not sweeping workers aside in large numbers. Instead, it is altering the composition of work by reshaping tasks rather than eliminating whole professions. Coders now refine AI-generated drafts instead of writing from scratch. Paralegals summarize less case law manually. Marketers polish content rather than produce the first draft. In this sense, AI resembles a new species entering an ecosystem. It does not destroy the entire environment at once but gradually reshapes niches and interactions (Acemoglu & Restrepo, 2019).

Where AI adds the most value is in partnership with people. In chess, teams of humans and AI working together often beat both the best human players and the best AI systems. The same pattern is emerging in business, law, and research: AI accelerates analysis and routine drafting, while humans provide judgment, context, and values (Big Think, 2025). I have seen this in my own work as a researcher. Recently, when reviewing a colleague’s draft paper, I began by reading it closely and noting my own independent observations, drawing on my research experience. I realized that the paper listed too many objectives across the abstract, introduction, and conceptual framework; that the moderating role was not reflected in the title but was smuggled into the theoretical discussion and methodology; and that the case-study design did not align with the quantitative approach. These were my own reflections, grounded in my reading. Only afterwards did I turn to ChatGPT, asking it to check the validity of my comments, highlight further weaknesses, and frame the feedback in a structured way. The model confirmed my insights, sharpened the phrasing, and suggested revisions. In that process, the AI acted as a sparring partner rather than a substitute. My reasoning stayed intact, but my communication became clearer. This kind of human–machine cooperation illustrates why complementarities matter more than simple substitution.

I have also seen this dynamic in data analysis. When I begin with clear objectives and a dataset, AI tools can be very useful as a starting point. They can suggest methods for analysis, highlight possible weaknesses, and even recommend additional checks such as sensitivity tests or robustness tests. Some of these insights might have taken me much longer to discover on my own, and in some cases I might not have uncovered them at all. Yet the value lies not in letting the tool run the entire analysis, but in using its suggestions to sharpen my own approach. I have discovered that if you are proficient in data analysis using Stata, as I am, you can allow AI tools such as ChatGPT, Avidnote, or Julius to run analysis in Python, while staying in control by asking the AI to generate Stata do-files for each analysis. Since I already have the data, I can validate the results in Stata. The efficiency gains are significant: less time spent on routine coding, more time to ask deeper questions, and occasional exposure to advanced methods that the AI suggests from its wider knowledge base.
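
As a small illustration of that routine, and nothing more, the sketch below runs a regression in Python and then writes a Stata-readable copy of the same data so the estimates can be re-run and compared from a do-file. The file and variable names are placeholders I have assumed, not any actual dataset.

# Illustrative sketch of the Python-first, Stata-validated routine described above;
# file and variable names are placeholders, not a real dataset.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_data.csv")  # assumed dataset with the variables below

# Run the specification in Python first, with robust standard errors as one
# simple robustness choice.
model = smf.ols("outcome ~ treatment + covariate1 + covariate2", data=df).fit(
    cov_type="HC1"
)
print(model.summary())

# Then keep a Stata copy of the exact estimation sample, so the same regression
# can be re-run from a do-file in Stata and the coefficients compared.
df.to_stata("analysis_data_check.dta", write_index=False)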

 

Nature reinforces the point. Disruption is rarely the end of the story. When new species enter an ecosystem, some niches disappear, but others open. Grasslands gave way to forests. Forests gave way to cultivated fields and cities. The same is true of labor markets. AI is closing some roles but creating others such as prompt engineers, AI auditors, ethicists, and data curators. The central economic question is not whether niches vanish, but whether workers are supported in adapting to new ones. Without adaptation, extinction occurs not of species, but of livelihoods (Acemoglu & Restrepo, 2019).

Some commentators imagine a post-work society, where intelligent machines carry most productive effort and people focus on creativity, care, or leisure. Keynes (1930) once speculated that technological progress would eventually reduce the working week to a fraction of what it was. More recent writers describe this possibility as cognitarism, an economy led by cognitive machines. Yet history shows that transitions are rarely smooth. Without preparation, displacement can outpace creation. That is why policy choices matter. Retraining programs, investments in AI literacy, experiments with shorter workweeks, and social safety nets can soften shocks and broaden opportunity. Just as ecosystems survive through diversity and resilience, economies need deliberate institutions to spread the benefits of transformation.

AI, then, is powerful but not destiny. Like natural forces, it can be guided, shaped, and managed. The real risk lies not in the technology itself but in neglecting to align human institutions, social values, and machine capabilities. If we approach AI as gardeners who prune, plant, and tend, we can cultivate a labor ecosystem that grows new abundance rather than fear. If we fail, the outcome may be scarcity and division.

History suggests that technology does not eliminate work; it transforms it. The challenge today is to ensure that transformation is inclusive and sustainable. Human ingenuity, like nature, adapts under pressure. Machines are the newest force in that story. The question is not whether AI will take all jobs, but whether we will design the future of work or leave it to evolve without guidance. My own practice of drafting first and using ChatGPT second reflects the broader lesson: societies must take the lead, with AI as an assistant, not a replacement.

References

Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3–30. https://doi.org/10.1257/jep.33.2.3

Big Think. (2025, September). Will AI create more jobs than it replaces? Big Think. https://bigthink.com/business/will-ai-create-more-jobs-than-it-replaces/

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.

Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 358–373). Macmillan.