Sunday, 28 September 2025

 

The market for lemons in higher education: What fake bibliographies reveal about AI and credibility

 

By Richard Sebaggala (PhD)


 

In economics, one of the most enduring insights is that markets collapse when information asymmetries exist. George Akerlof’s “Market for Lemons” showed how buyers who cannot distinguish good used cars from bad ones come to distrust the market as a whole. The credibility of the seller becomes crucial. Once trust has been eroded, assurances alone cannot restore it; the seller must show, in word and deed, that they know more than the buyer and use that knowledge responsibly. Education is also a market, even if it is not always seen in this light. Professors sell specialised knowledge, and students are the consumers. The same problem of information asymmetry now arises with the use of artificial intelligence in teaching.

A recent case at the National University of Singapore illustrates this problem. A professor assigned his literature students a reading list of six scholarly works. The students tried to locate the references but realised that none of them existed. The list had been compiled from a machine-generated bibliography. When confronted, the professor admitted that he “suspected” some of the material came from an AI tool. At first glance, the incident seemed insignificant: no grades were affected and the exercise was labelled as “optional”. From an economic perspective, however, the consequences were serious. The relationship of trust between professor and student was weakened. Students realised that even those who set the rules for the use of AI did not always know how to use the technology responsibly.

The irony is clear. Professors often warn students against outsourcing their learning to AI, citing the danger of hallucinations, fake citations or shallow thinking. But the professor who published a reading list of non-existent works made the same mistake. When the gatekeeper is unable to distinguish fact from fiction in his own assignments, students rightly question his authority to penalise them for similar transgressions. The situation is similar to that of a car dealer who asks buyers to trust his inspections but fails to recognise defective vehicles. In the long term, such failures undermine the credibility of the entire market, in this case higher education itself.

Economists also speak of signalling. People and institutions send signals to establish credibility. A degree signals competence; a guarantee signals confidence in a product. Professors signal expertise through carefully designed syllabi, well-constructed reading lists, and rigorous assessments. When students discover that a reading list is nothing more than unchecked AI output, the signal is reversed. What should have conveyed care and competence instead conveys carelessness and over-reliance on poorly understood tools. The damage spreads: when one professor makes such a mistake, students wonder how many others also rely on AI without understanding it. If the experts appear confused, why should the rules they set be legitimate?

The economics of education depends on credibility. Students cannot directly test the quality of teaching the way they can test the durability of a chair or the performance of a phone. They have to trust their teachers. The value of their tuition, time and intellectual effort rests on the assumption that professors know what they are doing. This assumption is a fragile contract. When AI is misused, the contract comes under pressure. The information asymmetry is no longer just between professors and students, but also between people and the technology that both groups are trying to control. If professors are unable to demonstrate their expertise, their advantage dwindles. The mentor risks becoming a middleman, displaced by tools he or she does not know how to use.

This is why the debate about AI at universities cannot be reduced to prohibiting or controlling its use by students. The future will require AI skills, and universities should recognise this. Professors have a responsibility not only to set rules but also to model responsible use. This requires checking sources, cross-checking results, disclosing the use of AI, and explaining its limitations and strengths. Just as central banks maintain market confidence by consistently demonstrating expertise, professors support the learning market by showing that they can use these tools with care and transparency.

The episode at NUS is more than just a minor embarrassment. It shows that the teaching profession risks losing credibility when those who are supposed to guide students appear unsure, careless or inconsistent in their use of technology. Students notice the double standard. They see that their own use of AI is strictly regulated while professors experiment without consequence. They hear over and over that critical thinking is important, yet are given assignments based on unverified material. They are told that integrity is essential, yet they see the lines blurring. Economics tells us what happens next: trust declines and the value of the exchange between teachers and learners diminishes.

To avoid this outcome, universities need to advocate for AI literacy rather than bans. Professors should lead by example and signal through their practice that they can guide students responsibly. This is not just a technical issue, but one of institutional credibility. Without it, the education market risks the same loss of trust that befell Akerlof’s used-car market. Students may begin to question why they should trust their teachers at all when the signals are inconsistent and the asymmetry so obvious. When that happens, the value of higher education itself is diminished in a way that is far more damaging than a single incorrect reading list.

To think like an economist, one must shed illusions about authority and examine the incentives and signals at work. Professors cannot warn their students about AI while misusing it themselves. They need to understand that credibility is a currency in the marketplace of learning. Once squandered, it is very difficult to regain.

Friday, 19 September 2025

 

When more is not better: Rethinking rationality in the age of AI

By Richard Sebaggala (PhD)

Economists love simple assumptions, and one of the most enduring is the idea that more is better, or the non-satiation principle. More income, more production, more consumption: in our economics textbooks, a rational actor never rejects an additional unit of utility. By and large, this principle has proven to be reliable. Who would turn down more wealth, food or opportunity? However, there are exceptions. In monogamous marriages, “more” is rarely better and certainly not rational. Such humorous caveats aside, this assumption has informed much of our understanding of economic behaviour.

 

Economists refer to this principle as the monotonicity assumption: the idea that consumers always prefer more of a good to less. As Shon (2008) explains, monotonicity underpins key results of microeconomics: utility maximisation pushes individuals to the limit of their budget, and indifference curves cannot intersect. Even Gary Becker, who argued that monotonicity need not be explicitly assumed, concluded that rational agents behave as if “more is better” because they adjust their labour and consumption up to that point. In short, the discipline has long treated “more” as a safe rule of thumb for rational decision-making.
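In standard textbook notation (a generic restatement for illustration, not a formula taken from Shon or Becker), the assumption and its main implication read:

% Monotonicity (strong form): a bundle x with at least as much of every good as y,
% and strictly more of at least one, is strictly preferred.
x \geq y, \; x \neq y \;\Longrightarrow\; u(x) > u(y)

% Implication: at the utility-maximising bundle x^*, the budget constraint binds,
% i.e. the consumer spends all of income m at prices p.
x^* \in \arg\max_{x} u(x) \quad \text{s.t.} \quad p \cdot x \leq m
\;\Longrightarrow\; p \cdot x^* = m

The non-crossing of indifference curves then follows from monotonicity combined with transitivity of preferences.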

 

Artificial intelligence poses a challenge to this axiom. While most people recognise its potential, many are quick to emphasise the risks of overreliance, focusing on the negative impacts and overlooking the benefits that come from deeper engagement. My own experience is different. The more I use AI, the better I get at applying it to complex problems that once seemed unsolvable. It sharpens my thinking, increases my productivity and reveals patterns that were previously difficult to recognise. The critics, however, are often louder. A recent essay in the Harvard Crimson warned that students use ChatGPT in ways that weaken human relationships: they ask it for recipes instead of calling their mothers, consult it on assignments instead of going to office hours, and even lean on it for companionship. For the author, any additional use of AI diminishes the richness of human interaction.

 

This view highlights a paradox. A technology that clearly creates abundance also creates hesitation. Economics offers a few explanations. One is diminishing marginal utility. The first experience with AI can be liberating, as it saves time and provides new insights. With repeated use, however, the benefits risk diminishing if users accept the results uncritically. Another is externalities. For an individual, using ChatGPT for a task seems rational: it is faster and more efficient. But if every student bypasses discussions with fellow students or avoids professors’ office hours, the community loses opportunities for dialogue and deeper learning. The private benefit comes with a public price.
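Stated compactly (a generic formulation for illustration only, not a calibrated model), the two mechanisms are:

% Diminishing marginal utility of AI use q: each additional use helps,
% but by less than the previous one.
u'(q) > 0, \qquad u''(q) < 0

% Negative externality: each student chooses q_p where private marginal benefit
% equals marginal cost, ignoring the external cost E'(q) > 0 borne by the
% learning community; the socially optimal level q_s is therefore lower.
MB(q_p) = MC(q_p), \qquad MB(q_s) - E'(q_s) = MC(q_s) \;\Longrightarrow\; q_s < q_p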

 

There is also the nature of the goods that are displaced. Economists often assume that goods are interchangeable, but AI shows the limits of this logic. It can reproduce an explanation or a recipe, but it cannot replace friendship, mentorship or the warmth of a shared conversation. These are relational goods whose value depends on their human origin. Finally, there is the issue of bounded rationality. Humans strive for more than efficiency; they seek belonging, trust and reflection. If students accept AI’s answers unquestioningly, what seems efficient in the short term undermines their judgement in the long term.

 

It is important to recognise these concerns, but it is equally important not to let them obscure the other side of the story. My own practice shows that regular, deliberate use of AI leads not to dependency but to competence. The more I engage with it, the better I get at formulating questions, interpreting results and applying them to real-world problems. The time previously spent on routine work is freed up for higher-order thinking. In this sense, increased use does not make me less thoughtful; it allows me to focus my thinking where it matters most. The paradox, then, is not that more AI is harmful. The problem is unthinking use, which can crowd out the relational and cognitive goods we value. The solution lies in balance: using AI enough to build capability while protecting space for human relationships and critical engagement.

 

The implications are far-reaching. If AI undermines reflection, we weaken human capital. If it suppresses interaction, we weaken social capital. Both are essential for long-term growth and social cohesion. However, if we use AI as a complement rather than a substitute, it can strengthen both. This is important not only at elite universities, but also in African classrooms where I teach. Here, AI could help close resource gaps and expand access to knowledge. But if students only see it as a shortcut, they will miss out on the deeper learning that builds resilience. Used wisely, however, AI can help unlock skills that our education systems have struggled to cultivate.

 

For this reason, I characterise my perspective as pragmatic. I do not ignore the risks, nor do I believe that technology alone guarantees progress. Instead, I recognise both sides: the fears of those who see AI undermining relationships, and the reality that regular, deliberate use has made me better at solving problems. The challenge for economists is to clarify what we mean by rationality. It is no longer enough to say that more is always better. Rationality in the age of AI requires attention to quality, depth and sustainability. We need to measure not only the efficiency with which we obtain answers, but also the strength of the human and social capital we build in the process.

 

So yes, more is better, until it isn't. The most sensible decision today may be to put the machine aside and reach out to a colleague, a mentor or a friend. And when it's time to return to the machine, do so with sharper questions and clearer judgement. In this way, we can preserve the human while embracing the transformative. That, I believe, is how to think like an economist in the age of AI.

Sunday, 7 September 2025

 

Humans, Nature, and Machines: Will AI Create More Jobs Than It Replaces?

By Richard Sebaggala (PhD)

Economists have long debated whether new technologies create more jobs than they destroy. Each industrial revolution, from steam engines to electricity, sparked fears of mass unemployment, only for new industries and occupations to emerge. Artificial intelligence, however, feels different. It does not only automate physical tasks; it reaches into the cognitive space once thought uniquely human (Brynjolfsson & McAfee, 2014).

So far, the evidence suggests AI is not sweeping workers aside in large numbers. Instead, it is altering the composition of work by reshaping tasks rather than eliminating whole professions. Coders now refine AI-generated drafts instead of writing from scratch. Paralegals summarize less case law manually. Marketers polish content rather than produce the first draft. In this sense, AI resembles a new species entering an ecosystem. It does not destroy the entire environment at once but gradually reshapes niches and interactions (Acemoglu & Restrepo, 2019).

Where AI adds the most value is in partnership with people. In chess, teams of humans and AI working together often beat both the best human players and the best AI systems. The same pattern is emerging in business, law, and research: AI accelerates analysis and routine drafting, while humans provide judgment, context, and values (Big Think, 2025). I have seen this in my own work as a researcher. Recently, when reviewing a colleague’s draft paper, I began by reading it closely and noting my own observations, drawing on my research experience. I realized the paper listed too many objectives across the abstract, introduction, and conceptual framework; the moderating role was absent from the title but smuggled into the theoretical discussion and methodology; and the case-study design did not align with the quantitative approach. These were my own reflections, grounded in my reading. Only afterwards did I turn to ChatGPT, asking it to check the validity of my comments, highlight further weaknesses, and frame the feedback in a structured way. The model confirmed my insights, sharpened the phrasing, and suggested revisions. In that process, the AI acted as a sparring partner rather than a substitute. My reasoning stayed intact, but my communication became clearer. This kind of human–machine cooperation illustrates why complementarities matter more than simple substitution.

I have also seen this dynamic in data analysis. When I begin with clear objectives and a dataset, AI tools can be very useful as a starting point. They can suggest methods of analysis, highlight possible weaknesses, and even recommend additional checks such as sensitivity or robustness tests. Some of these insights might have taken me much longer to discover on my own, and some I might not have uncovered at all. Yet the value lies not in letting the tool run the entire analysis, but in using its suggestions to sharpen my own approach. I have found that if you are proficient in data analysis using Stata, as I am, you can let AI tools such as ChatGPT, Avidnote, or Julius run the analysis in Python while staying in control by asking the AI to generate Stata do-files for each step. Since I already have the data, I can validate the results in Stata. The efficiency gains are significant: less time spent on routine coding, more time to ask deeper questions, and occasional exposure to advanced methods that the AI suggests from its wider knowledge base.
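To make that workflow concrete, here is a minimal sketch; the file name, variables, and model specification are hypothetical placeholders rather than an example from my own research. The regression is estimated in Python, and a companion Stata do-file with the same specification is written out so the result can be re-checked independently.

# Minimal sketch of the workflow described above; "survey_data.csv" and the
# variables wage, education, and experience are hypothetical placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Load the dataset the researcher already has.
df = pd.read_csv("survey_data.csv")

# Python-side estimate: OLS with heteroskedasticity-robust standard errors.
model = smf.ols("wage ~ education + experience", data=df).fit(cov_type="HC1")
print(model.summary())

# Companion Stata do-file with the same specification, for independent validation.
do_file = """\
import delimited survey_data.csv, clear
regress wage education experience, vce(robust)
"""
with open("validate_in_stata.do", "w") as f:
    f.write(do_file)

Stata’s vce(robust) option and the HC1 covariance in statsmodels apply the same correction, so the two sets of standard errors should agree up to rounding; any discrepancy is a signal to dig deeper before trusting either output.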

 

Nature reinforces the point. Disruption is rarely the end of the story. When new species enter an ecosystem, some niches disappear, but others open. Grasslands gave way to forests. Forests gave way to cultivated fields and cities. The same is true of labor markets. AI is closing some roles but creating others such as prompt engineers, AI auditors, ethicists, and data curators. The central economic question is not whether niches vanish, but whether workers are supported in adapting to new ones. Without adaptation, extinction occurs not of species, but of livelihoods (Acemoglu & Restrepo, 2019).

Some commentators imagine a post-work society, where intelligent machines carry most productive effort and people focus on creativity, care, or leisure. Keynes (1930) once speculated that technological progress would eventually reduce the working week to a fraction of what it was. More recent writers describe this possibility as cognitarism, an economy led by cognitive machines. Yet history shows that transitions are rarely smooth. Without preparation, displacement can outpace creation. That is why policy choices matter. Retraining programs, investments in AI literacy, experiments with shorter workweeks, and social safety nets can soften shocks and broaden opportunity. Just as ecosystems survive through diversity and resilience, economies need deliberate institutions to spread the benefits of transformation.

AI, then, is powerful but not destiny. Like natural forces, it can be guided, shaped, and managed. The real risk lies not in the technology itself but in neglecting to align human institutions, social values, and machine capabilities. If we approach AI as gardeners who prune, plant, and tend, we can cultivate a labor ecosystem that grows new abundance rather than fear. If we fail, the outcome may be scarcity and division.

History suggests that technology does not eliminate work; it transforms it. The challenge today is to ensure that transformation is inclusive and sustainable. Human ingenuity, like nature, adapts under pressure. Machines are the newest force in that story. The question is not whether AI will take all jobs, but whether we will design the future of work or leave it to evolve without guidance. My own practice of drafting first and using ChatGPT second reflects the broader lesson: societies must take the lead, with AI as an assistant, not a replacement.

References

Acemoglu, D., & Restrepo, P. (2019). Automation and new tasks: How technology displaces and reinstates labor. Journal of Economic Perspectives, 33(2), 3–30. https://doi.org/10.1257/jep.33.2.3

Big Think. (2025, September). Will AI create more jobs than it replaces? Big Think. https://bigthink.com/business/will-ai-create-more-jobs-than-it-replaces/

Brynjolfsson, E., & McAfee, A. (2014). The second machine age: Work, progress, and prosperity in a time of brilliant technologies. W. W. Norton & Company.

Keynes, J. M. (1930). Economic possibilities for our grandchildren. In Essays in persuasion (pp. 358–373). Macmillan.