Sunday, 10 August 2025

What the Calculator Panic of the 1980s Can Teach Us About AI Today

By Richard Sebaggala


In 1986, a group of American math teachers took to the streets holding signs that read, “Ban Calculators in Classrooms.” They feared that these small electronic devices would strip students of the ability to perform basic calculations. If a machine could handle addition, subtraction, multiplication, and division, what incentive would students have to learn those skills at all? At the time, the concern felt genuine and even reasonable.

With the benefit of hindsight, the story unfolded quite differently. Within a decade, calculators were not only accepted but actively encouraged in classrooms across many countries. Standardized exams began permitting their use, textbook problems were redesigned to incorporate them, and teachers found that students could tackle more complex, multi-step problems once freed from the grind of manual computation. Far from destroying mathematical thinking, calculators shifted the focus toward problem-solving, modeling, and a deeper grasp of underlying concepts.


Almost forty years later, the same conversation is happening, but the technology has changed. Artificial intelligence tools such as ChatGPT, Avidnote, and Gemini can now generate essays, solve problems, and summarize complex ideas in seconds. Today's concern is familiar: that students will stop thinking for themselves because the machine can do the thinking for them. The parallel with the calculator debate is striking. In the 1980s, the worry was that calculators would erase basic arithmetic skills; today, it is that AI will erode the capacity for critical and independent thought. In both cases, the tool itself is not the real problem. What matters is how it is introduced, how it is used, and how deeply it is woven into the learning process.

In economics, this recurring pattern is well understood through the study of general-purpose technologies: innovations such as electricity, the internet, and now AI, whose applications cut across multiple industries and fundamentally alter productivity potential. History shows that these technologies almost always meet initial resistance because they unsettle existing skills, workflows, and even the identities of entire professions. Yet, once institutions adjust and complementary innovations emerge, such as new teaching methods, updated regulations, or redesigned curricula, the long-run productivity gains become undeniable. In Africa, the mobile phone offers a clear example. Initially dismissed as a luxury, it became a platform for innovations like mobile money, which transformed financial inclusion, market access, and small business operations across the continent. The calculator did not diminish mathematical thinking; it reshaped it, shifting effort from mechanical tasks to higher-order reasoning. AI holds the same potential, but only if education systems are willing to reimagine how learning is structured around it.


When calculators entered the classroom, they prompted a shift in teaching and assessment. Teachers began creating problems where the calculator was useful, but understanding was still essential. Tests required not only the correct answer but also evidence of the reasoning behind it. The arrival of AI demands a similar change. Students can be taught to use AI for tasks such as brainstorming, structuring arguments, or refining drafts, but they should still be held accountable for evaluating and improving the output. Assessments can reward transparency in how AI is used and the quality of judgment applied to its suggestions.

This is where metacognition becomes essential. Metacognition is simply thinking about one's own thinking. In economics, we often speak of comparative advantage: doing what you do best while letting others handle the rest. AI shifts the boundaries of that calculation. The risk is that by outsourcing too much of our cognitive work, we weaken the very skills we need to make sense of the world. If universities fail to train students to integrate AI into their own reasoning, graduates may not only face economic disadvantages but may also experience a deeper sense of psychological displacement, feeling out of place in settings where AI competence is assumed.

Metacognition keeps us in control. It allows us to question the assumptions behind AI-generated answers, spot gaps in reasoning, align outputs with our goals, and know when to override automation in favor of deeper understanding. It is like applying the economist’s habit of examining incentives, not to markets, but to our own minds and to the machine’s mind.

Consider two graduate research students assigned to write a literature review. Both have access to the same AI tools. The first pastes the topic into the system, accepts the generated text without question, and drops it straight into the draft. The result is neat and coherent, with plenty of references, but some of the citations are fabricated, important regional studies are missing, and the structure is generic. Because the student never interrogates the output, the gaps remain. The supervisor flags the work as shallow and overly dependent on AI.

The second student uses AI to produce an initial outline and a list of possible sources. They then ask the tool follow-up questions: "What is this evidence based on? Are there African studies on the subject? Which perspectives might be missing?" They verify each reference, read key sources, and restructure the review to balance global theory with local findings. The final paper is richer, more original, and meets the highest academic standards. The difference lies in metacognition, not only thinking about one's own reasoning but also critically evaluating the machine's reasoning. Over time, this approach strengthens analytical skills and turns AI into a genuine thinking partner rather than a shortcut.

The real opportunity is to treat AI as a thinking accelerator. It can take over repetitive work like drafting, summarizing, and running quick computations so that human effort can be directed toward framing the right questions, challenging assumptions, and making judgments that depend on values and context. History shows that those who learn to work with transformative tools, rather than resist them, gain the advantage. The calculator era offers a clear lesson for our time: instead of banning the tool, or fixating on who has used it and who has not, we should teach the skill of using it wisely and of thinking about our thinking while we do so.

Monday, 4 August 2025

"You're Safe!": What This Joke Really Says About AI and the Future of Education

By Richard Sebaggala

Conversations about AI have become increasingly divided. Some see it as a breakthrough that will transform every sector, education included. Others still treat it as overblown or irrelevant to their day-to-day work. Most people are simply exhausted by the constant updates, ethical dilemmas, and uncertainty. This split has left many universities stuck, circling around the topic without moving forward in any meaningful way.

A recent WhatsApp exchange I saw was both humorous and unsettling: "Artificial intelligence cannot take your job if your job has never needed intelligence." The reply was, "I don't understand..." and the answer came back, "You're safe!" The joke's quiet truth is that if your work relies on knowledge, judgment, and problem-solving, then AI is already capable of doing parts of it. And the parts it replaces may be the very ones that once gave your job value.

For many of us, including lecturers, researchers, and analysts, our core productivity has come from how efficiently we produce or communicate knowledge. But AI is changing the way that knowledge is generated and shared. Tasks like reviewing literature, coding data, summarizing papers, and grading assignments are no longer things only humans can do. Tools like Elicit, Avidnote, and GPT-based platforms now handle many of these tasks faster and, in some cases, better.

Some universities are already moving ahead. Arizona State University has partnered with OpenAI to embed ChatGPT into coursework, research, and even administrative work. The University of Helsinki’s "Elements of AI" course has attracted learners from around the world and built a new foundation for digital literacy. These aren't theoretical exercises; they're practical steps that show what's possible when institutions stop hesitating.

I’ve seen individual lecturers using ChatGPT and Avidnote to draft student feedback, which frees up time for more direct engagement. Others are introducing AI tools like Perplexity and Avidnote to help students refine their research questions and build better arguments. These are not just efficiency hacks; they’re shifts in how academic work is done.

Yet many universities remain stuck in observation mode. Meanwhile, the labour market is already changing. Companies like Klarna and IBM have openly said that AI is helping them reduce staffing costs. When AI can write reports, summarise meetings, or process data in seconds, the demand for certain types of graduate jobs will shrink. If universities fail to update what they offer, the value of a degree may start to fall. We're already seeing signs of a skills revaluation in the market.

This shift isn’t without complications. AI also brings new problems that institutions can’t ignore. Equity is one of them. Access to reliable AI tools and internet connections is far from universal. If only well-funded institutions can afford high-quality access and training, the digital divide will only widen. Universities need to think about how they support all learners, not just the privileged few.

There’s also the question of academic integrity. If students can complete assignments using generative AI, then we need to rethink how we assess learning. What kinds of skills are we really measuring? It’s time to move away from assignments that test simple recall and toward those that build judgment, ethical reasoning, and the ability to engage with complexity.

Data privacy matters too. Many AI platforms store and learn from user input. That means student data could be exposed if universities aren’t careful. Before rolling out AI tools at scale, institutions need clear, transparent policies for how data is collected, stored, and protected.

And then there’s bias. AI tools reflect the data they’re trained on, and that data often carries hidden assumptions. Without proper understanding, students may mistake bias for truth. Educators have a role to play in teaching not just how to use these tools, but how to question them.

These are serious concerns, but they are not reasons to stall. They are reasons to move forward thoughtfully. Just as we had to learn how to teach with the internet and digital platforms, we now need to learn how to teach with AI. Delaying action only increases the cost of catching up later.

What matters most now is how we prepare students for the labour market they’re entering. The safest jobs will be those that rely on adaptability, creativity, and ethical thinking, traits that are harder to automate. Routine tasks will become commodities. What will set graduates apart is their ability to ask good questions, work across disciplines, and collaborate effectively with technology.

These changes are no longer hypothetical. They’re happening. Institutions that embrace this moment will continue to be relevant. Those that don’t may struggle to recover their footing when the changes become impossible to ignore.

Universities must lead, not lag. The time for think pieces and committee formation has passed. We need curriculum updates, collaborative investment in training, and national plans that ensure no institution is left behind. The early adopters will shape the new rules. Everyone else will follow or be left out.

That WhatsApp joke made us laugh, but its warning was real. AI is changing how the world defines intelligence and value. If education wants to stay meaningful, it has to change with it. We cannot afford to wait.