AI in Education: Shifting from Policing to Partnering
By Richard Sebaggala
I recently gave a keynote address at the University of South Africa on the transformative power of AI for online and distance learning. The Institute for Open and Distance Learning (IODL) had invited me to explore these possibilities at its symposium, “Growing Excellence in Teaching, Learning and Research in Open Distance and e-Learning: Learning Analytics and Post-graduate Research”, but the real conversation began in the Q&A session. As we discussed the AI revolution, a deep-seated fear surfaced: students "cheating" with AI. What struck me was that this fear stemmed not from ethical concerns but from a lack of understanding of AI's capabilities and how to use them responsibly.
This pattern is not unique; I have encountered the same fear-driven reaction back home in Uganda. The discussion about AI in higher education is fuelled by panic rather than a thoughtful pedagogical approach, and this has an unfair impact on students trying to learn responsibly. In the United States, some students reportedly record themselves writing essays just to prove that the essays were not created by ChatGPT, after faulty AI detectors such as Turnitin mislabelled their work. A Ugandan student recently confided that after discovering how generative tools can support her learning, she could no longer ignore them. Her university now warns that even “10 per cent” AI help amounts to misconduct. Honest learners are forced into a ritual of self-monitoring simply because the machines can no longer recognise whose words belong to whom.
The academic world has faced similar fears before. We used to worry that tools like SPSS or Stata would jeopardise the integrity of research; today, no one asks whether "Stata wrote the regression." We judge the researcher's interpretation, not the software's calculations. Generative AI is no different. True intellectual labour has never consisted of typing out every word by hand. It lies in judgement, synthesis, and the courage to stand by one's own conclusions. When the AI writes a paragraph that you refine, contextualise, and critique, it is still unmistakably your work.
Despite this historical precedent, a disproportionate amount of institutional energy has gone into policing rather than pedagogy since ChatGPT debuted in late 2022. Publishers are trying to perfect their detection algorithms, universities are imposing blanket bans, and policy memos often read like prosecutorial briefs. Meanwhile, the very academics who need AI skills the most, especially in the Global South, are left without adequate training or support. In short, we have prioritised building walls of surveillance over building bridges of understanding.
A healthier equilibrium is possible. Teachers need practical AI skills so they can model responsible use of AI instead of criminalising it. Guidelines should distinguish between unethical outsourcing and legitimate support, just as style guides already distinguish between plagiarism and citation. Assessments can encourage reflection and transparency by asking students to outline their process, explain their prompts, and annotate the revisions they made. Detectors, if used at all, should open a dialogue, not pass a guilty verdict.
Above all, we need to remember the real purpose of education: to prepare learners for tomorrow's economy, not to preserve outdated work processes. The future job market will require students to have critical AI skills, adaptive problem-solving abilities, and the ethical judgement to navigate and use artificial intelligence effectively. The economist in me sees a simple opportunity cost. The time spent on a witch hunt could be invested in training students to use AI for literature reviews, exploratory coding, data visualisation, or the messy first draft that every writer knows only too well. Done right, AI lowers the fixed costs of scientific production and frees up scarce cognitive resources for deeper inquiry, a productivity gain that every economist should welcome.
The fundamental question is not whether students will use AI. They are already doing so, and that will not change. The real question is whether universities will choose to rule by control or lead by empowerment. If we cling to fear, we will stifle innovation and undermine trust. If we embrace a coaching mentality, we will train thinkers who can collaborate with machines without surrendering their agency.
Education has outlived calculators, spreadsheets, and statistical software. It will also survive generative AI, provided we shift our focus from detection to development, from panic to pedagogy. My own PhD supervisor, Professor Roy Mersland, exemplifies this open approach. His encouragement to explore, test, and appreciate AI tools was truly inspiring, and it fuelled my own passion for learning and teaching AI to others. That would not have been possible had he taken the policing, punitive stance of many professors. The decision we make now will determine whether we navigate the age of AI with suspicion or with the power of possibility, and whether future generations write with fear or with the confidence to think like innovators in a world where intelligence is increasingly a common commodity.