"You're Safe!": What This Joke Really Says About AI and the Future of Education
By Richard Sebaggala
Conversations about AI have become increasingly divided. Some see it as a breakthrough that will transform every sector, education included. Others still treat it as overblown or irrelevant to their day-to-day work. Most people are simply exhausted by the constant updates, ethical dilemmas, and uncertainty. This split has left many universities stuck, circling around the topic without moving forward in any meaningful way.
A recent WhatsApp exchange I saw was both humorous and unsettling: "Artificial intelligence cannot take your job if your job has never needed intelligence." The reply was, "I don't understand..." and the answer came back, "You're safe!" The joke's quiet truth is that if your work relies on knowledge, judgment, and problem-solving, then AI is already capable of doing parts of it. And the parts it replaces may be the very ones that once gave your job value.
For many of us, including lecturers, researchers, and analysts, our core productivity has come from how efficiently we produce or communicate knowledge. But AI is changing the way that knowledge is generated and shared. Tasks like reviewing literature, coding data, summarizing papers, and grading assignments are no longer things only humans can do. Tools like Elicit, Avidnote, and GPT-based platforms now handle many of these tasks faster, and in some cases, better.
Some universities are already moving ahead. Arizona State University has partnered with OpenAI to embed ChatGPT into coursework, research, and even administrative work. The University of Helsinki’s "Elements of AI" course has attracted learners from around the world and built a new foundation for digital literacy. These aren't theoretical exercises; they're practical steps that show what's possible when institutions stop hesitating.
I’ve seen individual lecturers using ChatGPT and Avidnote to draft student feedback, which frees up time for more direct engagement. Others are introducing AI tools like Perplexity and Avidnote to help students refine their research questions and build better arguments. These are not just efficiency hacks; they’re shifts in how academic work is done.
Yet many universities remain stuck in observation mode. Meanwhile, the labour market is already changing. Companies like Klarna and IBM have openly said that AI is helping them reduce staffing costs. When AI can write reports, summarise meetings, or process data in seconds, the demand for certain types of graduate jobs will shrink. If universities fail to update what they offer, the value of a degree may start to fall. We're already seeing signs of a skills revaluation in the market.
This shift isn’t without complications. AI also brings new problems that institutions can’t ignore. Equity is one of them. Access to reliable AI tools and internet connections is far from universal. If only well-funded institutions can afford high-quality access and training, the digital divide will only widen. Universities need to think about how they support all learners, not just the privileged few.
There’s also the question of academic integrity. If students can complete assignments using generative AI, then we need to rethink how we assess learning. What kinds of skills are we really measuring? It’s time to move away from assignments that test simple recall and toward those that build judgment, ethical reasoning, and the ability to engage with complexity.
Data privacy matters too. Many AI platforms store and learn from user input. That means student data could be exposed if universities aren’t careful. Before rolling out AI tools at scale, institutions need clear, transparent policies for how data is collected, stored, and protected.
And then there’s bias. AI tools reflect the data they’re trained on, and that data often carries hidden assumptions. Without proper understanding, students may mistake bias for truth. Educators have a role to play in teaching not just how to use these tools, but how to question them.
These are serious concerns, but they are not reasons to stall. They are reasons to move forward thoughtfully. Just as we had to learn how to teach with the internet and digital platforms, we now need to learn how to teach with AI. Delaying action only increases the cost of catching up later.
What matters most now is how we prepare students for the labour market they’re entering. The safest jobs will be those that rely on adaptability, creativity, and ethical thinking: traits that are harder to automate. Routine tasks will become commodities. What will set graduates apart is their ability to ask good questions, work across disciplines, and collaborate effectively with technology.
These changes are no longer hypothetical. They’re happening. Institutions that embrace this moment will continue to be relevant. Those that don’t may struggle to recover their footing when the changes become impossible to ignore.
Universities must lead, not lag. The time for think pieces and committee formation has passed. We need curriculum updates, collaborative investment in training, and national plans that ensure no institution is left behind. The early adopters will shape the new rules. Everyone else will follow or be left out.
That WhatsApp joke made us laugh, but its warning was real. AI is changing how the world defines intelligence and value. If education wants to stay meaningful, it has to change with it. We cannot afford to wait.