Monday, 31 March 2025

 Rethinking Intelligence: The Economics of Thinking in the Age of AI

By Richard Sebaggala

Humans today are much more intelligent than artificial intelligence and more intelligent than AI will ever be, says Selmer Bringsjord, Professor of Logic and Cognitive Science. For him, intelligence isn’t just about solving problems or producing impressive outputs—it’s about consciousness, moral reasoning, creativity, and lived experience. In his view, machines may simulate aspects of intelligence, but they will never truly be intelligent in the deep, human sense.

Yet despite such caution, we often hear people say things like, “AI is now more intelligent than most humans.” At first, it sounds like praise for machines. But listen closely, and you’ll hear something else—it’s a reflection of how narrowly we’ve come to define intelligence itself. Statements like these often reduce intelligence to speed, accuracy, or output, ignoring the deeper qualities that thinkers like Bringsjord insist are uniquely human.

This comparison reveals a deeper question we rarely ask: What exactly is intelligence, and what happens when it no longer looks like us? As AI tools like ChatGPT rise in capability and confidence, we are not simply witnessing smarter software—we are witnessing a moment that challenges centuries of assumptions about what thinking is, what it’s for, and who or what can do it.

For a long time, we’ve assumed that intelligence meant human-style thinking—reasoning, remembering, and reflecting. Our brains became the gold standard. But with the rise of generative AI (GenAI) systems like ChatGPT, we’re seeing a different kind of intelligence emerge—one that doesn’t rely on memory, self-awareness, or consciousness. And yet, it can write, solve problems, adapt to context, and generate ideas in ways that surprise even the experts. This isn’t just a technical shift—it’s a wake-up call. We’re not watching machines become better versions of us. We’re witnessing a new form of intelligence altogether. And that changes how we think about thinking itself.

In the past, intelligence was thought to reside inside our heads. We stored facts, recalled memories, and solved problems in a step-by-step fashion. But GenAI systems don’t work that way. They don’t remember in the traditional sense or reflect in the way we do. Instead, they operate through patterns, probabilities, and predictions. They don’t need to “understand” in order to generate meaning. That’s a profound shift. It suggests that intelligence may not need to feel, recall, or even “know” anything in the way we do. It can arise from systems that don’t resemble us at all. What we’re seeing isn’t a better version of human cognition—it’s a fundamentally different one.

As GenAI evolves, we’re also changing—especially in how we process and relate to information. More and more, we rely on machines to remember, to summarize, even to decide for us. Our minds are adapting to search, access, and prompt rather than store, reflect, and analyze. Researchers have described this shift as a kind of “digital amnesia,” where we gradually retain less internally because we’ve outsourced our memory to devices. This is convenient, but it comes with a cognitive cost. We are training ourselves to consume answers instead of engage in inquiry. It’s as though we are outsourcing not just tasks, but thought itself.

This transformation has economic consequences, too. Human thinking has long been a scarce and valuable resource. It powered innovation, leadership, and learning. But GenAI has changed the equation. When machines can generate intelligent output at scale, the marginal cost of “thinking” drops close to zero. As this abundance increases, the value of raw information declines. What becomes scarce—and therefore valuable—is something else entirely: judgment, discernment, and the ability to synthesize meaning. In this new landscape, attention, wisdom, and ethical reasoning may become the new forms of cognitive capital.

There’s also a growing risk that as GenAI becomes more powerful, we become less engaged. We may end up as cognitive consumers—using tools we barely understand—rather than cognitive producers who question, interpret, and create. This echoes an ancient warning from Socrates, who feared that the invention of writing would weaken human memory. He worried that people would appear wise without truly understanding. In a similar way, we may become information-rich but thinking-poor—surface-level knowledgeable but hollow in our understanding.

This brings us back to the heart of the matter. Maybe the question isn’t, “Is AI more intelligent than humans?” but rather, “What kind of intelligence do we want to preserve, and what kind are we comfortable outsourcing?” The rise of AI forces us to revisit the meaning of intelligence itself—not as a competition between humans and machines, but as a moment of reflection about how thinking is changing, and what we want to hold on to.

We’re not losing intelligence. We’re redefining it. And if we pay attention, we might realize that AI is not just a tool—it’s a mirror. It reflects back our assumptions, our habits, and our blind spots. It invites us to think differently about thinking, and to decide—individually and collectively—what we value in a world where intelligence is no longer uniquely human.

Our greatest challenge in this new age may not be to catch up with AI, but to remain deeply human—to think, to wonder, to discern, and to shape the kind of intelligence we want in the world.

Wednesday, 26 March 2025

AI and the Future of Work: Closing the Productivity Gap or Undermining Human Skills?

By Richard Sebaggala



The rise of artificial intelligence has sparked intense discussions, with reactions ranging from excitement about its potential to concerns about its implications. Some view it as a powerful tool that enhances human capabilities, while others fear it may eventually replace human intelligence. One reader of my blog found it fascinating that someone would praise machines over humans, prompting me to reflect on why some perceive discussions about AI’s transformative power as a challenge to human intelligence. This made me realize a crucial distinction—machines do not need what they do; humans do. AI is not a sentient force striving for progress but an extension of human ingenuity designed to serve specific needs. Its impact, however, depends on how it is used. For those with strong foundational skills, AI serves as a powerful enabler, enhancing efficiency and productivity. But for those who cannot discern, evaluate, and refine its output, it can become a crutch that weakens critical thinking rather than strengthening it.

One of the fundamental mistakes in discussions about AI is the tendency to attribute human-like motivations to machines. AI does not think, desire, or act independently. It does not need to analyze data, generate text, or create art. It simply performs these tasks based on training data and algorithms built by humans. Those who fear that AI is replacing human intelligence miss an essential truth: AI is only as powerful as the humans using it. The real challenge is not whether AI will surpass human intelligence but whether humans will learn to use it effectively. Instead of framing AI as a competitor, it should be seen as an accelerator of human potential, much like past technological advancements that initially met resistance before becoming indispensable.

Yet, as with any tool, AI’s impact depends on how it is used. For those who lack critical skills, AI can create the illusion of competence without actually improving ability. This is particularly evident in areas like writing, where AI-generated content can be impressively structured and coherent. However, without the ability to critically evaluate and refine what AI produces, users risk intellectual stagnation. Over-reliance on AI, much like excessive dependence on GPS, can erode a sense of direction and weaken essential cognitive skills over time. AI should not become a crutch for those who struggle with foundational skills but a tool that enhances and builds upon existing strengths.

The numbers are staggering. Productivity at work faces unprecedented challenges, with managers losing over 683 hours yearly to distractions, according to research by Economist Impact. The average knowledge worker now spends nearly 30% of their workday just managing emails, leading to a productivity crisis that demands innovative solutions. AI, when used strategically, offers one such solution. Research from MIT Sloan reveals that AI can increase a highly skilled worker’s productivity by nearly 40%. These statistics underscore the reality that AI is not a replacement but a tool that amplifies human efficiency—provided it is leveraged correctly.

This distinction is especially crucial when considering how AI impacts different groups of people. While those with weak skills might become overly dependent on AI, those who are already proficient in their fields find that AI significantly enhances efficiency and productivity. Skilled writers, researchers, analysts, and professionals use AI not as a substitute for their abilities but as a way to streamline their work, refine ideas, and push creative boundaries. A researcher, for instance, no longer has to spend hours manually compiling data; AI can automate that process, allowing more time for deeper analysis and interpretation. A lawyer can use AI to process vast amounts of legal text efficiently, but the application of legal reasoning and judgment remains distinctly human. AI enhances expertise rather than replaces it, enabling professionals to focus on high-value tasks rather than repetitive processes.

At the same time, shifts in hiring trends reflect AI’s role in the workplace. Experts from Harvard Business Review indicate that AI will likely reduce the demand for entry-level hires, particularly in jobs with significant learning curves. As AI automates routine tasks, employers may opt for more seasoned workers, whose expertise can be augmented rather than replaced by AI. This shift suggests a growing gap in the workforce—those who already possess strong foundational skills will thrive, while those who rely solely on AI to bridge their skill gaps may find themselves left behind. Companies, therefore, must rethink their retention strategies, ensuring they invest in skilled professionals who can maximize AI’s potential rather than succumb to its limitations.

The challenge, then, is not whether AI is inherently good or bad but how it is integrated into daily life and work. A calculator does not eliminate the need for mathematical understanding; it complements it. Google does not make research obsolete; it enhances access to knowledge. Likewise, AI does not remove the need for critical thinking, creativity, or discernment—it strengthens these skills when used wisely. The most effective approach is to strike a balance between leveraging AI’s capabilities and maintaining human judgment. Rather than fearing displacement, we should embrace augmentation. Rather than relying blindly, we should engage critically. The future belongs to those who can harness AI’s power while maintaining adaptability and discernment.

AI is not here to replace us but to amplify what we can do. However, its impact will differ depending on how individuals use it. Those who lack foundational skills should be cautious, as AI can either empower or diminish their abilities. For the skilled, AI represents an unprecedented opportunity to achieve greater efficiency, creativity, and impact. The real question is not whether AI is taking over but whether we are prepared to use it wisely.

Saturday, 22 March 2025

The age of AI in academic writing: Reducing anxiety with real human engagement

By Richard Sebaggala


In the rapidly evolving landscape of academic research, artificial intelligence tools like generative AI are revolutionizing the way we approach writing and data analysis. As a promoter and educator of the use of generative AI in research, I often encounter a range of emotions and concerns in the academic community. Many students and fellow researchers are concerned about their texts being recognized as AI-generated. They fear that this could jeopardize the authenticity of their work and compromise their academic integrity. Furthermore, some are reluctant to openly admit the use of AI tools due to the stigma attached to automated content creation. These concerns underscore the need for a balanced dialog about the ethical use of AI in academia to ensure that these powerful tools enhance, rather than detract from, the value of human insight and creativity. This article explores how we can allay these fears through genuine human engagement, emphasizing the complementary role of AI in the research process and its potential to supplement, rather than replace, the human element in academic writing.

 

The concern stems mainly from AI detection tools that label texts as AI-generated, even when AI was used only as an aid and not as a replacement for human intellect. This can be extremely frustrating for researchers who put a great deal of effort into their work, only to come under suspicion that their texts were artificially generated. Such concerns are not without merit, as many academic institutions and journals closely monitor the origins of the works they receive in order to maintain the standard of human-led research and discourse.

AI-generated texts tend to exhibit certain patterns: overly formal structures, unnatural coherence, or an excess of high-frequency technical terms. These patterns can make even authentic, human-written content appear too polished or mechanical, often triggering plagiarism and AI content detection alerts.

One practical approach to overcoming these challenges is to craft a thoughtful prompt before the AI generates any text. By guiding the AI with clear instructions to avoid overly formal and repetitive wording, to steer clear of words often labeled as “AI-like” (such as “utilize,” “foster,” and “innovate”), and to balance sentence structure for a more natural flow, you can significantly improve the quality of the generated content. This method also encourages the AI to provide concrete examples and unique insights instead of generic or formulaic explanations. The result is text that reads more like a genuine human narrative and is less likely to be flagged as AI-generated.
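To make this concrete, the drafting instructions described above can be captured once and reused. The short Python sketch below is only an illustration of that idea: the template wording, the word list, and the function name are my own assumptions, not a prescribed formula.

```python
# Minimal sketch of a reusable drafting prompt that bakes in the style
# guidance discussed above. The word list and wording are illustrative.
AVOID_WORDS = ["utilize", "foster", "innovate"]  # extend with your own "AI-like" words

def build_drafting_prompt(topic: str, audience: str = "an academic reader") -> str:
    """Assemble generation instructions that discourage AI-sounding prose."""
    return (
        f"Write a short passage on: {topic}\n"
        f"Audience: {audience}\n"
        "Style instructions:\n"
        f"- Avoid the words: {', '.join(AVOID_WORDS)}.\n"
        "- Avoid overly formal or repetitive wording; vary sentence length and structure.\n"
        "- Include at least one concrete example or unique insight, not generic explanations.\n"
    )

if __name__ == "__main__":
    print(build_drafting_prompt("reducing anxiety about AI detection in academic writing"))
```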

Another effective strategy is to rework an already written text to make it sound more human. This approach does not involve rewriting the central ideas or main arguments; it focuses on revising sentence structures, removing or replacing words that AI tools tend to overuse, and introducing specific examples where possible. It also includes checking for redundant or filler words that tend to trip AI detectors. While retaining the original meaning of the text, this method reshapes the language into something more dynamic and conversational, ensuring that each paragraph flows naturally and remains free of the hallmarks of automated writing.
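For the revision pass, even a trivial script can surface the give-away vocabulary before you rewrite by hand. The sketch below is a rough illustration, assuming a small, hand-picked word list (mine mixes words mentioned in this post with a few common fillers); it only counts occurrences and leaves the actual rewriting to the author.

```python
import re
from collections import Counter

# Illustrative list: words this blog flags as overused in AI output, plus common fillers.
OVERUSED = {
    "utilize", "foster", "innovate", "optimize", "paradigm",
    "moreover", "furthermore", "additionally",
}

def flag_overused(text: str) -> Counter:
    """Count how often the flagged words appear in a draft."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(w for w in words if w in OVERUSED)

draft = "Moreover, we utilize this paradigm to foster engagement. Moreover, it works."
for word, count in flag_overused(draft).most_common():
    print(f"{word}: {count}")
```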

 

The key is not to avoid using AI tools, but to adapt the way we use them in our writing. Start with a clear thesis and express it in a personal tone that conveys your individual point of view. Not only does this make the content relatable, it also roots the writing in personal experiences that AI cannot mimic. Use detailed case studies, examples, or stories that relate directly to your research question. Opting for less common examples reduces the likelihood of your content being mistaken for AI-generated material. Avoid complex jargon and frequently flagged AI words such as “optimize,” “exploit,” or “paradigm.” Simple, clear language not only improves readability but also engages readers better.

 

Try to present fresh ideas and thorough analyses that are deeply connected to the topic. AI has a hard time generating new insights grounded in original research or unique data interpretations. Break away from the typical rigid AI structures. Organize your paper in a way that best showcases your arguments, edit it rigorously to eliminate repetitive phrasing, and ensure that each sentence contributes meaningfully to your narrative. End your paper with a compelling personal insight or call to action that reflects your own conclusions and vision for the future. This puts a personal stamp on your work and anchors it firmly in human experience.

 

The goal is to use AI as a tool to enhance human capabilities, not as a replacement. By carefully integrating AI tools and maintaining human oversight, we can reduce anxiety around AI and preserve the authenticity and integrity of academic work. As we move through the new digital age, it is critical to create an environment in which AI supports intellectual development and productivity without replacing the essential human element in academic research and writing. By recognizing and addressing the anxieties associated with AI-generated content and implementing sound human-centric strategies, researchers can take full advantage of AI while ensuring that their work remains truly human.

Monday, 17 March 2025

Less Is More: Avoiding AI Overload and Making Smarter Choices

By Richard Sebaggala


The flood of AI tools coming onto the market every day is overwhelming. Every hour, a new tool is announced that promises to be better, faster and smarter than the last. Even for existing tools, there are constant updates, new integrations and add-ons. The result is a choice overload, leaving many unsure of which tools to use or constantly switching between them to try and keep up. But the real problem isn't finding the best AI tool — it’s understanding our cognitive limitations when dealing with an excess of information.

Alvin Toffler popularized the term “information overload” in Future Shock (1970), describing how the massive flow of data in modern societies can overwhelm individuals and institutions. Economic models now attempt to explain how decision-makers process large amounts of data despite limited cognitive resources. Milord and Perry (1977) define information overload as a situation in which “the amount of input to a system exceeds the processing capacity of that system.” The human brain is simply not designed to absorb, analyze, and process an infinite stream of information. When confronted with too much information at once, we are forced to prioritize — attending to some pieces of information and ignoring others, often resulting in inefficient decisions.

Research in cognitive psychology has long highlighted these limitations. Miller (1956) proposed the “magical number seven, plus or minus two,” which states that working memory can retain only about seven separate pieces of information at a time. This limitation explains why, in a famous experiment by Kaufman et al. (1949), subjects could accurately count up to five or six dots flashed on a screen but had difficulty when the number exceeded seven. Their cognitive process shifted from precise recognition (“subitizing”) to rough estimation, illustrating the limits of our ability to process multiple pieces of information simultaneously.

This cognitive bottleneck matters greatly for the AI landscape. Every new tool, feature, or update competes for our attention, and decision-makers, whether researchers, professionals, or everyday users, must constantly filter, evaluate, and select the most relevant information. However, because our processing capacity is limited, the constant introduction of new tools often leads to hesitation, doubt, and even paralysis. As Klingberg (2000) found, our performance deteriorates rather than improves when we try to process multiple tasks or inputs at once.

 

Back when I was doing my master’s, my econometrics professor had a simple but powerful lesson about learning and decision-making. He told us, “You don’t have to drive all the cars to be a good driver.” At that time, the market was flooded with statistical software — SPSS, EViews, Stata, R, and more. I chose Stata and considered it the Benz of statistical analysis. Over the years, I stuck with it, mastered its functions, and honed my skills. I have never regretted that decision. The same principle applies to AI. You don’t have to try every tool to be effective. You just need to choose the right ones for your needs and focus on mastering them.

Instead of chasing the latest AI trends, it's smarter to first ask yourself what challenges you want to solve. Are there tasks in your work or personal life that feel slow and repetitive? Do you spend too much time researching, writing, summarizing, or organizing information? The right AI tool should help you streamline these tasks instead of making your workflow even more complex. In my experience, two tools are enough to handle most research-related tasks: Avidnote and ChatGPT. Avidnote is designed for researchers and makes it easy to work on a wide range of research tasks, thanks to its expansive set of AI templates built to streamline the research process. ChatGPT, on the other hand, is a powerful, creative research assistant. If you use it with well-structured prompts, it can outperform many specialized tools on the market. The key to unlocking its full potential lies in developing strong prompt engineering skills. With the right prompts, ChatGPT can generate ideas, summarize content, improve writing, and help with almost any research task.

One of the biggest mistakes people make is constantly jumping from one AI tool to another without fully exploring what a single tool can do. The reality is that most AI tools offer overlapping functionality. The difference between one tool and another is often minor, and the time spent learning a new tool could be better spent deepening your command of an existing one. Avoiding AI overload, therefore, is not about finding the perfect tool but about settling on a few good tools and mastering them.

 

AI marketing thrives on hype. Every new tool claims to be revolutionary, but does it really change the way you work? Before you add a new AI tool to your workflow, ask yourself if it really solves a problem or is just another distraction. The best way to filter through the noise is to focus on the practical benefits. If a tool doesn’t significantly improve efficiency or add value to what you already have, it’s not worth the time.

The approach to AI should be minimalist. Less is more. A well-chosen and well-mastered toolset will always be superior to an ever-growing collection of half-explored tools. Just as I didn't have to learn every statistical package to master data analysis, you don’t have to use every AI tool to benefit from AI. Focus on what will help you work smarter, invest time to understand it thoroughly, and resist the temptation to constantly switch tools. The real power of AI lies not in its novelty, but in how effectively you use it.

Thursday, 6 March 2025

 Teaching AI to Write Like Us: The Art of Refining AI-Generated Text 

By Richard Sebaggala

 

 

While traveling on a bus, I had a rare moment of uninterrupted reading. I found myself drawn to an article discussing a fascinating study on the differences between AI-generated and human-written text. Conducted by researchers at Carnegie Mellon University, the study attempted to pinpoint stylistic differences in how large language models (LLMs) like ChatGPT and Llama construct sentences compared to human writers. What made it particularly compelling was its relevance to an ongoing shift in the conversation about generative AI. Not long ago, discussions about AI in writing were dominated by concerns over misconceptions, academic plagiarism, and ethical dilemmas. However, the debate has evolved. Today, a pressing question is: how much content is AI-generated, and can we truly differentiate it from human writing?

 

The findings were fascinating. The study revealed that AI-generated writing tends to rely more on structured, information-dense phrasing, with certain quirks that set it apart from human expression. For example, LLMs often overuse present participle clauses (e.g., "Bryan, leaning on his agility, dances around the ring."), favor abstract nominalizations (e.g., "implementation of the policy" instead of "the policy was implemented"), and show strong preferences for specific vocabulary choices—words like “camaraderie,” “tapestry,” and “palpable” appeared far more frequently than in human writing.

 

While these insights are valuable, they raise an important question: Are these differences a limitation of AI, or simply a result of how we use it? A closer look at the study's methodology suggests that some of its conclusions may be shaped by underlying assumptions rather than definitive proof that AI cannot adapt to human writing styles.

At first glance, the study's results seem to confirm a widespread belief that AI-generated writing lacks the nuance and adaptability of human expression. However, a critical examination reveals several methodological flaws that may have influenced its conclusions:

1. A Narrow View of Writing Contexts

The study evaluated AI writing using specific genres—TV scripts and academic writing—but did not consider other common forms of human writing, such as personal essays, journalism, or business communication.

Human writing is highly diverse, shaped by context, purpose, and audience. Did the researchers test AI’s adaptability across a broader spectrum of writing styles?

2. The Lack of Iterative Refinement

One key difference between human and AI writing is the revision process. Writers rarely produce polished work in a single draft; they refine, edit, and restructure.

The study did not test whether AI writing could become more human-like through iterative refinements, where users provide feedback and make incremental improvements.

3. Prompting Matters More Than We Think

AI models do not generate text in isolation; they respond to prompts. Poorly designed prompts often lead to unnatural or rigid outputs.

The study does not clarify whether the AI was prompted in a way that encouraged contextual adaptability. Could different prompt engineering techniques yield more human-like writing?

4. Comparing AI to Expert Writers vs. the Average Writer

If the AI-generated text were compared only to professional authors or seasoned academics, it would naturally fall short in style and depth.

A fairer comparison would involve a broader range of human writing abilities, including that of everyday writers, students, and non-expert authors.

These gaps suggest that while the study highlights genuine AI tendencies, its conclusions may not fully reflect AI’s potential when used strategically.

Instead of dismissing AI-generated writing as inherently flawed, I decided to test whether ChatGPT could refine its own writing to align more closely with human standards. I asked ChatGPT o1, an advanced reasoning model, to generate a structured prompt for improving AI-assisted writing based on the concerns raised in the article. The results were impressive:

"I have an AI-generated article that I want to refine to make it more natural and closer to human writing. Please follow these steps to adjust the style:

1. Adapt Sentence Structure

· Avoid overuse of present participle clauses and vary sentence beginnings.

· Mix short and long sentences to create a natural rhythm.

2. Balance Voice and Tone

· Reduce excessive nominalizations (e.g., replace "implementation of the policy" with "the policy was implemented").

· Adjust passive voice usage where necessary to improve clarity.

· Adapt the tone to match the context—formal for academic writing, conversational for blog posts.

3. Diversify Vocabulary

· Replace overly sophisticated or AI-favored words (e.g., "camaraderie," "tapestry," "palpable") with more natural alternatives.

· Use synonyms and rephrase sentences to avoid unnecessary repetition.

4. Enhance Natural Flow

· Use smooth transitions to guide the reader logically through ideas.

· Ensure paragraphs connect seamlessly rather than feeling like separate, disjointed sections.

5. Simulate Human Revision

· Edit the text progressively, focusing on clarity, conciseness, and logical progression.

· Introduce slight variations in sentence structures, as humans naturally do when revising their work."

Applying this prompt transformed the AI-generated text significantly. Instead of rigid, overly structured sentences, the writing became more dynamic and fluid. Ideas flowed naturally, and the vocabulary felt more organic. More importantly, the revision process mirrored how human writers refine their work—by iterating, editing, and fine-tuning for clarity.
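For readers who apply a refinement prompt like this one routinely, the step can also be scripted. The sketch below is a minimal example assuming the OpenAI Python client (openai>=1.0); the model name is a placeholder, and REFINEMENT_PROMPT stands for the five-step prompt quoted above rather than anything the study or ChatGPT prescribes.

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY to be set in the environment

# Placeholder: paste the full five-step refinement prompt quoted above here.
REFINEMENT_PROMPT = "I have an AI-generated article that I want to refine ..."

def refine(draft: str, model: str = "gpt-4o") -> str:
    """Send a draft through the refinement prompt and return the revised text."""
    response = client.chat.completions.create(
        model=model,  # placeholder; use whichever chat model you have access to
        messages=[
            {"role": "system", "content": REFINEMENT_PROMPT},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content

# Example usage (hypothetical file name):
# revised = refine(open("draft.txt", encoding="utf-8").read())
```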

 

This experiment revealed a key takeaway: AI is not inherently incapable of writing like humans—it simply needs structured guidance and refinement. Instead of seeing AI as a threat to human writing, we should embrace it as a collaborative tool. Whether in academia, professional writing, or creative storytelling, AI can enhance writing quality when used thoughtfully. However, it should not be treated as a shortcut for bypassing the critical thinking, creativity, and revision processes that make writing truly human.

 

If you use AI in your writing, try refining its output using structured prompts like the one outlined here. The best writing—whether human or AI-assisted—is always a result of careful revision. AI may not naturally write like us yet, but with the right approach, we can guide it to produce work that is technically correct, genuinely engaging, nuanced, and human-like.

Tuesday, 4 March 2025

 The Secret to AI Proficiency: Why Repetition Is the Key to Mastery

By Richard Sebaggala



Jensen Huang, CEO of NVIDIA, once said: "You are not going to lose your job to AI, but rather to someone who uses it." This statement gets to the heart of what separates those who succeed in the age of AI from those who fall behind: a willingness to learn, adapt, and practice.

Since I started delivering training on the use of generative AI in research, I’ve met countless people who want to realize the full potential of AI. Many are looking for shortcuts — tools that promise immediate productivity gains with minimal effort. What is often overlooked, however, is the dedication and time required to achieve true proficiency. My journey with generative AI has been anything but instant. It has been a process of structured learning and deliberate repetition, sometimes requiring hours of practice to master just one tool.

 

The remarkable results I have achieved are not due to my talent alone. They came about because I understood and embraced the value of repetition — what Stan Goldberg aptly calls "neurological secret sauce" in his studies on memory and learning. Repetition may sound mundane, but it's the cornerstone of mastery in any field, whether it’s delivering a flawless speech, performing a symphony or, as in my case, refining workflows with generative AI.

Goldberg explains that the difference between beginners and experts lies in the strength of their "memory layers". When you start a new task, your first actions will feel clumsy and awkward. Think back to the first time you typed on a keyboard or learned to drive a car. But over time, your neural connections will strengthen, and the process will become second nature.

 

My first interactions with AI tools like ChatGPT were far from seamless. I often questioned my prompts, overlooked important features and misinterpreted the results. But through repetition, I was able to create a mental "blueprint" for using these tools. Today, writing structured prompts, refining results based on feedback, and integrating AI-generated content into research workflows feels as natural as typing.

Each iteration has strengthened my skills. Like a carpenter who masters his tools after thousands of uses, I have honed my expertise step by step. However, repetition alone is not enough. As Goldberg points out, training must be precise. Michael Phelps’ coach famously advised him not to practice strokes that deviated from perfection. And why? Practicing mistakes only reinforces bad habits.

 

This principle also applies to AI. In my practice sessions, I was not content to run queries at random. I critically examined the results, identified gaps, and refined my techniques. For example, when I was practicing with literature review tools such as Avidnote or Elicit for systematic reviews, I compared the AI-generated summaries with those written by humans to identify weaknesses and adjust my prompts. This focus on "perfect practice" helped me internalize the subtleties of using AI effectively, which led to consistently better results.

 

Malcolm Gladwell popularized the idea of the "10,000-hour rule" in his book Outliers, emphasizing that mastery requires an extraordinary amount of deliberate practice. Even though AI tools are designed to simplify tasks, maximizing their potential requires a similar commitment to practice. True mastery — the kind that allows you not only to use AI, but to realize its full potential — rests on time, effort, and intention.

 

In my experience, the key to unlocking the transformative potential of AI is to regularly invest hours in learning and refining AI workflows. Every hour you spend practicing structured prompts, analyzing feedback, and applying the tools to real-world problems adds to the cumulative expertise that separates a proficient user from a casual user. Just as musicians, athletes and writers achieve greatness through thousands of hours of practice, AI users must approach their craft with the same dedication.

 

Even mastery requires nurturing. Without reinforcement, skills diminish. Jascha Heifetz, the legendary violinist, practiced scales daily despite his world-renowned talent. Similarly, I regularly engage with AI tools to ensure my knowledge remains up to date.

 

When tools like ChatGPT, Gemini, and Avidnote release updates, I make it a priority to practice with their new features. I subscribe to several AI platforms to keep up with new developments in the industry. This continuous engagement ensures that I stay current and adapt to advances rather than becoming complacent.

One of the most valuable lessons from Goldberg is to "practice slowly". In the early days of learning AI tools, I deliberately slowed down the pace to focus on understanding each step. By resisting the urge to rush, I built a solid foundation that eventually allowed me to work faster and more efficiently.

For example, before I ventured into complex tasks such as automating literature reviews, I started with the simple creation of notes. This deliberate progression gave me the confidence and competence to tackle larger, more complicated projects with fewer mistakes.

A common mistake many learners make is practicing in an isolated environment. Goldberg’s analogy of delivering a speech in a quiet room — only to run into trouble in a noisy room — is very apt. Mastery requires the application of skills in dynamic, real-world environments. In the early days of learning AI, I initially practiced in a controlled environment and experimented with tools in private. But to really hone my skills, I decided to apply them in live workshops, group discussions and collaborative research projects. These real-world applications tested my adaptability and boosted my confidence so that my skills were not just theoretical, but practical.

Repetition is not the enemy of creativity, it is its foundation. Without structured practice, even the most innovative ideas fail to be realized. Just as athletes and artists rely on repetition to excel, mastering generative AI requires deliberate, repetitive practice. For those looking for shortcuts, this is a reminder: mastery isn’t about finding the perfect tool, it’s about cultivating the right habits. Take the time to practice, refine your techniques and make repetition your own. The results will speak for themselves.

As Stan Goldberg eloquently puts it, repetition reinforces the "blueprints" that guide our actions. Mastery is not a destination, but a continuous journey shaped by the time and effort we invest every day.

In the words of Jensen Huang, your job is not in danger because of AI; it is in danger if you do not learn how to use it. Embrace the process, and let repetition and a commitment to the 10,000-hour rule be your best allies in mastering the tools that will shape the future.