Human superpowers – Creative, Analytical and Critical thinking

Are you sure Gen AI doesn’t make mistakes Mr Spock? Because this just “feels” wrong to me.

Back in July 2022, I wrote about the importance of critical thinking, a skill long considered essential in education, leadership, and the workplace.

But that was before Gen AI arrived in November, bringing with it the ability to answer almost any question within seconds. Its presence prompted reflection on the nature of learning, how education might change and what role humans should now play, if any.

If you don’t have time to read this month’s blog – listen to my AI alter ego summarise the key points.

But all is not lost: we still have one last card to play, our ability to think and feel. Okay, maybe that’s two cards. Thinking is hopefully what you are doing whilst reading this blog; neurons will be firing as you reflect, analyse and question what is being said. It’s something we do in between daydreaming, sleeping and unconscious behaviours such as cleaning our teeth.

Thinking is, however, a little more nuanced, and there are many different types: you can think creatively, analytically, or critically. Whichever mode you engage in, there’s another essential human attribute that quietly shapes the process: our emotions. These are the subjective experiences, rooted in our limbic system, that help us interpret information and, in doing so, see the world. Together these are our superpowers, offering something AI can’t replicate, not yet at least!

An Artist, Pathologist and Judge walk into a bar
Critical thinking, creative thinking, and analytical thinking are often grouped under the umbrella of “higher-order cognitive skills,” but each one is different, playing a role in how we process, evaluate, and generate ideas.

  • Critical thinking is fundamentally about evaluation: it involves questioning assumptions, weighing evidence, and forming reasoned judgments. It’s the internal referee that asks, “Does this make sense? Is it credible? What are the implications?”
  • Meanwhile, analytical thinking breaks complexity down into manageable components, identifies patterns, and applies logic so that we can better understand relationships.
  • And creative thinking is generative. It thrives on ambiguity, imagination, and novelty. Where critical thinking narrows and refines, creative thinking expands and explores. It’s the spark that asks, “What if? Why not? Could we do this differently?”

Humans are emotional – Far from being a distraction, emotions actively shape how we think, judge, and create. In creative thinking, emotion is the spark that fuels imagination and unlocks divergent ideas. In analytical thinking, emotion plays a subtler role, influencing how we interpret data, what patterns we notice, and our levels of motivation. Critical thinking, meanwhile, relies on emotion to provide an ethical compass and improve our self-awareness.

Learning to be a better thinker
Critical, creative, and analytical thinking aren’t fixed traits; they’re learnable skills. It’s tempting to believe they can only be acquired through the slow drip of wisdom from those with a lifetime of experience. The truth is, with good instruction, these skills can be learned well enough for any novice to get started. At first, the beginner may simply replicate what they have been taught, but with practice and reflection, they begin to refine, adapt, and eventually think for themselves.

By way of an example, this is how you might start to learn to think more critically.

  1. Start with knowledge – Critical thinking is the analysis of available facts, evidence, observations, and arguments to form a judgement.
  2. Use a framework
    • Formulate the question – what problem(s) are you trying to solve?
    • Gather information – what do you need to know more about?
    • Analyse and evaluate – ask challenging questions, consider implications, and prioritise.
    • Reach a conclusion – form an opinion, and reflect.
  3. Bring in Tools – These can provide ideas or change perspective, for example Edward de Bono’s six thinking hats.
  4. Apply by practising with real-world problems. This is largely experiential and requires continual reflection, looping back to check you have asked the right question, gathered enough information, and correctly prioritised.

The real challenge and deeper learning take place in the application phase. By working in groups, your arguments may well be questioned and potentially exposed by Socratic-style questions and differing views. Your only defence is to think in advance about what others might say. Over time, like any other skill, it can begin to feel more like an instinct, requiring less conscious effort and simply popping into your mind when most needed.

To boldly go
Generative AI may offer logic, precision, and even flashes of creativity, but it does not feel the weight of a decision, nor wrestle with the moral ambiguity that defines human experience. It is Spock without Kirk: brilliant, efficient, and deeply insightful, yet missing the emotional compass that gives judgment its humanity. True thinking is not just analysis; it’s empathy, intuition, and the courage to act without certainty. AI can advise, assist, and illuminate, but it cannot replace the uniquely human interplay of reason and emotion. Like Kirk and Spock, the future belongs not to one or the other, but to the partnership. Or at least I hope so…

I will leave the last word to Dr McCoy.

The virtual educator has arrived!

But which one is me?

Inspired by a recent LinkedIn post I made about what it might be like to have an avatar as a teacher, I thought I should check the evidence on whether avatars actually improve learning before getting too carried away with the technology itself.

What is an avatar?
An avatar is a digital or computer-generated representation of a person or character in a virtual environment. It can take various forms, for example a simple profile picture on social media or an audio avatar talking about a specific subject using a synthetic voice. However, with major advancements in generative AI, avatars are evolving beyond static images or basic voice interactions. We are increasingly seeing lifelike digital humans emerge, sophisticated AI-driven avatars capable of “understanding” what we say and generating intelligent responses, speaking with realistic voices and impressive synchronised lip movements. This transformation is redefining how humans engage with AI-powered virtual beings, blurring the lines between digital representation and authentic interaction.

As to what they look like, here are some examples:

  • Firstly, an audio avatar that I have now built into my blog to provide a different perspective on what has been written. Here the avatar “chats” about the blog rather than simply reading it out loud. See above.
  • Secondly a Pixar style avatar. The goal here is to challenge the assumption that an avatar must resemble a real person to be effective.
  • And lastly, this is a more realistic avatar. Effectively an attempt to replicate me, in a slightly imperfect way. This is not about fooling the audience, although this is now possible, but to explore the idea that humans respond better to a more human like character.

The talking head – good or bad?
However, there’s an elephant in the room when it comes to avatars: why do we need a talking head in the first place? Wouldn’t a simple voice-over, paired with well-structured content, be just as effective?

If you look at YouTube, almost everyone uses talking-head videos in different ways. Surely, if they weren’t effective, no one would have them – a kind of “wisdom of crowds.” But does their popularity actually prove their value, or are we just following a trend without questioning its impact?

Let’s have a look at the evidence:
Having reviewed multiple studies, I found the evidence somewhat mixed. However, there’s enough insight to help us find an approach that works.

First, we have research from Christina Sondermann and Martin Merkt – Like it or learn from it: Effects of talking heads in educational videos. They concluded that learning outcomes were worse for videos with talking heads; their concern was that the talking head resulted in higher levels of cognitive load. Yet participants rated their perceived learning higher for videos with a talking head, gave them better satisfaction ratings, and selected them more frequently. Secondly, another piece of research published five months later by Christina Sondermann and Martin Merkt – yes, the same people – What is the effect of talking heads in educational videos with different types of narrated slides. Here they found that “the inclusion of a talking head offers neither clear advantages nor disadvantages.” In effect, using a talking head had no detrimental impact, which is slightly at odds with their previous conclusion.

A little confusing, I agree, but stick with it…

Maybe we should move away from trying to prove the educational impact and instead consider students’ perception of avatars. In the first report, Student Perceptions of AI-Generated Avatars, the students said “there was little difference between having an AI presenter or a human delivering a lecture recording.” They also thought that the AI-generated avatar was an efficient vehicle for content delivery. However, they still wanted human connection in their learning, thought some parts of learning needed to be facilitated by teachers, and felt that the avatar presentations were “not … like a real class.” The second report, Impact of Using Virtual Avatars in Educational Videos on User Experience, raised two really interesting points. Students found that high-quality video enhanced their learning, emotional experience, and overall engagement. Furthermore, when avatars displayed greater expressiveness, they felt more connected to the content, leading to improved comprehension and deeper involvement.

For those designing avatars, this means prioritising both technical quality and expressive alignment. Avatars should be visually clear, well animated, and their facial expressions should reinforce the message being conveyed.

What does this all mean?
Bringing everything together, we can conclude that avatars or talking heads are not distractions that lead to cognitive overload. Instead, students appreciate them and relate to them emotionally; in fact, they see little difference between a recorded tutor and an avatar. Their expressiveness enhances engagement and might prove highly effective in helping students remember key points.

To balance differing perspectives, a practical approach might be to omit the talking head when explaining highly complex topics (reducing cognitive load), allowing students to focus solely on the material, but keep the avatar visible in most other situations, particularly for emphasising key concepts or prompting action. Alternatively, why not let students decide by offering them the choice of having the talking head or not?

How might avatars be used?
One important distinction in the use of avatars is whether they are autonomous or scripted. Autonomous avatars are powered by large language models, such as ChatGPT, allowing them to generate responses dynamically based on user interactions. In contrast, scripted avatars are entirely controlled by their creator, who directs what they say.

A scripted avatar could be particularly useful in educational settings where consistency, accuracy, and intentional messaging are crucial. Because its responses are predetermined, educators can ensure that the avatar aligns with specific learning goals, maintains an appropriate tone, and avoids misinformation.

This makes it ideal for scenarios such as:
– Delivering structured lessons with carefully crafted explanations.
– Providing standardised guidance, ensuring every student receives the same high-quality information.
– Reinforcing key concepts without deviation, which can be especially beneficial when high-stakes assessments are used, as is the case with professional exams.

However, if we power these avatars with Generative AI, the possibilities increase significantly:

  • More personalised learning. One of the most exciting prospects is the ability of avatars to offer personalised and contextualised instruction.
  • Help with effective study. Avatars could be used to remind students about a specific learning strategy or a deadline for a piece of work. A friendly face at the right time might be more effective than an email from your tutor, or worse still an automated one.
  • Motivational and engaging. These avatars could also have a positive effect on motivation and feelings about learning. They could be designed to match an individual’s personality and interests, leading to higher levels of motivation and engagement.
  • Contextualised learning. AI-based avatars can support teaching in practical, real-world scenarios, including problem solving and case-based learning. Traditionally, creating these types of environments required significant resources such as trained actors or expensively designed virtual worlds.

A few concerns – autonomous avatars
Of course, as with any new technology there are some concerns and challenges:

Autonomous avatars pose several risks, including their ability to make mistakes; the particular problem with avatars is that they will be very convincing. We are already acutely aware that large language models can sometimes ‘hallucinate’ or simply make things up. Data protection is another concern, with risks ranging from deepfake misuse to avatars persuading users to share personal or confidential information, which could then be exploited. Finally, value bias is a challenge, as AI-trained avatars may unintentionally reflect biased perspectives that a professional educator would recognise and navigate more responsibly.

Conclusions
Avatars, whether simple or lifelike, are gaining traction in education. Research indicates that while talking heads don’t necessarily improve learning outcomes, they don’t harm them, and students perceive them positively. A key distinction lies between scripted avatars, offering consistent and accurate pre-determined content, ideal for structured lessons, and autonomous avatars powered by AI that open up a world of possibility in several areas including personalisation.

Avatars are a powerful and exciting new tool, offering capabilities that in many ways go beyond previous learning technologies, but their effectiveness very much depends on how they are designed and used. But hasn’t that always been the case…

Finally – This is an excellent video that talks about some of the research I have referred to. It is of course presented by an avatar.  What Does Research Say about AI Avatars for Learning?

PS – which one is me – none of them, including the second one from the left.

Predicting Learning in 2025 +

Making predictions is of course a mug’s game. Most people start with what they currently know and project forwards, using logic to justify their conclusions. This, however, leads to our first problem – “you don’t know what you don’t know.”

Secondly, a prediction is more likely to be true if the environment is stable, and that leads to problem number two – we are living in hugely unpredictable times. Technology, in particular AI, is moving so fast it’s hard to keep up; politically there is both change and instability, making it difficult to say with any certainty what policies or regulatory requirements will come into effect; and the climate is shouting at us, although we don’t seem to be listening.

And yet it’s still worth making predictions, not so much to see if you can get it right, but to play around with what the future might bring, take advantage where you can and make changes or at least warn others if you don’t like what you see. 

So here goes – what might happen this year in the world of learning? 

1. Learning will not change: But learners will adapt to different ways of studying
While the world around us continues to change, the fundamental way we learn as humans remains largely unchanged. Despite advancements in AI, neuroscience, and educational tools, the core processes of how our brains absorb and retain information are rooted in biology and consequently, relatively stable. But learners will begin to adapt to this new world, and some will take full advantage of what these new technologies have to offer.

“There are no gains without pains” Benjamin Franklin

However, there will be those who fall into the trap of taking the easy way out and using the technology to “offload” learning, and as a consequence, learn very little. My prediction is that we will see far more people offloading learning in 2025, which is clearly a concern!

2. AI (GenAI) will continue to dominate: Bye bye, textbooks
Possibly the easiest prediction is that AI will dominate. Almost every day we are met with a new model that is easier to use, offering more effective ways of answering questions, summarising complex information and responding with high-quality opinion. By the end of 2025, AI (GenAI) will have firmly established itself as a tool for learning, offering instant access to vast amounts of information. Traditional textbooks will become increasingly less valuable as students and professionals turn to large language models for real-time answers and explanations.

But AI’s capabilities extend far beyond simply generating content. We will also see the expanded use of chatbots (study buddies) to not only answer questions but also provide coaching, motivation, and personalised feedback. The natural progression for these “study buddies” is to develop into “intelligent agents/tutors.” Agents are more autonomous: they can perceive the environment, process information, and take actions to achieve specific goals. This means they will be able to analyse individual progress, suggest next steps, adapt materials in real time, and offer tailored support.

3. Sector disruption: Content, assessment, and the thirst for data
The focus here is on educational publishers: institutions that produce textbooks, workbooks, teacher guides and even digital learning platforms. With AI capable of generating high-quality content quickly and at scale, traditional content providers will need to rethink their business models. Although there is unlikely to be a significant impact in 2025, we will begin to see changes in how some of these businesses operate. The focus will shift from storing knowledge inside “books” to recognising that the value lies in curation and in providing meaningful learning experiences: organising knowledge into effective sequences and simplifying complex topics to support deeper understanding.

Assessments are in need of significant transformation, and we will increasingly see calls for alternative methods of assessment. This is mostly driven by concerns around plagiarism, but AI brings some interesting and arguably more robust ways of testing knowledge and skills – for example, AI-driven adaptive testing, offering real-time performance analysis and personalised assessments that move beyond standardised testing. Skills will remain high on the agenda this year, with even more pressure being applied to educators to close the gap between what is taught in the classroom and what is required in the workplace. A change in assessment, perhaps using real-world simulation to assess these skills, could be part of the solution.

It is easy to get carried away with predictions and forget some of the reasons they may not come to fruition. One such barrier is that organisations will struggle to get their data in one place to provide meaningful information for these models. 2025 will see organisations spending significant amounts of both time and money cleaning and tagging data so that it is useful.

4. Regulation, the green agenda, and commercial pressures
Speaking of barriers, by the end of 2025, regulatory frameworks for AI in education will be far more developed but the landscape for adoption is likely to be patchy. Governments and educational bodies both in the UK and around the world will seek to strike a balance between innovation and ethical concerns, ensuring that AI-driven learning tools are used responsibly. The environmental impact of AI will become an increasingly critical consideration, with growing awareness of the substantial energy consumption required to train and run large AI models. These high energy costs add complexity to discussions about AI’s role in education.

Commercial interests and financial investments will heavily influence the direction of policy. Big technology firms will continue to play a major role in shaping the future of education, with their AI tools becoming more deeply embedded in learning institutions. As AI becomes an integral part of the education ecosystem, the debate will centre around who controls the technology and how it can best serve learners rather than corporate interests, all while addressing the significant environmental footprint of the AI infrastructure.

Summary
I have summarised the key points below – briefer, of course, but as a consequence less nuanced. Will they come true? Maybe, but as Yogi Berra once said, “It’s tough to make predictions, especially about the future.”

  • Prediction 1 – Learning will not change, but learners will. They will begin to adapt, changing how they study; the problem is this could lead to them not learning at all.
  • Prediction 2 – AI (GenAI) will continue to dominate. This will lead to the demise of the textbook, the development of GenAI study buddies and, in time, intelligent agents/tutors.
  • Prediction 3 – Watch out for sector disruption, especially for educational publishers. In addition, assessments are due a revamp, but data will remain king.
  • Prediction 4 – Regulation will be in conflict with innovation. There will be a growing tension between regulating AI responsibly and giving commercial organisations the space to innovate.

Transforming Learning – GenAI is two years old

ChatGPT – Happy second birthday
Generative AI (GenAI), specifically ChatGPT, exploded onto the scene in November 2022, which means it is only two years old. Initially people were slow to react, trying to figure out what this new technology was; many were confused, thinking it was a “bit like Google.” But when they saw what it could do – “generating” detailed, human-like responses to human-written “prompts” – ideas as to what it could be used for started to emerge. The uptake was extraordinary, with over 1 million people using it within the first five days; a year later this had grown to 153 million monthly users, and as of November 2024 it’s around 200 million. The use of GPTs across all platforms is difficult to estimate, but it could be something in the region of 400–500 million. That said, and to put this in perspective, Google Search handles over 8.5 billion searches every day – equivalent to the world’s population!

From Wow to adoption
Initially there was the WOW moment. True, AI had been around for a long time, but GenAI made it accessible to ordinary people. In the period from November 2022 to early 2023 we saw the early adopters, driven mostly by curiosity and a desire to experiment. By mid-2023 it became a little more mainstream as other GPTs emerged, e.g. Google’s Bard (now Gemini) and Microsoft’s Copilot, to name just two. But it was not all plain sailing: ethical concerns began to grow, and by the end of 2023 people were talking about misinformation, problems with academic integrity, and job displacement. This led to calls for greater regulation, especially in Europe, where AI governance frameworks were developed to address some of the risks.

In terms of education, there were initially calls to ban learners from using these tools, in response to answers being produced that were clearly not the work of the individual. And although many still worry, by early 2024 there was a creeping acceptance that the genie was out of the bottle and it was time for schools, colleges, and universities to redefine their policies, accept GPTs, and integrate rather than ban. 2024 saw even greater adoption; according to a recent survey, 48% of teachers are now using GenAI tools in some way.

GenAI – Educational disrupter
There have been significant changes in education over the last 50 years, e.g. the introduction of personal computers and the Internet (1980s–1990s), making content far more accessible and changing some learning practices. Then in the 2000s–2010s we saw the development of e-learning platforms such as Moodle and Blackboard, and MOOCs such as Coursera. This fuelled the growth of online education, providing learners with access to quality courses across the globe.

But I am going to argue that, as important as these developments were – not least because they are essential underpinning technologies for GenAI, and we are always “standing on the shoulders of giants” – GenAI is a far bigger educational disrupter than anything that has come before. Here are a few of the reasons:

  • Personalised learning at scale: GenAI tools make it possible for everyone to enjoy a highly personalised learning experience. For instance, AI can adapt to an individual’s learning style, pace, and level of understanding, offering custom explanations and feedback. This brings us far closer to solving Bloom’s elusive two-sigma problem.
  • Easier access to knowledge and resources: Although it could be argued the internet already offers the world’s information on a page, the nature of the interaction has improved, making it far easier to use and to hold almost human conversations. Learners can now explore topics in depth, engage in Socratic questioning, produce summaries that reduce cognitive load, and be inspired by some of the questions the AI might ask.
  • Changing the teacher’s role: Teachers and educators can use GenAI to automate administrative tasks such as marking and answering frequently asked questions. Perhaps more importantly, the traditional teacher-centred instructor role is shifting to that of a facilitator, guiding students rather than “telling” them.
  • Changing the skill set: Learners must rapidly enhance their skills in prompting, AI literacy and critical thinking, and foster a greater level of curiosity, if they are to remain desirable to employers.
  • Disrupting assessment: The use of GenAI for generating essays, reports, and answers has raised concerns about academic integrity. How can you tell if it’s the learner’s own work? As a result, educational institutions are now having to rethink assessments, moving towards more interactive, collaborative, and project-based formats.

Transforming learning
GenAI is not only disrupting the way learning is delivered; it’s also having an impact on the way we learn.

A recent study by Matthias Stadler, Maria Bannert and Michael Sailer compared the use of large language models (LLMs), such as ChatGPT, with traditional search engines (Google) in supporting problem-based exploration. They focused on how each influences cognitive load and the quality of learning outcomes. What they found was a trade-off between cognitive ease and depth of learning. LLMs are effective at reducing the barriers to information, making them useful for tasks where efficiency is a priority, but they may not be as beneficial for tasks requiring critical evaluation and complex reasoning. Traditional search engines need the learner to put in far more effort in terms of thinking, which results in a deeper and better understanding of the subject matter.

The research reveals a fascinating paradox in how learners interact with digital learning tools. When using LLMs, learners experienced a dramatically reduced cognitive burden. In other words, they had far less information to think about, making it easier to “see the wood for the trees.” This is what any good teacher does: they simplify. But because little effort was required (no desirable difficulty), learners were less engaged, and as a result there was little intellectual development.

This leads to one of the biggest concerns about Generative AI: the idea that it can be used to offload learning. The problem is, you can’t.

Conclusions
As we celebrate ChatGPT’s second birthday, it’s clear that GenAI is more than a fleeting novelty; it has already begun to disrupt the world of education and learning. Its ability to personalise learning, reduce cognitive barriers, and provide human-friendly access to resources holds immense potential to transform how we teach and learn. However, the opportunities come hand in hand with significant challenges.

The risk of over-reliance on GenAI, where learners disengage from critical thinking and problem solving, cannot be ignored. True learning requires effort, reflection, and the development of independent thought, skills that no technology can substitute.

The role of educators is crucial in ensuring that GenAI is used to complement, not replace, these processes.

You can’t outsource learning – Cognitive offloading 

As we begin to better understand the capabilities of Generative AI (Gen AI) and tools such as ChatGPT, there is also a need to consider the wider implications of this new technology. Much has been made of the more immediate impact – students using Gen AI to produce answers that are not their own – but less is known about what might happen in the longer term: the effect on learning and how our brains might change over time.

There is little doubt that Gen AI tools offer substantial benefits (see previous blogs, Let’s chat about ChatGPT and Chatting with a Chat Bot – Prompting), including access to vast amounts of knowledge, explained in an easy-to-understand manner, as well as the ability to generate original content instantly. However, there might be a significant problem with using these tools that has not yet been fully realised and that could have implications for learning and learning efficacy: what if we become too reliant on these technologies, asking them to solve problems before we even think about them ourselves? This fear found expression in debates well before Gen AI, in particular an article written by Nicholas Carr in 2008 asking “Is Google making us stupid?” – spoiler alert, the debate continues – and an interesting term coined by the neuroscientist and psychiatrist Manfred Spitzer in 2012, “digital dementia”, describing the changes in cognition that result from overusing technology.

But the focus of this blog is on cognitive offloading (circa 1995), which, as you might guess, is about allowing some of your thinking, processing and learning to be outsourced to a technology.

Cognitive offloading
Cognitive offloading in itself is neither good nor bad; it refers to the delegation of cognitive processes to external tools or devices such as calculators, the internet and, more recently of course, Gen AI. In simple terms, there is a danger that by instinctively and habitually going to Google or ChatGPT for answers, your brain misses out on an essential part of the learning process: reflecting on what you already know, pulling the information forward and thereby reinforcing that knowledge (retrieval practice), then combining it with the new information to better understand what is being said or required.

As highlighted by the examples in the paragraph above, cognitive offloading is not a new concern, and it is not specific to Gen AI. However, the level of cognitive offloading, the sophistication of the answers and the opportunities to use these technologies are all increasing, and as a result the scale and impact are greater.

Habitual dependency – one of the main concerns is that, even before the question is processed, the student instinctively plugs it into the likes of ChatGPT without any attention or thought, the prompt regurgitated from memory: “please answer this question in 100 words”. This leads to possibly the worst situation, where all thought is delegated and, worryingly, the answer is unquestioningly believed to be true.

Cognitive offloading in action – Blindly following the Sat Nav! Research has shown that offloading navigation to GPS devices impairs spatial memory.

Benefits of cognitive offloading – it’s important to add that there are benefits to cognitive offloading; for example, it reduces cognitive load, which is a significant problem in learning. The technology reduces the demand on our short-term memory, freeing the brain to focus on what is more important.

Also, some disagree as to the long-term impact, arguing that short-term evidence (see below) is not necessarily the best basis for long-term conclusions. For example, there were concerns that calculators would affect our ability to do maths in our heads, but research found little difference whether students used calculators or not, and the debate has moved on to consider how calculators could complement and reinforce mental and written methods. Others, however, believe that cognitive offloading increases immediate task performance but diminishes subsequent memory for the offloaded information.

Evidence
There is little research on the impact of Gen AI, as it is still so new, but as mentioned above we have a large amount of evidence on what has happened since the introduction of the internet and search.

  • In the paper Information Without Knowledge: The Effects of Internet Search on Learning, Matthew Fisher et al. found that participants who were allowed to search for information online were overconfident about their ability to comprehend the information, and those who used the internet were less likely to remember what they had read.
  • Dr Benjamin Storm, the lead author of Cognitive offloading: How the Internet is increasingly taking over human memory, commented: “Memory is changing. Our research shows that as we use the Internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother.”

What should you do?
To mitigate the risks of cognitive offloading, the simple answer is to limit or reduce your dependency and use Gen AI to supplement your learning rather than as a primary source. For example, ask it to come up with ideas and lists but not the final text, and spend your time linking the information together and shaping the arguments.