Back to the future – Reflections and Projections

One of the most valuable parts of learning is discovering new ideas and different ways of thinking. While some of this comes from formal teaching, we all have access to a vast library of knowledge that can help us learn some of these skills for ourselves. We just need to ask the right questions and look in the right place. Oh, and you might find it helpful to have pen and paper.

Simply reflect on a specific experience and critically examine it by asking yourself questions such as, what did I learn from this? What aspects were unclear or confusing? What approaches were effective, and which ones fell short? Reflection not only deepens understanding but will help identify ways in which you can improve.

The value of reflection is well understood and encouraged within education, and although students may not initially see its importance, they will be asked to produce reflective statements or keep a journal in an attempt to help them appreciate its worth. From a cognitive perspective, your brain isn't just opening a file; it is actually reconstructing (ironically, rather like an LLM) and rewiring the information. The process strengthens synaptic pathways and develops new associations, which in turn helps integrate different types of information into a "big picture", before the whole thing is finally resaved as a stronger, more complex version of the original idea or thought.

But enough of what it is, let me see if I can put it into practice by reflecting on 2025 and coming up with a few ideas for 2026.

Reflections on 2025
This time last year I set out some predictions as to what might happen in learning from 2025 onwards, with the caveat that making predictions is a "mug's game." Looking back, there was nothing particularly radical in what I suggested. In that sense, if I was being critical, the ideas themselves may not have helped that much. Even so, I hope that by narrowing the field of possibilities they made the future seem a little less confusing.

The 2025 predictions:

1. Learning will not change but learners will. A reference to how learners will develop different behaviours as a result of AI. This year research from MIT confirmed what many had suspected: that using AI has an impact on brain activity, causing what they called "cognitive debt", i.e. saving effort now but weakening cognitive abilities over time. This will remain a challenge in 2026 and beyond, requiring educators to get ahead of the technology rather than simply acknowledging its existence and use.

2. AI (GenAI) will continue to dominate. An easy one perhaps; of course AI was going to play a hugely important part in learning. But there was specific reference to it becoming the "go to" tool for students and the emergence of teaching chatbots. A survey by HEPI and Kortext early in 2025 found that the proportion of students using AI has jumped from 66% last year to 92% this year. That seems conclusive: AI has become an ever-present aspect of student life and one that cannot be ignored. Teacher bots have also advanced significantly, with research showing they now deliver consistently high-quality learning experiences. Expect these trends to continue, along with the big tech companies developing AI-integrated solutions for learners and educators, e.g. Gemini for Education, Copilot for Education, ChatGPT Edu and Pearson+.

3. Watch out for sector disruption, the result of a reduced need for textbooks, a different approach to assessment, and data becoming even more important. In 2025 Chegg, the US education company, reported first-quarter revenues down 30%, naming Google's GenAI intelligent summaries as significantly contributing to the sharp decrease in its traffic. And they are not the only ones impacted: Pearson et al. are changing their plans, hoping that AI-enhanced textbooks are the solution to declining sales. Personally, I'm not convinced.

By late 2025, large companies were finding that access to quality data was stopping them getting value from AI. In fact, Gartner found that 30% of GenAI projects fail because of poor data. As for assessment, in a somewhat backward and reactive step, some have reverted to more traditional assessment methods, including oral exams, handwritten exams and portfolios, to combat plagiarism. The smarter, more proactive solution would be to build AI into the assessment process, with appropriate guardrails for novice learners. Some have begun to make changes and will continue to do so into next year, but it's patchy.

4. Regulation will be in conflict with innovation. This year governments have been working hard to balance innovation with responsible oversight. In the UK and EU, policymakers recognise AI's potential but are introducing strict rules, creating a tension for schools and colleges that want to innovate. In contrast, the US is taking a more flexible approach, offering federal guidance rather than strict regulation. Expect this tension to continue well into 2026, and there's no simple resolution. While slowing down may feel defeatist, the answer isn't to rush implementation; it's to accelerate the validation process itself. Meet weekly to assess new tools, prioritise solutions based on the biggest challenges, implement, then move on to the next.

Reflections and projections for 2026

The level of investment in AI has driven what feels like an arms race in technological development. This has meant keeping up to date with new AI solutions has become increasingly difficult, as has understanding why the latest tool is better than the one you're currently using. Technology is advancing faster than individuals or institutions can sensibly integrate and manage within their existing practices. There is no single pathway forward, no consensus on best practice, and little time to evaluate what actually works before the next wave of tools arrives. This mismatch creates risks. Without proper integration, barriers may emerge, whether through poorly designed policies that restrict innovation or the development of tools that undermine rather than support learning. Personalisation and more authentic methods of assessment will remain the North Star for many in navigating this disruptive environment. Keep them in mind, but remember to look down every now and again; you don't want to trip up.

Personally, I’m excited about 2026. AI is opening doors we couldn’t have imagined even a few years ago, and the potential to do good things, to truly make a difference, feels within reach. However, the pace of development is uneven and the world remains unpredictable. More realistically, we are likely to see parts of the education sector make genuine breakthroughs, while others hold back and wait, the result of indecision, or taking a more cautious approach. There is of course no way of knowing which one will succeed in the long run.

Whatever the reason, 2026 looks set to intensify the “Jagged frontier”.

Perhaps Winston Churchill should close out 2025.

Merry Xmas and a Happy New year everyone – put your running shoes on, but make sure the race is worth running and the prize worth having!

Transforming Learning – GenAI is two years old

ChatGPT – Happy second birthday
Generative AI (GenAI), specifically ChatGPT, exploded onto the scene in November 2022, which means it is only two years old. Initially people were slow to react, trying to figure out what this new technology was; many were confused, thinking it was a "bit like Google." But when they saw what it could do – "generating" detailed, human-like responses to human-written "prompts" – ideas as to what it could be used for started to emerge. The uptake was extraordinary, with over 1 million people using it within the first five days; a year later this had grown to 153 million monthly users, and as of November 2024 it's around 200 million. The use of GPTs across all platforms is difficult to estimate, but it could be something in the region of 400–500 million. That said, and to put this in perspective, Google Search handles over 8.5 billion searches every day – the equivalent of the world's population!

From Wow to adoption
Initially there was the WOW moment; AI itself had been around for a long time, but GenAI made it accessible to ordinary people. In the period from November 2022 to early 2023 we saw the early adopters, driven mostly by curiosity and a desire to experiment. By mid-2023 it became a little more mainstream as other GPTs emerged, e.g. Google's Bard (now Gemini) and Microsoft's Copilot, to name just two. But it was not all plain sailing: ethical concerns began to grow, and by the end of 2023 people were talking about misinformation, problems with academic integrity, and job displacement. This led to calls for greater regulation, especially in Europe, where AI governance frameworks were developed to address some of the risks.

In terms of education, initially there were calls to ban learners from using these tools, in response to answers being produced that were clearly not the work of the individual. And although many still worry, by early 2024 there was a creeping acceptance that the genie was out of the bottle and it was time for schools, colleges, and universities to redefine their policies, accept GPTs, and integrate rather than ban. 2024 saw even greater adoption; according to a recent survey, 48% of teachers are now using GenAI tools in some way.

GenAI – Educational disrupter
There have been significant changes in education over the last 50 years, e.g. the introduction of personal computers and the Internet (1980s–1990s), making content far more accessible and changing some learning practices. Then in the 2000s–2010s we saw the development of e-learning platforms such as Moodle and Blackboard, and MOOCs such as Coursera. This fuelled the growth of online education, providing learners with access to quality courses across the globe.

But I am going to argue that, as important as these developments were – not least because they are essential underpinning technologies for GenAI, and we are always "standing on the shoulders of giants" – GenAI is a far bigger educational disrupter than anything that has come before. Here are a few of the reasons:

  • Personalised learning at scale: GenAI tools make it possible for everyone to enjoy a highly personalised learning experience. For instance, AI can adapt to an individual's learning style, pace, and level of understanding, offering custom explanations and feedback. This brings us far closer to solving Bloom's elusive two sigma problem.
  • Easier access to knowledge and resources: Although it could be argued the internet already offers the world's information on a page, the nature of the interaction has improved, making it far easier to use and to have almost human conversations. This means learners can now explore topics in depth, engage in Socratic questioning, produce summaries that reduce cognitive load, and be inspired by some of the questions the AI might ask.
  • Changing the teacher's role: Teachers and educators can use GenAI to automate administrative tasks such as marking and answering frequently asked questions. And perhaps more importantly, the traditional teacher-centred instructor role is shifting to that of a facilitator, guiding students rather than "telling" them.
  • Changing the skill set: Learners must rapidly enhance their skills in prompting, AI literacy and critical thinking, and foster a greater level of curiosity, if they are to remain desirable to employers.
  • Disrupting assessment: The use of GenAI for generating essays, reports, and answers has raised concerns about academic integrity. How can you tell if it's the learner's own work? As a result, educational institutions are now having to rethink assessments, moving towards more interactive, collaborative, and project-based formats.

Transforming learning
GenAI is not only disrupting the way learning is delivered; it's also having an impact on the way we learn.

A recent study by Matthias Stadler, Maria Bannert and Michael Sailer compared the use of large language models (LLMs), such as ChatGPT, and traditional search engines (Google) in helping with problem-based exploration. They focused on how each influences cognitive load and the quality of learning outcomes. What they found was a trade-off between cognitive ease and depth of learning. LLMs are effective at reducing the barriers to information, making them useful for tasks where efficiency is a priority, but they may not be as beneficial for tasks requiring critical evaluation and complex reasoning. Traditional search engines require the learner to put in far more effort in terms of thinking, which results in a deeper and better understanding of the subject matter.

The research reveals a fascinating paradox in how learners interact with digital learning tools. When using LLMs, learners experienced a dramatically reduced cognitive burden. In other words, they had far less information to think about, making it easier to "see the wood for the trees." This is what any good teacher does: they simplify. But because there was little effort required (no desirable difficulty), learners were less engaged, and as a result there was little intellectual development.

This leads to one of the biggest concerns about Generative AI: the idea that it can be used as a way of offloading learning. The problem is, you can't.

Conclusions
As we celebrate ChatGPT's second birthday, it's clear that GenAI is more than a fleeting novelty; it has already begun to disrupt the world of education and learning. Its ability to personalise learning, reduce cognitive barriers, and provide human-friendly access to resources holds immense potential to transform how we teach and learn. However, the opportunities come hand in hand with significant challenges.

The risk of over-reliance on GenAI, where learners disengage from critical thinking and problem solving, cannot be ignored. True learning requires effort, reflection, and the development of independent thought, skills that no technology can substitute.

The role of educators is crucial in ensuring that GenAI is used to complement, not replace, these processes.

You can’t outsource learning – Cognitive offloading 

As we begin to better understand the capabilities of Generative AI (Gen AI) and tools such as ChatGPT, there is also a need to consider the wider implications of this new technology. Much has been made of the more immediate impact – students using Gen AI to produce answers that are not their own – but less is known about what might be happening in the longer term: the effect on learning and how our brains might change over time.

There is little doubt that Gen AI tools offer substantial benefits (see previous blogs, Let's chat about ChatGPT and Chatting with a Chat Bot – Prompting), including access to vast amounts of knowledge, explained in an easy-to-understand manner, as well as the ability to generate original content instantly. However, there might be a significant problem with using these tools that has not yet been fully realised, one that could have implications for learning and learning efficacy: what if we become too reliant on these technologies, asking them to solve problems before we even think about them ourselves? This fear found expression in debates well before Gen AI, in particular an article written by Nicholas Carr in 2008 asking "Is Google making us stupid?" – spoiler alert, the debate continues. There is also an interesting term coined by the neuroscientist and psychiatrist Manfred Spitzer in 2012, "digital dementia", describing the changes in cognition that result from overusing technology.

But the focus of this blog is on cognitive offloading (circa 1995), which, as you might guess, is about allowing some of your thinking, processing and learning to be outsourced to a technology.

Cognitive offloading
Cognitive offloading is in itself neither good nor bad; it refers to the delegation of cognitive processes to external tools or devices such as calculators, the internet and, more recently of course, Gen AI. In simple terms, there is a danger that by instinctively and habitually going to Google or ChatGPT for answers, your brain misses out on an essential part of the learning process: reflecting on what you already know, pulling the information forward and, as a result, reinforcing that knowledge (retrieval practice), then combining it with the new information to better understand what is being said or required.

As highlighted by the examples in the paragraph above, cognitive offloading is not a new concern, and it is not specific to Gen AI. However, the level of cognitive offloading, the sophistication of the answers and the opportunities to use these technologies are all increasing, and as a result the scale and impact are greater.

Habitual dependency – one of the main concerns is that, even before the question is processed, the student instinctively plugs it into the likes of ChatGPT without any attention or thought, the prompt regurgitated from memory: "please answer this question in 100 words". This leads to possibly the worst situation, where all thought is delegated and, worryingly, the answer is unquestioningly believed to be true.

Cognitive offloading in action – Blindly following the Sat Nav! Research has shown that offloading navigation to GPS devices impairs spatial memory.

Benefits of cognitive offloading – it's important to add that there are benefits to cognitive offloading; for example, it reduces cognitive load, which is a significant problem in learning. The technology helps reduce the demand on our short-term memory, freeing the brain to focus on what is more important.

Also, some disagree as to the long-term impact, arguing that short-term evidence (see below) is not necessarily the best way to form long-term conclusions. For example, there were concerns that calculators would affect our ability to do maths in our heads, but research found little difference whether students used calculators or not, and the debate has moved on to consider how calculators could be used to complement and reinforce mental and written methods of maths. On balance, the research suggests that cognitive offloading increases immediate task performance but diminishes subsequent memory of the offloaded information.

Evidence
There is little research on the impact of Gen AI due to it being so new, but as mentioned above we have a large amount of evidence on what has happened since the introduction of the internet and search.

  • In the paper Information Without Knowledge: The Effects of Internet Search on Learning, Matthew Fisher et al. found that participants who were allowed to search for information online were overconfident about their ability to comprehend the information, and those who used the internet were less likely to remember what they had read.
  • Dr Benjamin Storm the lead author of Cognitive offloading: How the Internet is increasingly taking over human memory, commented, “Memory is changing. Our research shows that as we use the Internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother.”

What should you do?
To mitigate the risks of cognitive offloading, the simple answer is to limit or reduce your dependency and use Gen AI to supplement your learning rather than as a primary source. For example, ask it to come up with ideas and lists but not the final text, then spend your time linking the information together and shaping the arguments.

Chatting with a Chat Bot – Prompting

In December last year I wrote about what was then a relatively new technology, Generative AI (GAI). Seven months later it has become one of the most exciting and scary developments we have seen in recent years; it has the potential to create transformative change that will affect our very way of life, how we work and, the area I am most interested in, how we learn. Initially it was all about a particular type of GAI called ChatGPT 3.5, a large language model from OpenAI, backed by Microsoft. But the market reacted quickly and there are now many more models, including Bard from Google, Llama 2 from Meta and a paid-for version of ChatGPT imaginatively entitled ChatGPT 4. And just to make this a little more complicated, in early February Microsoft unveiled a new version of Bing (Microsoft's search engine that competes with Google) that includes an AI chatbot powered by the same technology as ChatGPT.

One of the reasons for its rapid adoption is that it's so easy to use: you can literally chat with it as you might a human. However, as with people, to have a meaningful conversation you need to plan what you want to say and be clear in how you say it, whilst providing sufficient context to avoid misunderstanding.

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” The Turing Test – Alan Turing

Prompting – rubbish in rubbish out

Prompting is how we talk with these GAI models. The quality and specificity of the prompt can significantly influence the response you get. A well-crafted prompt can lead to a coherent and relevant answer, whilst a poorly formulated one offers up ambiguity and irrelevant information. If only people thought as deeply about how they communicate with each other, we might avoid a lot of problems!

How to prompt
• Be clear, use specific and unambiguous language.
• Provide context as to why you are asking the question or who you are, and write in complete sentences. For example, "would William Shakespeare be considered a great writer if he were to be published today?"
• Ask open ended questions, you will get more detailed and creative responses.
• Set rules, such as the tone required or the length of the answer, limiting it to so many words, sentences or paragraphs. For example, "in a sentence could you provide a motivational response as to why learning is important?"
• Ask a follow up question if you don’t get the answer you want. GAI is conversational and will remember what you asked last. For example, if you don’t think the answer goes into sufficient detail, say “could you provide more detail as to why this particular event was considered so important?”.
• Providing examples of what you want will result in far more accurate answers. For example, why not copy in the report you wrote last time and ask GAI to update it with new information, keeping the same style and within 10% of the word count.

……And although there is no need to be polite, I have to admit to saying please and thank you.
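The prompting habits above (context, a clear task, explicit rules) can be made repeatable by assembling the prompt from named parts before you paste or send it. Below is a minimal sketch in plain Python; the function name `build_prompt` and the section labels are my own illustrative choices, not part of any particular AI tool or library.

```python
def build_prompt(context, task, rules=None):
    """Combine context, a clear task, and optional rules into one prompt string.

    context: who you are / why you are asking (avoids misunderstanding)
    task:    the specific, unambiguous request
    rules:   optional list of constraints, e.g. tone or word limits
    """
    parts = [f"Context: {context}", f"Task: {task}"]
    if rules:
        # Joining the rules into one line keeps the constraints explicit
        parts.append("Rules: " + "; ".join(rules))
    return "\n".join(parts)


# Example usage, echoing the Shakespeare-style prompts above
prompt = build_prompt(
    context="I am a student revising English literature.",
    task="Would William Shakespeare be considered a great writer if published today?",
    rules=["use a conversational tone", "keep the answer under 100 words"],
)
print(prompt)
```

The point of the sketch is simply that the same three ingredients get included every time, rather than relying on whatever you happen to remember to type.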

Types of questions

ChatGPT is very flexible when it comes to answering questions; the secret is in getting the prompt right. Here are just a few examples, with responses from ChatGPT.

• General knowledge, Q When was Winston Churchill born? A Winston Churchill was born on November 30, 1874.
• Explainer questions, Q Can you explain in 5 words why Churchill was considered a great prime minister? A Inspiring leadership during wartime victory.
• Comparison questions, Q In a sentence can you compare Winston Churchill with Boris Johnson? A. Winston Churchill was a historical wartime leader known for his inspirational leadership during World War II, while Boris Johnson is a contemporary British politician who served as Prime Minister from 2019 to 2022.
• Others, ask…. How to do something e.g. bake a lemon drizzle cake, Can you define apprentice, Tell me a joke, Write me a poem in Japanese. You are limited only by your imagination.

Using it to learn

There are many ways in which this new technology can help with learning, but it comes with a couple of health warnings. Firstly, GenAI can be a great teacher, but it cannot learn for you. It is an excellent resource, able to provide answers 24/7, clarify, summarise etc., but ultimately you are the one that has to do the learning. Secondly, all GenAI bots suffer from something called hallucination: they make things up. The problem is that, as a learner, you might not be able to tell, because the answer will sound so authentic. In terms of how common this is, ChatGPT estimates around 20% of answers given might have something wrong with them, though they are working on reducing this to less than 10%.

Here are a few ways you can use GAI
• Summarise large amounts of text – copy a whole section of text into the model and ask it to summarise the most important points. Remember the more detail you give, the more relevant the response, e.g. Produce me a timeline of key events or identify the theories used in the answer.
• Question practice and marking – copy a question in and ask for the answer in 100 words. Paste your answer in and ask it to give you some feedback against the answer it has just produced. This can be further refined if you put in the examiners answer and if you have it, the marking guide.
• Ask for improvement – put into the model your answer with the examiners answer and ask how you might improve the writing style, making it more concise or highlighting the most important points.
• Produce flip cards – ask the model to write you 5 questions with answers in the style of a flip card.
• Produce an answer for a specific qualification – ask if it could produce an answer that is possible to complete in one hour, that would pass the AQA, GCSE exam in biology.
• Explain something – ask can you explain, for example Photosynthesis in simple terms or as an analogy or metaphor.
• Coach me – Ask it to review your answer against the examiners answer but rather than correct it ask it to coach you through the process so that you develop a better understanding.
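The "question practice and marking" workflow above can be sketched the same way: gather the question, your answer, the examiner's answer and (if you have it) the marking guide into a single feedback prompt. Everything here is illustrative, assuming a hypothetical `marking_prompt` helper; the assembled text would be pasted into whichever GenAI tool you use.

```python
def marking_prompt(question, student_answer, examiner_answer, marking_guide=None):
    """Build a prompt asking the model for feedback rather than a rewritten answer."""
    sections = [
        "You are a helpful examiner. Give constructive feedback only; "
        "do not rewrite the student's answer for them.",
        f"Question: {question}",
        f"Student answer: {student_answer}",
        f"Examiner answer: {examiner_answer}",
    ]
    if marking_guide:
        # The marking guide is optional but sharpens the feedback considerably
        sections.append(f"Marking guide: {marking_guide}")
    sections.append("Feedback (strengths, weaknesses, one suggestion to improve):")
    return "\n".join(sections)
```

For the coaching variant described above, you would simply swap the final instruction for something like "coach me through the gaps with questions rather than corrections."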

There is little doubt as to the potential of GenAI in learning; its biggest impact may be in developing countries where there is limited access to teachers and few resources. Although most would agree that an educated world is a better one, there will need to be some safeguards. It can't be left to the open market; education is simply too important.

“Education is the most powerful weapon which you can use to change the world”
Nelson Mandela

And if you want to see some of these tools in action, as well as hear Sal Khan talk about Khanmigo, his version of a teacher chatbot, see below.
Sal Khan talks about Khanmigo
ChatGPT in action for studying and exams
Revise SMARTER, not harder: How to use ChatGPT to ace your exams