AI is the Opium of the people – Cognitive Dependency

A dramatic headline for what I believe could become a significant and damaging problem. Amid all the noise around AI, something is creeping up on us. It's not making headlines or trending on social media, but it is reshaping the architecture of the human mind – it's called Cognitive Dependency.

It was of course Karl Marx who famously said that religion is the opium of the people – not as a criticism, but to highlight the comfort and relief religion brought by distracting from the hardship people experienced in everyday life. However, much like opium, religion didn't eliminate suffering; it simply made it easier to bear. The problem was that over time people lost their capacity to think for themselves, becoming reliant and potentially addicted.


Stay with me….

We live in a world that prizes answers over thought, output over process, and fast over slow. Add to that the relentless pressure to succeed, or in some instances simply survive, and it becomes not only understandable but logical that people will reach for the easiest solution, regardless of the consequences.

And this is where I hope the parallels can be drawn. Just as opium offered relief from the hardships of the 19th century, AI offers relief from the cognitive demands of a world moving too fast to keep up with. This is not about being lazy; the catalyst is exhaustion and a need to be successful or, at least, to be seen to be so. The danger may not at first be obvious but, much as Marx observed, what begins as an easy solution becomes a quiet dependency and ultimately a kind of amnesia. Over time, not only do you become devoid of your own ideas, you completely forget what it means to think for yourself.

Cognitive offload or Cognitive dependency
But we need to make sure we don't throw the baby out with the bathwater. This is not a general criticism of AI and its potential to erode our capacity to think; it's far more specific. AI in itself is not harmful, for now at least, but to better understand how to work with it and avoid creating problems for ourselves in the future, we need to make a clear distinction between two very different ways in which we use it. The first is as a tool to free up our mental power – this is called cognitive offload. The second is as a surrogate for thinking – this is the more sinister cognitive dependency.

Cognitive offload – The mental effort required to process and hold information in working memory is referred to as cognitive load. One of the reasons people struggle to learn is that they are trying to deal with too much information at any one time; reduce the load and learning becomes easier. A calculator is a good example of how technology can help. By outsourcing or offloading mental arithmetic, the mind is freed to focus on higher-order thinking. This is the use of AI to extend human capability without replacing human thought.

Cognitive dependency – Where cognitive offload removes some of the “clutter”, freeing the brain to focus on more important ideas, cognitive dependency is far more invasive and results in a situation where the brain's capacity to think deteriorates because AI is doing all the hard work. In this study, Jinrui Tian and Ronghua Zhang from Wuhan University found that greater AI dependence was associated with lower levels of critical thinking.

The sat nav is a good example of what this looks like in practice. When we follow a voice telling us where to turn, we are not navigating, we are being navigated. Over time, the mental map we once built through attention and experience becomes redundant. Studies (Louisa Dahmani & Véronique D. Bohbot) have shown that regular sat nav users demonstrate measurably reduced spatial awareness and struggle to recall routes they have driven down many times before.

This distinction really matters. A calculator leaves your mathematical reasoning intact, simply handling the “grunt work”. But continual use of a sat nav erodes our capacity to orient ourselves, possibly for good. There is also something far deeper potentially happening – what Andy Clark and David Chalmers called the extended mind theory. Eventually the tools we rely on stop feeling like tools and become extensions of our cognitive selves, as intimate as memory or perception. This leads to a difficult question: if the machine is part of who we are, what happens when it's taken away?

No sleepwalking please
AI is arguably the most transformative technology we have ever seen, and its potential to enhance learning, expand access, and accelerate understanding is genuinely exciting. But as educators and learners, we need to be aware of the problems. A generation that outsources its thinking doesn’t just lose a skill, it loses a sense of self, that quiet certainty that your thoughts are your own.

The good news is that we can do something about it. The question is not whether AI belongs in education; it clearly does. But we need to recognise that there is a problem and then begin to change attitudes and methodologies to combat the negative implications. In practice this might look like designing assessments that reward process over output, asking students to show their reasoning before they reach for AI assistance, or building in regular “unplugged” tasks where thinking has to take place without the support of technology. It means teaching students not just how to use AI but when not to, and helping them develop the self-awareness to know the difference.

We built tools to save us time so we could think more. Let’s make sure that’s still what we’re doing.

Back to the future – Reflections and Projections

One of the most valuable parts of learning is discovering new ideas and different ways of thinking. While some of this comes from formal teaching, we all have access to a vast library of knowledge that can help us learn some of these skills for ourselves. We just need to ask the right questions and look in the right place. Oh, and you might find it helpful to have pen and paper.

Simply reflect on a specific experience and critically examine it by asking yourself questions such as: What did I learn from this? What aspects were unclear or confusing? What approaches were effective, and which ones fell short? Reflection not only deepens understanding but helps identify ways in which you can improve.

The value of reflection is well understood and encouraged within education, and although students may not initially see its importance, they will be asked to produce reflective statements or keep a journal in an attempt to get them to appreciate its worth. From a cognitive perspective your brain isn't just opening a file; it's actually reconstructing (ironically, like an LLM) and rewiring the information. The process strengthens synaptic pathways and develops new associations, which in turn help integrate different types of information into a “big picture”, before the memory is finally resaved as a stronger, more complex version of the original idea or thought.

But enough of what it is, let me see if I can put it into practice by reflecting on 2025 and coming up with a few ideas for 2026.

Reflections on 2025
This time last year I set out some predictions as to what might happen in learning from 2025 onwards, with the caveat that making predictions is a “mug's game.” Looking back, there was nothing particularly radical in what I suggested. In that sense, if I were being critical, the ideas themselves may not have helped that much. Even so, I hope that by narrowing the field of possibilities they made the future seem a little less confusing.

The 2025 predictions:

1. Learning will not change but learners will. A reference to how learners will develop different behaviours as a result of AI. This year research from MIT confirmed what many had suspected: using AI has an impact on brain activity, causing what they called “cognitive debt”, i.e. saving effort now but weakening cognitive abilities over time. This will remain a challenge in 2026 and beyond, requiring educators to get ahead of the technology rather than simply acknowledging its existence and use.

2. AI (GenAI) will continue to dominate. An easy one perhaps – of course AI was going to play a hugely important part in learning. But there was specific reference to it becoming the “go to” tool for students and the emergence of teaching chatbots. A survey by Hepi and Kortext early in 2025 found that the proportion of students using AI has jumped from 66% last year to 92% this year, which seems conclusive: AI has become an ever-present aspect of student life and one that cannot be ignored. Teacher bots have also advanced significantly, with research showing they now deliver consistently high-quality learning experiences. Expect these trends to continue, along with the big tech companies developing AI-integrated solutions for learners and educators, e.g. Gemini for Education, Copilot for Education, ChatGPT Edu, Pearson+.

3. Watch out for sector disruption, the result of a reduced need for textbooks, a different approach to assessment, and data becoming even more important. In 2025 Chegg, the US publisher, reported first-quarter revenues down 30%, naming Google's GenAI intelligent summaries as significantly contributing to the sharp decrease in its traffic. And they are not the only ones impacted: Pearson et al are changing their plans, hoping that AI-enhanced textbooks are the solution to declining sales. Personally, I'm not convinced.

By late 2025, large companies were finding that access to quality data was stopping them getting value from AI. In fact, Gartner found that 30% of GenAI projects fail because of poor data. As for assessment, in a somewhat backward and reactive step, some have reverted to more traditional assessment methods – oral exams, handwritten exams and portfolios – to combat plagiarism. The smarter, more proactive solution would be to build AI into the assessment process, with appropriate guardrails for novice learners. Some have begun to make changes and will continue to do so into next year, but it's patchy.

4. Regulation will be in conflict with innovation. This year governments have been working hard to balance innovation with responsible oversight. In the UK and EU, policymakers recognise AI's potential but are introducing strict rules, creating a tension for schools and colleges that want to innovate. In contrast, the US is taking a more flexible approach, offering federal guidance rather than strict regulation. Expect this tension to continue well into 2026, and there's no simple resolution. While slowing down may feel defeatist, the answer isn't to rush implementation; it's to accelerate the validation process itself. Meet weekly to assess new tools, prioritise solutions based on the biggest challenges, implement, then move on to the next.

Reflections and projections for 2026 – The level of investment in AI has driven what feels like an arms race in technological development. Keeping up to date with new AI solutions has become increasingly difficult, as has understanding why the latest tool is better than the one you're currently using. Technology is advancing faster than individuals or institutions can sensibly integrate and manage within their existing practices. There is no single pathway forward, no consensus on best practice, and little time to evaluate what actually works before the next wave of tools arrives. This mismatch creates risks. Without proper integration, barriers may emerge, whether through poorly designed policies that restrict innovation or the development of tools that undermine rather than support learning. Personalisation and more authentic methods of assessment will remain the North Star for many in navigating this disruptive environment. Keep them in mind, but remember to look down every now and again – you don't want to trip up.

Personally, I’m excited about 2026. AI is opening doors we couldn’t have imagined even a few years ago, and the potential to do good things, to truly make a difference, feels within reach. Realistically, though, the pace of development is uneven and the world remains unpredictable. We are likely to see parts of the education sector make genuine breakthroughs, while others hold back and wait, the result of indecision or a more cautious approach. There is of course no way of knowing which will succeed in the long run.

Whatever the reason, 2026 looks set to intensify the “Jagged frontier”.

Perhaps Winston Churchill should close out 2025.

Merry Xmas and a Happy New Year everyone – put your running shoes on, but make sure the race is worth running and the prize worth having!

The AI Education Paradox: Answers are cheap, questions are priceless

After 7.5 million years of computation, Deep Thought reveals the answer: “forty-two.”

This was the “Answer to the Ultimate Question of Life, the Universe, and Everything” in The Hitchhiker’s Guide to the Galaxy. 

Coming up with answers to questions is reasonably easy, especially for such a big computer as “Deep Thought,” although in fairness taking 7.5 million years is a little slow by modern standards! When I asked ChatGPT it only needed a few seconds, although it did eventually ask me what I thought the answer was. 

What is far more difficult than answering questions is asking them, which is why in Hitchhiker’s they go on to ask Deep Thought if it can produce “The Ultimate Question” to go with the answer 42. See* – spoiler, it doesn’t end well.

AI has all the answers?
Historically, it could be argued that the educational model has been largely focussed on knowledge transfer, requiring students to absorb and regurgitate pre-determined facts and solutions. This model, while valuable when information was not so accessible, is starting to creak under the pressure of new technologies such as GenAI. After all, what’s the point of teaching facts and answers to questions when you have ChatGPT?

Although you could have made a very similar point about the internet, large language models are different. They are far more accessible and provide credible, if not always correct, answers instantly, requiring little or no effort by the individual – which is of course part of the problem.

This is not, however, a good argument to avoid teaching knowledge, because without it as a foundation it becomes almost impossible to develop those hugely important higher-level skills such as critical thinking and problem solving. Dan Willingham, the cognitive scientist, is very clear on this:

“Thinking well requires knowing facts, and that’s true not simply because you need something to think about. Critical thinking and processes such as reasoning and problem solving are intimately intertwined with factual knowledge.” Dan Willingham (edited).

But that’s not all. In addition to continuing to teach knowledge, we need to pivot away from what GenAI does best, e.g. data analysis, repetitive tasks and answering questions, to focus on the areas in which humans excel.

Learning… to beat AI
There is little doubt that GenAI is eroding human skills and, as a consequence, reshaping labour markets. The Tony Blair Institute (The Impact of AI on the Labour Market) estimates that something in the region of one to three million jobs could be displaced**. Take, for example, my own industry, finance. GenAI can analyse bank statements, matching transactions with internal records; it can review historical financial data and identify trends and patterns, as well as produce forecasts to support financial planning.

However, it’s not all bad news. Although GenAI is excellent at processing vast amounts of data and providing rapid output, the quality of what is produced is very dependent on the questions asked, and humans are capable of asking great questions.

The three AI-proof human skills

Skill no 1 – Asking the right questions. This may seem counterintuitive – surely “any fool can ask a question” – but can they ask a good one? The ability to ask the right question is far from trivial; it’s a spark for curiosity, and leads to growth and critical thinking. Socrates built his entire philosophy on the principle of asking questions, challenging assumptions in search of the underlying truth, and in so doing fostered a deep understanding of the subject.

Questions aren’t merely tools for obtaining answers; they are catalysts for refining our thinking, discovering new perspectives, and embracing intellectual humility.

How to ask questions:

  • Move beyond simple “what” and “how” questions; ask “why” and “what if”
  • Break down complex inquiries into smaller, more manageable parts
  • Challenge assumptions, for example, “what are the counterarguments to this idea?” or “What would someone with a different perspective say?”

Skill no 2 – Evaluating the answer. While AI can produce insights, summaries, or responses that may seem well crafted, it lacks the uniquely human ability to contextualise, empathise, and discern subtleties. Think of evaluation in this context as – the “human act” of applying critical thinking, professional judgment, and emotional intelligence to assess the relevance, accuracy and practical value of AI generated content.

This process goes beyond mere interpretation. Human evaluation is, in essence, the bridge that ensures AI contributions remain meaningful and grounded in purpose. In simple terms, interpretation focuses on meaning, while evaluation focuses on judgment.

How to evaluate:

  • Have clear criteria; be specific and decide on the method of prioritisation
  • Use multiple sources of evidence, combine numerical data with qualitative insights
  • Distinguish facts from assumptions, being careful to separate what you can prove from information that is speculative or anecdotal

Skill no 3 – Maintaining agency and an ethical perspective. Human agency requires the individual to act independently and make informed decisions about the AI output. Agency involves understanding AI’s capabilities and limitations, questioning its outputs, and actively deciding how it is applied rather than passively following its suggestions. By retaining oversight and exercising judgment, we ensure that AI remains a tool serving human needs, rather than a means for delegating responsibility.

Equally important is the ethical perspective. AI is devoid of inherent morality, able only to reflect the values embedded in its training data. Humans must actively define and enforce ethical boundaries, addressing biases and prioritising human values such as compassion and social responsibility.

How to maintain agency and an ethical perspective:

  • Educate yourself about AI, understanding how it works, including its capabilities, limitations, and potential biases
  • Develop an ethical framework. Create a set of guidelines to assess AI use, including its long-term impact on individuals, communities, and the environment
  • Be the Human in the Loop. Remember that you have ultimate responsibility both for the final decision and the ethical impact. This should never be delegated

Conclusion
While AI delivers instant results, true education goes beyond merely retrieving information. It requires deep understanding, a spirit of inquiry, and continuous personal growth. For students, this translates to mastering the art of asking thoughtful, probing questions, and developing the ability to critically evaluate responses.

Educators have a more complex role. They must not only provide the necessary foundational knowledge base, but also teach and assess those uniquely human skills that AI will find hard to replicate – the ability to ask good questions, judge answers wisely, and maintain ethical agency.

Footnotes
*In Hitchhiker’s, Deep Thought is unable to come up with the ultimate question; it needs a bigger and better computer. However, it can build one – “one of such infinite complexity that life itself will form part of its operational matrix.” It’s called Earth!
**The Impact of AI on the Labour Market report goes on to say that the job displacements will not occur all at once, but instead will rise gradually with the pace of AI adoption across the wider economy. Moreover, the rise in unemployment is likely to be capped and ultimately offset as AI creates new demand for workers, which pulls displaced workers back into the workforce.