Sticky – The Science of Storytelling

Long before writing, and even “classrooms,” people shared knowledge through the telling of stories. These stories conveyed essential lessons in survival and reflected the social norms of their time, handed down through generations.

To fulfil their purpose, they had to be memorable. What remains unclear is whether the story evolved to fit the brain’s natural ability to remember, or whether stories in some way shaped our brains to make them easier to recall – a classic chicken-and-egg dilemma.

Regardless, it could be argued that stories were our first educational technology, influencing culture, guiding decisions, and ensuring knowledge was not lost.

If you don’t have time to read this month’s blog – listen to my AI alter ego summarise the key points.

Today, when we think of stories, we often associate them with novels, films, animations and, more recently, podcasts. At their core, though, stories are simply a structured way of sharing events and information, and most follow a familiar pattern. They begin by setting the scene, move into a middle phase where the story unfolds, and end with some form of resolution that provides clarity or closure. This structure helps us make sense of experiences, maintain attention, communicate ideas, evoke emotions, and connect with others in meaningful ways. All of which help with recall.

They are also incredibly persuasive and can become a vehicle for knowledge transfer. Simply saying, “let’s take a moment, relax, I want to tell you a story” changes the mood in the room and opens the mind to a new experience.

If you’re still unsure about their power, Yuval Noah Harari provides a compelling example. He explains that money holds no inherent value: a banknote is simply paper, and digital currency just data. What makes money meaningful is the collective belief in its worth. This shared understanding allows it to function as a medium of exchange for goods, services, and influence.

He goes on to say….

Why are stories sticky?
But what is happening in the brain when you hear a story or read one for yourself? Why do stories stay with us long after we’ve heard them? What makes them stick?

Cognitive rapport – When someone tells a story, something remarkable happens in the brain. Instead of just processing words, the listener’s brain begins to light up in multiple areas all at once. Stories create what researchers call neural coupling, the listener’s brain patterns start to mirror the storyteller’s, helping ideas flow more smoothly and making them easier to understand (Stephens et al., 2010).

Emotional – Importantly, stories also stir emotion, and when emotions are triggered, the amygdala and hippocampus work together to strengthen memory (McGaugh, 2013). In one study, a neutral learning event was given an emotional focus. Subjects were asked to memorise a list of words, a non-emotional task. They were then exposed to a brief, intense emotional experience, e.g. putting their arms into icy water (the cold pressor stress test), which released the stress hormones epinephrine and cortisol, telling the brain this was an important event. When tested weeks later, the individuals had forgotten the cold-water experience but remembered the list of words!

Structured – Stories give knowledge a shape and structure. A beginning, a challenge and a resolution act like mental scaffolding, allowing learners to slot new information into place. Structure also reduces cognitive load (Sweller, 1988) and helps create schemas, the interconnected mental chunks of knowledge that are stored more easily in long-term memory.

Engaging – And lastly, stories build a human connection, helping create greater levels of engagement. Neuroscientist Paul J. Zak (2015) discovered that compelling narratives – those with a strong dramatic storyline – trigger the release of oxytocin, the neurochemical responsible for trust and empathy. In a learning context, this surge of empathy makes you more receptive to the message and strongly motivates you, helping you internalise the information and transforming simple facts into knowledge.

A word of caution – seductive details
However, not all stories help us learn. The danger is that they include fascinating but irrelevant information, known as “seductive details” (Harp & Mayer, 1998). The result is cognitive overload: the brain wastes resources processing the more interesting information at the expense of core principles. It can also break down that strong mental scaffolding, misdirecting the brain into building a new organisational framework around the wrong idea. To avoid this, the detail in your narrative must directly support the learning objective; ensure the story integrates the facts rather than just decorating them.

The final chapter
For educators, storytelling is not just a “nice extra”, it’s a valuable tool and a natural way to help people learn. A well-told story draws attention, lowers resistance, and creates the sense that what follows is worth holding on to. Learners don’t just hear the information, they experience it, making knowledge far more memorable.
For learners, resist the urge to dismiss the story as a diversion from the important stuff, and instead listen with curiosity. Stories work on the mind in subtle ways, connecting ideas, evoking emotions, and helping you see meaning long after the classroom door has closed. In this state, your brain does much of the hard work for you.

Want to know more?

The virtual educator has arrived!

But which one is me?

Inspired by a recent LinkedIn post I made about what it might be like to have an avatar as a teacher, I thought I should check the evidence on whether avatars actually improve learning before getting too carried away with the technology itself.

What is an avatar?
An avatar is a digital or computer-generated representation of a person or character in a virtual environment. It can take various forms, for example a simple profile picture on social media or an audio avatar talking about a specific subject using a synthetic voice. However, with major advancements in generative AI, avatars are evolving beyond static images or basic voice interactions. We are increasingly seeing lifelike digital humans emerge, sophisticated AI-driven avatars capable of “understanding” what we say and generating intelligent responses, speaking with realistic voices and impressive synchronised lip movements. This transformation is redefining how humans engage with AI-powered virtual beings, blurring the lines between digital representation and authentic interaction.

As to what they look like, here are some examples:

  • Firstly, an audio avatar that I have now built into my blog to provide a different perspective on what has been written. Here the avatar “chats” about the blog rather than simply reading it out loud. See above.
  • Secondly, a Pixar-style avatar. The goal here is to challenge the assumption that an avatar must resemble a real person to be effective.
  • And lastly, a more realistic avatar – effectively an attempt to replicate me, in a slightly imperfect way. This is not about fooling the audience, although that is now possible, but about exploring the idea that humans respond better to a more human-like character.

The talking head – good or bad?
However, there’s an elephant in the room when it comes to avatars: why do we need a talking head in the first place? Wouldn’t a simple voice-over, paired with well-structured content, be just as effective?

If you look at YouTube, almost everyone uses talking-head videos in different ways. Surely if they weren’t effective, no one would have them – a kind of “wisdom of crowds.” But does their popularity actually prove their value, or are we just following a trend without questioning its impact?

Let’s have a look at the evidence:
After reviewing multiple studies, the findings are somewhat mixed. However, there’s enough insight to help us find an approach that works.

First, we have research from Christina Sondermann and Martin Merkt – Like it or learn from it: Effects of talking heads in educational videos. They concluded that learning outcomes were worse for videos with talking heads; their concern was that the talking head resulted in higher levels of cognitive load. Yet participants rated their perceived learning higher for videos with a talking head, gave them better satisfaction ratings, and selected them more frequently. Secondly, another piece of research published five months later by Christina Sondermann and Martin Merkt (yes, the same people), What is the effect of talking heads in educational videos with different types of narrated slides. Here they found that “the inclusion of a talking head offers neither clear advantages nor disadvantages.” In effect, using a talking head had no detrimental impact, which is slightly at odds with their previous conclusion.

A little confusing, I agree, but stick with it….

Maybe we should move away from trying to prove the educational impact and instead consider the student’s perception of avatars. In the first report, Student Perceptions of AI-Generated Avatars, the students said “there was little difference between having an AI presenter or a human delivering a lecture recording.” They also thought that the AI-generated avatar was an efficient vehicle for content delivery. However, they still wanted human connection in their learning, thought some parts of learning needed to be facilitated by teachers, and felt that the avatar presentations were “not … like a real class.” The second report, Impact of Using Virtual Avatars in Educational Videos on User Experience, raised two really interesting points. Students found that high-quality video enhanced their learning, emotional experience, and overall engagement. Furthermore, when avatars displayed greater expressiveness, students felt more connected to the content, leading to improved comprehension and deeper involvement.

For those designing avatars, this means prioritising both technical quality and expressive alignment. Avatars should be visually clear, well animated, and their facial expressions should reinforce the message being conveyed.

What does this all mean?
Bringing everything together, we can conclude that avatars and talking heads are not distractions that lead to cognitive overload. Instead, students appreciate them and relate to them emotionally; in fact, they see little difference between a recorded tutor and an avatar. Their expressiveness enhances engagement and might prove highly effective in helping students remember key points.

To balance differing perspectives, a practical approach might be to omit the talking head when explaining highly complex topics (reducing cognitive load), allowing students to focus solely on the material, but to keep the avatar visible in most other situations, particularly for emphasising key concepts or prompting action, to ensure maximum impact. Alternatively, why not let the student decide, by offering them the choice of having the talking head or not?

How might avatars be used?
One important distinction in the use of avatars is whether they are autonomous or scripted. Autonomous avatars are powered by large language models, such as ChatGPT, allowing them to generate responses dynamically based on user interactions. In contrast, scripted avatars are entirely controlled by their creator, who directs what they say.
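To make the distinction concrete, here is a minimal sketch in Python. Everything in it – the question lookup, the `call_language_model` stub – is a hypothetical illustration rather than any real avatar platform’s API; it simply shows where the educator’s control ends and the model’s generation begins.

```python
# Illustrative sketch only - the names here are hypothetical, not a real avatar API.

SCRIPT = {
    "what is cognitive load?":
        "Cognitive load is the amount of information working memory "
        "can process at any one time.",
}

def scripted_avatar(question: str) -> str:
    """Scripted: every line is pre-written, so content, tone and
    accuracy stay fully under the educator's control."""
    return SCRIPT.get(question.lower().strip(),
                      "I don't have a scripted answer for that - please ask your tutor.")

def call_language_model(prompt: str) -> str:
    """Stand-in for a real LLM call (e.g. a ChatGPT-style API)."""
    return "...a dynamically generated answer..."

def autonomous_avatar(question: str) -> str:
    """Autonomous: the reply is generated on the fly, so it is flexible
    and conversational but needs oversight for accuracy and tone."""
    return call_language_model(
        f"You are a friendly teaching avatar. Answer briefly: {question}")

print(scripted_avatar("What is cognitive load?"))     # predictable, vetted answer
print(autonomous_avatar("Why do we forget things?"))  # generated, needs checking
```

The trade-off is the one discussed below: the scripted version swaps flexibility for guaranteed accuracy, while the autonomous version swaps guaranteed accuracy for range.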

A scripted avatar could be particularly useful in educational settings where consistency, accuracy, and intentional messaging are crucial. Because its responses are predetermined, educators can ensure that the avatar aligns with specific learning goals, maintains an appropriate tone, and avoids misinformation.

This makes it ideal for scenarios such as:
– Delivering structured lessons with carefully crafted explanations.
– Providing standardised guidance, ensuring every student receives the same high-quality information.
– Reinforcing key concepts without deviation, which can be especially beneficial when high-stakes assessments are used, as is the case with professional exams.

However, if we power these avatars with Generative AI, the possibilities increase significantly:

  • More personalised learning. One of the most exciting prospects is the ability of avatars to offer personalised and contextualised instruction.
  • Help with effective study. Avatars could be used to remind students about a specific learning strategy or a deadline for completing a piece of work. A friendly face at the right time might be more effective than an email from your tutor, or worse still, an automated one.
  • Motivational and engaging. These avatars could also have a positive effect on motivation and feelings about learning. They could be designed to match an individual’s personality and interests, making them far more effective at driving motivation and engagement.
  • Contextualised learning. AI-based avatars can support teaching in practical, real-world scenarios, including problem solving and case-based learning. Traditionally, creating these types of environments required significant resources, such as trained actors or expensively designed virtual worlds.

A few concerns – autonomous avatars
Of course, as with any new technology, there are some concerns and challenges:

Autonomous avatars pose several risks, starting with their ability to make mistakes; the particular problem with avatars is that they will be very convincing. We are already acutely aware that large language models can sometimes ‘hallucinate’, or simply make things up. Data protection is another concern, with risks ranging from deepfake misuse to avatars persuading users into sharing personal or confidential information that could be exploited. Finally, value bias is a challenge, as AI-trained avatars may unintentionally reflect biased perspectives that a professional educator would recognise and navigate more responsibly.

Conclusions
Avatars, whether simple or lifelike, are gaining traction in education. Research indicates that while talking heads don’t necessarily improve learning outcomes, they don’t harm them, and students perceive them positively. A key distinction lies between scripted avatars, offering consistent, accurate, pre-determined content ideal for structured lessons, and autonomous AI-powered avatars that open up a world of possibilities, including personalisation.

Avatars are a powerful and exciting new tool, offering capabilities that in many ways go beyond previous learning technologies, but their effectiveness very much depends on how they are designed and used. But hasn’t that always been the case….

Finally – this is an excellent video covering some of the research I have referred to. It is, of course, presented by an avatar: What Does Research Say about AI Avatars for Learning?

PS – which one is me? None of them, including the second one from the left.

You can’t outsource learning – Cognitive offloading 

As we begin to better understand the capabilities of Generative AI (Gen AI) and tools such as ChatGPT, there is also a need to consider the wider implications of this new technology. Much has been made of the more immediate impact – students using Gen AI to produce answers that are not their own – but less is known about what might be happening in the longer term: the effect on learning and how our brains might change over time.

There is little doubt that Gen AI tools offer substantial benefits (see previous blogs, Let’s chat about ChatGPT and Chatting with a Chat Bot – Prompting), including access to vast amounts of knowledge, explained in an easy-to-understand manner, as well as the ability to generate original content instantly. However, there might be a significant, not yet fully realised problem with using these tools, one with implications for learning and learning efficacy. What if we become too reliant on these technologies, asking them to solve problems before we even think about them ourselves? This fear found expression in debates well before Gen AI, notably in an article written by Nicholas Carr in 2008 asking “Is Google making us stupid?” – spoiler alert, the debate continues – and in the term coined by the neuroscientist and psychiatrist Manfred Spitzer in 2012, “digital dementia”, describing the changes in cognition that result from overusing technology.

But the focus of this blog is on cognitive offloading (a term dating from around 1995), which, as you might guess, is about allowing some of your thinking, processing and learning to be outsourced to a technology.

Cognitive offloading
Cognitive offloading in itself is neither good nor bad; it refers to the delegation of cognitive processes to external tools or devices such as calculators, the internet and, more recently of course, Gen AI. In simple terms, the danger is that by instinctively and habitually going to Google or ChatGPT for answers, your brain misses out on an essential part of the learning process: reflecting on what you already know, pulling the information forward and in doing so reinforcing that knowledge (retrieval practice), then combining it with the new information to better understand what is being said or required.

As the examples in the paragraph above highlight, cognitive offloading is not a new concern, and not one specific to Gen AI. However, the level of cognitive offloading, the sophistication of the answers and the opportunities to use these technologies are all increasing, and as a result the scale and impact are greater.

Habitual dependency – one of the main concerns is that, even before the question is processed, the student instinctively plugs it into the likes of ChatGPT without any attention or thought, the prompt regurgitated from memory: “please answer this question in 100 words”. This leads to possibly the worst situation of all, where all thought is delegated and, worryingly, the answer is unquestioningly believed to be true.

Cognitive offloading in action – blindly following the sat nav! Research has shown that offloading navigation to GPS devices impairs spatial memory.

Benefits of cognitive offloading – it’s important to add that cognitive offloading has benefits; for example, it reduces cognitive load, which is a significant problem in learning. The technology helps reduce the demand on our short-term memory, freeing the brain to focus on what is more important.

Also, some disagree about the long-term impact, arguing that short-term evidence (see below) is not necessarily the best basis for long-term conclusions. For example, there were concerns that calculators would affect our ability to do maths in our heads, but research found little difference whether students used calculators or not, and the debate has since moved on to consider how calculators could complement and reinforce mental and written methods of maths. Even so, the evidence suggests that cognitive offloading increases immediate task performance but diminishes subsequent memory for the offloaded information.

Evidence
There is little research on the impact of Gen AI due to it being so new, but as mentioned above we have a large amount of evidence on what has happened since the introduction of the internet and search.

  • In the paper Information Without Knowledge: The Effects of Internet Search on Learning, Matthew Fisher et al. found that participants who were allowed to search for information online were overconfident about their ability to comprehend the information, and those who used the internet were less likely to remember what they had read.
  • Dr Benjamin Storm, the lead author of Cognitive offloading: How the Internet is increasingly taking over human memory, commented, “Memory is changing. Our research shows that as we use the Internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother.”

What should you do?
To mitigate the risks of cognitive offloading, the simple answer is to limit or reduce your dependency and use Gen AI to supplement your learning rather than as a primary source. For example, ask it to come up with ideas and lists but not the final text, and spend your own time linking the information together and shaping the arguments.
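By way of illustration, here are two prompts for the same piece of work, sketched in Python purely to make the contrast explicit; the wording is my own example, not a prescribed formula.

```python
# Two ways to prompt a Gen AI tool about the same essay - illustrative wording only.

# Full offload: the tool does the thinking and the writing for you.
offloaded_prompt = "Write me a 1,000-word essay on cognitive offloading."

# Supplement: the tool supplies raw material, and you do the thinking.
supplement_prompt = (
    "List six ideas and two counterarguments I could use in an essay on "
    "cognitive offloading. Short bullet points only - do not write any "
    "of the essay text itself."
)

# Linking the ideas together and shaping the argument yourself is the
# retrieval and elaboration that actually builds long-term memory.
```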

The world of Pure Imagination

“There is no life I know, to compare with pure imagination. Living there, you’ll be free, if you truly wish to be. If you want to view paradise, simply look around and view it. Anything you want to, do it. Want to change the world? There’s nothing to it.”

These are a few lines from the song “Pure Imagination”, performed by Gene Wilder in the original 1971 Willy Wonka movie, always a good watch at Christmas. It was remade with Johnny Depp in 2005, and a prequel called Wonka, starring Timothée Chalamet, was released this December to much acclaim. The original story tells of a poor boy named Charlie Bucket who wins a golden ticket to tour the magical chocolate factory of the eccentric Willy Wonka. Although the story still has a contemporary feel, its appeal has more to do with the magical world Wonka creates, the morality of greed, and the recognition that actions have consequences.

The point, however, is that to create such a fantastical, spectacular, stupendous chocolate factory, Wonka required a very special quality – imagination!

Imagination
Imagination is tricky to define, with many linking it to creativity and contrasting it with knowledge, but I like this explanation provided by ChatGPT (checked, of course).

Imagination is the cognitive ability to form mental images, ideas, or concepts that are not directly perceived through the senses. It involves the capacity to create, manipulate, and combine mental representations, allowing individuals to explore possibilities, envision scenarios, and generate novel ideas.

There is a strong visual element to imagination, but it’s not driven by our senses: we are not looking at an object in the real world (external) and creating something new as a consequence. When you use your imagination, it’s coming from your internal world, often unconsciously influenced by your memories and feelings. In fact, when you imagine something, you don’t have to have experienced it before at all.

Imagination, creativity, and knowledge are intricately connected in the process of thinking, especially at the higher levels. Knowledge is the foundation, providing the raw material for imaginative exploration and creative synthesis. Imagination draws upon knowledge, resulting in mental representations and visual possibilities. Creativity transforms these imaginative ideas into valuable outcomes, for example solving a problem or developing a new product.

Imagination, original thought and Gen AI
I didn’t think this blog was going to have anything to do with Gen AI – apologies, I was trying to make it Gen AI free. But using it as a contrast might help with our understanding of imagination and, to some extent, original thought, i.e. ideas, concepts, or perspectives that are unique.

At the time of writing, no matter how impressive a Gen AI-created poem or picture might be, it is not the result of imagination as described above. The AI is simply accessing the huge data sets on which it has been trained and predicting the most likely next word or brush stroke. In other words, it isn’t capable of what we would call “original thought”, that is, having new ideas of its own. I should add that when I discussed this point with ChatGPT, it disagreed.

Genetics – and finally, in terms of understanding imagination, being imaginative or creative is not thought to be genetic. While genetic predispositions may create a foundation, the development and expression of imagination are shaped more by external influences (Nichols 1978, Barron & Parisi 1976, Reznikoff 1973).

The neuroscience of imagination – watch this if you’re interested in what is happening in your brain when you use your imagination.

Does imagination help with learning?
All very interesting, at least I hope so, but can using your imagination improve learning? Of course it can; below are some of the benefits:

  • Brings into play the imagination effect – a study in 2014 required two different groups to learn the parts of the respiratory system. One group was asked to imagine the parts from a text description but without a picture; the other had both text and picture (the control). Those who had to imagine the picture did better on a test than the control. The conclusion: people learn more deeply when prompted to form images depicting what the words describe. There are a number of reasons for this, but one is thought to be the reduction in cognitive load.
  • Encourages independent learning – The ability to think about a particular problem or situation using your imagination helps develop a more independent approach to learning.
  • Increases engagement – Imagination can make learning more engaging and enjoyable partly because the learning becomes more personal, as new information is related to something already known.
  • Improved memory retention – Creating mental images or scenarios related to the material being learned can improve memory retention. Imagination often requires visualisation, making it easier to recall information later.
  • Facilitates critical thinking – Imagining different scenarios and perspectives encourages critical thinking, allowing the learner to analyse information more deeply and consider various angles, leading to a richer understanding of the subject matter.
  • Stimulation of curiosity – Imagination sparks curiosity, motivating learners to explore topics further. This intrinsic type of motivation can then lead to a lifelong learning mindset.

What happened to Charlie Bucket and friends?
Charlie (Peter Ostrum) only ever starred in Willy Wonka; he later became a vet in New York. Veruca Salt (Julie Dawn Cole) continued to act but later became a psychotherapist. Violet Beauregarde (Denise Nickerson) also acted for a short while before getting a job as a receptionist. And Augustus Gloop (Michael Bollner) is now a lawyer in Germany.

Want to know more? Imagination: It’s Not What You Think. It’s How You Think – Charles Faulkner.

The last word we will give to Willy Wonka… But what do you think it means?

“We are the music makers; we are the dreamers of dreams.” Willy Wonka

Inquiry based learning is harmful – ouch!

Can I ask you a question: would you prefer to discover something for yourself, or be told what you should know?

Choices about how you want to learn are to a certain extent personal, perhaps even a learning style, but shouldn’t we be asking which approach is the most effective? And when it comes to that, we have evidence.

The problem is you might not like the results, I’m not sure I do.

The headline for this month’s blog is not mine but an edited one from John Sweller, of cognitive load fame, in a paper published this August by the Centre for Independent Studies in Australia. Although I have written about some aspects of inquiry-based learning (IBL) before, it’s worth taking a closer look, especially given the impact Sweller believes IBL-type methods have had in Australia. He suggests that the country’s rankings on international tests such as PISA have fallen because of a greater emphasis on IBL in classrooms across the country.

But first…..

What is inquiry-based learning?
Inquiry-based learning can be traced back to constructivism and the work of Piaget, Dewey, Vygotsky et al. Constructivism is an approach to learning that suggests people construct their own understanding and knowledge of the world through experiencing it and reflecting on those experiences. It sits alongside behaviourism (see last month’s blog) and cognitivism to form three important theories of learning.

As a process, IBL often starts with a question that encourages students to share their thoughts; these are then carefully challenged in order to test conviction and depth of understanding. The result is a more refined and robust appreciation of what was being discussed – learning has taken place. It is an approach in which the teacher and student share responsibility for learning. There are some slight variations on IBL, including problem-based learning (PBL) and project-based learning (PjBL); in these, rather than a question being the catalyst, it’s a problem.

This method is intuitively attractive and promoted widely in schools and higher education institutions around the world, which is what makes Sweller’s argument so challenging: how can someone “learn better” when they are being told, as opposed to discovering the answer for themselves?

What’s wrong with it?
To answer this question, I will quote both Sweller and Richard E. Clark, who challenged inquiry-based learning fifteen years ago in a paper called Why Minimal Guidance During Instruction Does Not Work.

Unguided and minimally guided methods… ignore both the structures that constitute human cognitive architecture and evidence from empirical studies over the past half-century that consistently indicate that minimally guided instruction is less effective and less efficient than instructional approaches that place a strong emphasis on guidance of the student learning process.

The cognitive architecture they are referring to is the limitation of working memory and the need to keep cognitive load to a minimum, e.g. 7 ± 2. In the more recent paper, Sweller goes on to explain how the “worked example effect” demonstrates the problems of IBL and the benefits of a more direct instructional approach. If one group of students is presented with a series of problems to solve, and another group is given the same problems but with detailed solutions, those who studied the worked examples perform better on subsequent common problem-solving tests.

“Obtaining information from others is vastly more efficient than obtaining it during problem solving.” – John Sweller

In simple terms, if a student (a novice) has to formulate the problem, position it in a way they can think about, bring to bear their existing knowledge and challenge that knowledge, the cognitive load becomes far too high, resulting at best in weak learning and at worst in confusion.

“As far as can be seen, inquiry learning neither teaches us how to inquire nor helps us acquire other knowledge deemed important in the curriculum.” John Sweller

What’s better – Direct instruction?
Sweller is not simply arguing against IBL; he is comparing it with, and promoting the use of, direct instruction. This method, you might remember, requires the teacher to present information in a prescriptive, structured and sequenced manner. Direct instruction keeps cognitive load to a minimum and as a result makes it easier to transfer information from working to long-term memory.

Best of both worlds
It may be that so far this blog has been a bit academic and does little more than promote direct instruction over IBL – my apologies. The intention was to showcase IBL, clarify what it is and point out some of its limitations, and in addition to highlight how easy it is to believe that something must be good because it feels intuitively right. In that respect IBL is compelling: we are human and learn from asking questions and solving problems; it’s what we have been doing for thousands of years. But that alone does not make it the best way to learn.

The good news is that these methods are not mutually exclusive, and for me John Hattie, coincidentally also working in Australia, has the answer. He says that although IBL may engage students, which can give an illusion of learning, if you are new to a subject (a novice) and have to learn content, as opposed to the slightly deeper relationships between content, then IBL doesn’t work. Also, if you don’t teach the content, you have nothing to reason about.

But there is a place for IBL… it’s after the student has acquired sufficient knowledge that they can begin to explore by experimenting with their own thoughts. The more difficult question is when you should do this, and that is likely to be different for everyone.

One for another day perhaps.

The single most important thing for students to know – Cognitive load

Back in 2017, Dylan Wiliam, Professor of Educational Assessment at UCL, described cognitive load theory (CLT) as ‘the single most important thing for teachers to know’. His reasoning was simple: if learning is an alteration in long-term memory (Ofsted’s definition), then it is essential for teachers to know the best ways of helping students achieve this. At this stage you might find it helpful to revisit my previous blog, Never forget, improving memory, which explains more about the relationship between long- and short-term memory, but to help reduce your cognitive load… I have provided a short summary below.

But here is the point, if CLT is so important for teachers it must also be of benefit to students.

Cognitive load theory
The term cognitive load was coined by John Sweller in a paper published in the journal Cognitive Science in 1988. Cognitive load is the amount of information that working (short-term) memory can process at any one time; when the load becomes too great, processing information slows down and so does learning. The implication is that because we can’t do anything about the short-term nature of short-term memory – we can only retain around 4 ± 2 chunks of information before it’s lost – learning should be designed, or studying methods changed, accordingly. The purpose is to reduce the ‘load’ so that information can more easily pass into long-term memory, where the storage capacity is effectively infinite.

CLT can be broken down into three categories:

Intrinsic cognitive load – this relates to the inherent difficulty of the material or the complexity of the task. Some content will always have a high level of difficulty; for example, solving a complex equation is more difficult than adding two numbers together. However, the cognitive load arising from a complex task can be reduced by breaking it down into smaller, simpler steps. There is also evidence that prior knowledge makes the processing of complex tasks easier. In fact, it is one of the main differences between an expert and a novice: the expert requires less short-term memory capacity because they already have knowledge stored in long-term memory that they can draw upon. The new knowledge simply adds to what they already know. Bottom line – some stuff is just harder.

Extraneous cognitive load – this is the unnecessary mental effort required to process information for the task at hand; in effect, the learning has been made overly difficult or confusing. For example, if you needed to learn about a square, it would be far easier to draw a picture and point to it than to use words to describe it. A more common example of extraneous load is when a presenter puts too much information on a PowerPoint slide, most of which adds little to what needs to be learned. Bottom line – don’t make learning harder by including unimportant stuff.

Germane cognitive load – increasing the load is not always bad. For example, if you ask someone to think of a house, that will increase the load, but once they have created that ‘schema’ or plan in their mind, adding new information becomes easier. Following on with the house example, if you have a picture of a house in your mind, answering questions about what you might find in the kitchen is relatively simple. The argument is that learning can be enhanced when content is arranged or presented in a way that helps the learner construct new knowledge. Bottom line – increasing germane load is good because it makes learning new stuff easier.

In summary, both student and teacher should reduce intrinsic and extraneous load but increase germane load.

Implications for learning
The three categories of cognitive load shown above provide some insight into what you should and shouldn’t do if you want to learn more effectively: break complex tasks down into simpler ones, focus on what’s important, avoid unnecessary information, and use schemas (models) where possible to help deal with complexity. There are, however, a few specific effects relating to the categories that are worth mentioning.

The worked example effect – if you are trying to understand something and continual re-reading of the text is having little impact, it’s possible your short-term memory has reached capacity. Finding an example of what you need to understand will help free up some of that memory. For example, if I wanted to explain that short-term memory is limited, I might ask you to memorise these 12 letters: SHNCCMTAVYID. Because this exceeds the 4 ± 2 rule it will be difficult and, hopefully, as a result prove the point. In this situation the example is a far more effective way of transferring knowledge than pages of text.

The redundancy effect – this is most commonly found where there is simply too much unnecessary or redundant information. It might be irrelevant or not essential to what you’re trying to learn, or it could be the same information presented in multiple forms, for example an explanation and a diagram on the same page. The secret here is to be relatively ruthless in pursuing what you want to know: look for the answer to your question rather than getting distracted by adjacent information. You may also come across this online, where a PowerPoint presentation has far too much content and the presenter simply reads out loud what’s on the slides. In these circumstances it’s a good idea to turn down the sound and simply read the slides for yourself; people can’t focus when they hear and see the same verbal message during a presentation (Hoffman, 2006).

The split attention effect – this occurs when you have to refer to two different sources of information simultaneously while learning. Often in written texts and blogs, as in this one, you will find a reference to something further to read or listen to; ignore it and stick to the task at hand, grasp the principle, and only afterwards follow up on the link. Another way of reducing the impact of split attention is to produce notes in a way that avoids the conflict that arises when trying to listen to the teacher and make notes at the same time. You might want to use the Cornell note-taking method – click here to find out more.

But is it the single most important thing a student should know?
Well, maybe, maybe not, but it’s certainly in the top three. The theory on its own will not make you a better learner, but it goes a long way towards explaining why you can’t understand something despite spending hours studying. It provides guidance as to what you can do to make learning more effective, and most importantly it can change your mindset from “I’m not clever enough” to “I just need to reduce the amount of information, and then I’ll get it”.

And believing that is priceless, not only for studying towards your next exam but in helping with all your learning in the years to come.

Brain overload

Have you ever felt that you just can’t learn any more, your head is spinning, your brain must be full? And yet we are told that the brain’s capacity is potentially limitless, made up as it is of around 86 billion neurons.

To understand why both of these may be true, we have to delve a little deeper into how the brain learns, or to be precise, how it manages information. In a previous blog, The learning brain, I outlined the key parts of the brain and discussed some of the implications for learning. As you might imagine, this is a complex subject, but I should add, a fascinating one.

Cognitive load and schemas

Building on the work of George (magic number 7) Miller and Jean Piaget’s development of schemas, in 1988 John Sweller introduced us to cognitive load, the idea that there is a limit to the amount of information we can process.

Cognitive load relates to the amount of information that working memory can hold at one time

Human memory can be divided into working memory and long-term memory. Working memory, also called short-term memory, is limited, only capable of holding 7 ± 2 pieces of information at any one time – hence the magic number 7 – but long-term memory has arguably infinite capacity.

The limited nature of working memory can be highlighted by asking you to look at the 12 letters below. Take about 5 seconds. Look away from the screen and write down what you can remember on a blank piece of paper.

MBIAWTDHPIBF

Because there are more than 9 characters, this will be difficult.

Schemas – Information is stored in long-term memory in the form of schemas, frameworks or concepts that help organise and interpret new information. For example, when you think of a tree it is defined by a number of characteristics: it’s green, has a trunk, and has leaves at the end of branches – this is a schema. But when it comes to autumn, the tree is no longer green and loses its leaves, suggesting that it cannot be a tree. However, if you assimilate the new information with your existing schema and accommodate it in a revised version of how you think about a tree, you have effectively learned something new and stored it in long-term memory. By holding information in schemas, your brain can very quickly identify whether new information fits within an existing one, enabling rapid knowledge acquisition and understanding.
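If it helps to see assimilation and accommodation spelled out, here is a toy sketch in Python. The dictionary “schema” is only a metaphor for how the brain organises knowledge, not a claim about its actual mechanics; every name in it is made up for illustration.

```python
# A toy metaphor for a schema: a framework the brain revises, not replaces.

tree_schema = {"trunk": True, "branches": True, "leaves": "green"}

def accommodate(schema: dict, new_info: dict) -> dict:
    """Revise the existing framework to fit a new observation,
    rather than building a whole new framework from scratch."""
    return {**schema, **new_info}

# Autumn arrives: the leaves are no longer green.
autumn = {"leaves": "green in summer, brown and falling in autumn"}
tree_schema = accommodate(tree_schema, autumn)

print(tree_schema)  # the old knowledge persists; only one slot was revised
```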

The problem therefore lies with working memory and its limited capacity, but if we could change the way we take in information, so that it doesn’t overload working memory, the whole process would become more effective.

Avoiding cognitive overload

This is where it gets really interesting from a learning perspective. What can we do to avoid the brain becoming overloaded?

1. Simple first – this may sound like common sense, start with a simple example e.g. 2+2 = 4 and move towards the more complex e.g. 2,423 + 12,324,345. If you start with a complex calculation the brain will struggle to manipulate the numbers or find any pattern.

2. Direct Instruction not discovery – although there is significant merit in figuring things out for yourself, when learning something new it is better to follow guided instruction (teacher led) supported by several examples, starting simple and becoming more complex (as above). When you have created your own schema, you can begin to work independently.

3. Visual overload – a presentation point: avoid having too much information on a page or slide, and reveal each part slowly. The secret is to break complexity down into smaller segments. This is the argument against having too much content all on one page, which is often the case in textbooks. Read with a piece of paper or ruler, effectively underlining the words you are reading and moving the paper down to reveal one new line at a time.

4. Pictures and words (contiguity) – having “relevant” pictures alongside text helps avoid what’s called split attention. This is why creating your own notes with images as well as text, as when producing a mind map, works so well.

5. Focus, avoid distraction (coherence) – similar to visual overload, remove all unnecessary images and information, keep focused on the task in hand. There may be some nice to know facts, but stick to the essential ones.

6. Key words (redundancy) – when reading or making notes, don’t highlight or write down exactly what you read; simplify the sentence, focusing on the key words, which will reduce the amount of input.

7. Use existing schemas – if you already have an understanding of a topic or subject, it will sit within a schema; think about how the new information changes your original understanding.

Remember the 12 characters from earlier? If we chunk them into 4 pieces of information and link each to an existing schema, you will find them much easier to remember. Here are the same 12 characters chunked down.

FBI – TWA – PHD – IBM

Each one sits within an existing schema e.g. Federal Bureau of Investigation etc, making it easier for the brain to learn the new information.
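To make the arithmetic of chunking explicit, here is a tiny Python sketch; the counts, not the psychology, are the point.

```python
# Chunking: the same letters, but far fewer items for working memory to hold.

letters = "MBIAWTDHPIBF"
print(len(letters), "separate letters")   # 12 items - well beyond 7 +/- 2

chunks = ["FBI", "TWA", "PHD", "IBM"]     # the same letters, regrouped to
print(len(chunks), "meaningful chunks")   # match existing schemas: 4 items
```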

Note – the above ideas are based on Richard E. Mayer’s principles of multimedia learning.

In conclusion

Understanding more about how the brain works, and in particular how to manage some of its limitations, as with short-term memory, not only makes learning more efficient but also gives you confidence that the way you are learning is the most effective.