Human superpowers – Creative, Analytical and Critical thinking

Are you sure Gen AI doesn’t make mistakes, Mr Spock? Because this just “feels” wrong to me.

Back in July 2022, I wrote about the importance of critical thinking, a skill long considered essential in education, leadership, and the workplace.

But that was before Gen AI arrived that November, bringing with it the ability to answer almost any question within seconds. Its presence prompted reflection on the nature of learning, how education might change, and what role humans should now play, if any.

If you don’t have time to read this month’s blog – listen to my AI alter ego summarise the key points.

But all is not lost: we still have one last card to play, our ability to think and feel. Okay, maybe that’s two cards. Thinking is hopefully what you are doing whilst reading this blog; neurons will be firing as you reflect, analyse and question what is being said. It’s something we do in between daydreaming, sleeping and unconscious behaviours such as cleaning our teeth.

Thinking is, however, a little more nuanced, and there are many different types: for example, you can think creatively, analytically, or critically. Whichever mode you engage in, there’s another essential human attribute that quietly shapes the process… our emotions. These are the subjective experiences, rooted in our limbic system, that help us interpret information and, as such, see the world. Together these are our superpowers, offering something AI can’t replicate, not yet at least!

An Artist, Pathologist and Judge walk into a bar
Critical thinking, creative thinking, and analytical thinking are often grouped under the umbrella of “higher-order cognitive skills,” but each one is different, playing a role in how we process, evaluate, and generate ideas.

  • Critical thinking is fundamentally about evaluation: it involves questioning assumptions, weighing evidence, and forming reasoned judgments. It’s the internal referee that asks, “Does this make sense? Is it credible? What are the implications?”
  • Meanwhile, analytical thinking breaks complexity down into more manageable components, identifies patterns, and applies logic so that we can better understand relationships.
  • And creative thinking is generative. It thrives on ambiguity, imagination, and novelty. Where critical thinking narrows and refines, creative thinking expands and explores. It’s the spark that asks, “What if? Why not? Could we do this differently?”

Humans are emotional – Far from being a distraction, emotions actively shape how we think, judge, and create. In creative thinking, emotion is the spark that fuels imagination and unlocks divergent ideas. In analytical thinking, emotion plays a subtler role, influencing how we interpret data, what patterns we notice, and our levels of motivation. Critical thinking, meanwhile, relies on emotion to provide an ethical compass and improve our self-awareness.

Learning to be a better thinker
Critical, creative, and analytical thinking aren’t fixed traits; they’re learnable skills. It’s tempting to believe they can only be acquired through the slow drip of wisdom from those who have had a lifetime of experience. The truth is, with good instruction, these skills can be learned well enough for any novice to get started. At first the beginner may simply replicate what they have been taught, but with practice and reflection, they begin to refine, adapt, and eventually think for themselves.

By way of an example, this is how you might start to learn to think more critically.

  1. Start with knowledge – Critical thinking is the analysis of available facts, evidence, observations, and arguments to form a judgement.
  2. Use a framework
    • Formulate the question – what problem(s) are you trying to solve?
    • Gather information – what do you need to know more about?
    • Analyse and evaluate – ask challenging questions, consider implications, and prioritise.
    • Reach a conclusion – form an opinion, and reflect.
  3. Bring in tools – These can provide ideas or change perspective, for example Edward de Bono’s Six Thinking Hats.
  4. Apply by practising with real-world problems. This is largely experiential, and requires continual reflection and looping back to check you have asked the right question, gathered enough information, and correctly prioritised.
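The four steps above can be sketched as a simple loop; step 4’s “looping back” is what makes the framework a cycle rather than a checklist. The sketch below is purely illustrative, with the gathering and evaluating stages stood in for by placeholder functions, and is not a claim that thinking is reducible to code:

```python
# Illustrative sketch of the critical-thinking framework above.
# `gather` and `analyse` are placeholders for the human work of
# steps 2 and 3; the loop captures step 4's "looping back".

def think_critically(question, gather, analyse, max_loops=3):
    conclusion = None
    for attempt in range(max_loops):
        facts = gather(question)                          # 2. gather information
        conclusion, confident = analyse(question, facts)  # 3. analyse and evaluate
        if confident:                                     # 4. reflect: ready to conclude?
            return conclusion
        question = question + " (reframed)"               # loop back: sharpen the question
    return conclusion  # best effort after max_loops
```

The point of the sketch is the control flow: a weak conclusion sends you back to reframe the question, not forward to publish it.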

The real challenge and deeper learning take place in the application phase. By working in groups, your arguments may well be questioned and potentially exposed through Socratic-type questions and differing views. Your only defence is to start thinking in advance about what others might say. Over time, like any other skill, it can begin to feel more like an instinct, requiring less conscious effort, simply popping into your mind when most needed.

To boldly go
Generative AI may offer logic, precision, and even flashes of creativity, but it does not feel the weight of a decision, nor wrestle with the moral ambiguity that defines human experience. It is Spock without Kirk: brilliant, efficient, and deeply insightful, yet missing the emotional compass that gives judgment its humanity. True thinking is not just analysis; it’s empathy, intuition, and the courage to act without certainty. AI can advise, assist, and illuminate, but it cannot replace the uniquely human interplay of reason and emotion. Like Kirk and Spock, the future belongs not to one or the other, but to the partnership. Or at least I hope so…

I will leave the last word to Dr McCoy.

The virtual educator has arrived!

But which one is me?

Inspired by a recent LinkedIn post I made about what it might be like to have an avatar as a teacher, I thought I should check out the evidence on the effectiveness of avatars in improving learning before I get too carried away with the technology itself.

What is an avatar?
An avatar is a digital or computer-generated representation of a person or character in a virtual environment. It can take various forms, for example a simple profile picture on social media or an audio avatar talking about a specific subject using a synthetic voice. However, with major advancements in generative AI, avatars are evolving beyond static images or basic voice interactions. We are increasingly seeing lifelike digital humans emerge, sophisticated AI-driven avatars capable of “understanding” what we say and generating intelligent responses, speaking with realistic voices and impressive synchronised lip movements. This transformation is redefining how humans engage with AI-powered virtual beings, blurring the lines between digital representation and authentic interaction.

As to what they look like, here are some examples:

  • Firstly, an audio avatar that I have now built into my blog to provide a different perspective on what has been written. Here the avatar “chats” about the blog rather than simply reading it out loud. See above.
  • Secondly, a Pixar-style avatar. The goal here is to challenge the assumption that an avatar must resemble a real person to be effective.
  • And lastly, this is a more realistic avatar, effectively an attempt to replicate me in a slightly imperfect way. This is not about fooling the audience, although that is now possible, but about exploring the idea that humans respond better to a more human-like character.

The talking head – good or bad?
However, there’s an elephant in the room when it comes to avatars: why do we need a talking head in the first place? Wouldn’t a simple voice-over, paired with well-structured content, be just as effective?

If you look at YouTube, almost everyone uses talking-head videos in different ways. Surely if they weren’t effective, no one would have them, a kind of “wisdom of crowds.” But does their popularity actually prove their value, or are we just following a trend without questioning its impact?

Let’s have a look at the evidence:
After reviewing multiple studies, the findings are somewhat mixed. However, there’s enough insight to help us find an approach that works.

First, we have research from Christina Sondermann and Martin Merkt, Like it or learn from it: Effects of talking heads in educational videos. They conclude that learning outcomes were worse for videos with talking heads; their concern was that these resulted in higher levels of cognitive load. But participants rated their perceived learning higher for videos with a talking head and gave better satisfaction ratings, selecting them more frequently. Secondly, another piece of research published five months later by Christina Sondermann and Martin Merkt, yes, the same people, What is the effect of talking heads in educational videos with different types of narrated slides? Here they found that “the inclusion of a talking head offers neither clear advantages nor disadvantages.” In effect, using a talking head had no detrimental impact, which is slightly at odds with their previous conclusion.

A little confusing, I agree, but stick with it…

Maybe we should move away from trying to prove the educational impact and consider the student’s perception of avatars. In the first report, Student Perceptions of AI-Generated Avatars, the students said “there was little difference between having an AI presenter or a human delivering a lecture recording.” They also thought that the AI-generated avatar was an efficient vehicle for content delivery. However, they still wanted human connection in their learning, thought some parts of learning needed to be facilitated by teachers, and felt that the avatar presentations were “not … like a real class.” The second report, Impact of Using Virtual Avatars in Educational Videos on User Experience, raised two really interesting points. Students found that high-quality video enhanced their learning, emotional experience, and overall engagement. Furthermore, when avatars displayed greater expressiveness, they felt more connected to the content, leading to improved comprehension and deeper involvement.

For those designing avatars, this means prioritising both technical quality and expressive alignment. Avatars should be visually clear, well animated, and their facial expressions should reinforce the message being conveyed.

What does this all mean?
Bringing everything together, we can conclude that avatars or talking heads are not distractions that lead to cognitive overload. Instead, students appreciate them and relate to them emotionally; in fact, they see little difference between a recorded tutor and an avatar. Their expressiveness enhances engagement and might prove highly effective in helping students remember key points.

To balance differing perspectives, a practical approach might be to omit the talking head when explaining highly complex topics (reducing cognitive load), allowing students to focus solely on the material, but to keep the avatar visible in most other situations, particularly for emphasising key concepts or prompting action. Alternatively, why not let students decide by offering them the choice of having the talking head or not?

How might avatars be used?
One important distinction in the use of avatars is whether they are autonomous or scripted. Autonomous avatars are powered by large language models, such as ChatGPT, allowing them to generate responses dynamically based on user interactions. In contrast, scripted avatars are entirely controlled by their creator, who directs what they say.

A scripted avatar could be particularly useful in educational settings where consistency, accuracy, and intentional messaging are crucial. Because its responses are predetermined, educators can ensure that the avatar aligns with specific learning goals, maintains an appropriate tone, and avoids misinformation.

This makes it ideal for scenarios such as:
– Delivering structured lessons with carefully crafted explanations.
– Providing standardised guidance, ensuring every student receives the same high-quality information.
– Reinforcing key concepts without deviation, which can be especially beneficial when high-stakes assessments are used, as is the case with professional exams.

However, if we power these avatars with Generative AI, the possibilities increase significantly:

  • More personalised learning. One of the most exciting prospects is the ability of avatars to offer personalised and contextualised instruction.
  • Help with effective study. Avatars could be used to remind students about a specific learning strategy or a deadline for a piece of work. A friendly face at the right time might be more effective than an email from your tutor or, worse still, an automated one.
  • Motivational and engaging. These avatars could also have a positive effect on motivation and feelings about learning. They could be designed to match an individual’s personality and interests, making them far more effective at driving motivation and engagement.
  • Contextualised learning. AI-based avatars can support teaching in practical, real-world scenarios, including problem solving and case-based learning. Traditionally, creating these types of environments required significant resources such as trained actors or expensively designed virtual worlds.

A few concerns – autonomous avatars
Of course, as with any new technology there are some concerns and challenges:

Autonomous avatars pose several risks, including their ability to make mistakes; the particular problem with avatars is that they will be very convincing. We are already acutely aware that large language models can sometimes ‘hallucinate’ or simply make things up. Data protection is another concern, with risks ranging from deepfake misuse to avatars persuading users into sharing personal or confidential information, which could be exploited. Finally, value bias is a challenge, as AI-trained avatars may unintentionally reflect biased perspectives that a professional educator would recognise and navigate more responsibly.

Conclusions
Avatars, whether simple or lifelike, are gaining traction in education. Research indicates that while talking heads don’t necessarily improve learning outcomes, they don’t harm them, and students perceive them positively. A key distinction lies between scripted avatars, offering consistent and accurate pre-determined content, ideal for structured lessons, and autonomous avatars powered by AI that open up a world of possibility in several areas including personalisation.

Avatars are a powerful and exciting new tool, offering capabilities that in many ways go beyond previous learning technologies, but their effectiveness very much depends on how they are designed and used. But hasn’t that always been the case…

Finally – This is an excellent video that talks about some of the research I have referred to. It is of course presented by an avatar. What Does Research Say about AI Avatars for Learning?

PS – which one is me? None of them, including the second one from the left.

Transforming Learning – GenAI is two years old

ChatGPT – Happy second birthday
Generative AI (GenAI), specifically ChatGPT, exploded onto the scene in November 2022, which means it is only two years old. Initially people were slow to react, trying to figure out what this new technology was; many were confused, thinking it was a “bit like Google.” But when they saw what it could do – “generating” detailed, human-like responses to human-written “prompts” – ideas as to what it could be used for started to emerge. The uptake was extraordinary, with over 1 million people using it within the first five days; a year later this had grown to 153 million monthly users, and as of November 2024 it’s around 200 million. The use of GPTs across all platforms is difficult to estimate, but it could be somewhere in the region of 400–500 million. That said, and to put this in perspective, Google handles over 8.5 billion searches every day, roughly equivalent to the world’s population!

From Wow to adoption
Initially there was the WOW moment. True, AI had been around for a long time, but GenAI made it accessible to ordinary people. In the period from November 2022 to early 2023 we saw the early adopters, driven mostly by curiosity and a desire to experiment. By mid-2023 it became a little more mainstream as other GPTs emerged, e.g. Google’s Bard (now Gemini) and Microsoft’s Copilot, to name just two. But it was not all plain sailing: ethical concerns began to grow, and by the end of 2023 people were talking about misinformation, problems with academic integrity, and job displacement. This led to calls for greater regulation, especially in Europe, where AI governance frameworks were developed to address some of the risks.

In terms of education, initially there were calls to ban learners from using these tools, in response to answers being produced that were clearly not the work of the individual. And although many still worry, by early 2024 there was a creeping acceptance that the genie was out of the bottle and it was time for schools, colleges, and universities to redefine their policies, accept GPTs, and integrate rather than ban. 2024 saw even greater adoption; according to a recent survey, 48% of teachers are now using GenAI tools in some way.

GenAI – Educational disrupter
There have been significant changes in education over the last 50 years, e.g. the introduction of personal computers and the Internet (1980s–1990s), making content far more accessible and changing some learning practices. Then in the 2000s–2010s we saw the development of e-learning platforms such as Moodle and Blackboard, and MOOCs such as Coursera. This fuelled the growth of online education, providing learners with access to quality courses across the globe.

But I am going to argue that, as important as these developments were, not least because they are essential underpinning technologies for GenAI – we are always “standing on the shoulders of giants” – GenAI is a far bigger educational disrupter than anything that has come before. Here are a few of the reasons:

  • Personalised learning at scale: GenAI tools make it possible for everyone to enjoy a highly personalised learning experience. For instance, AI can adapt to an individual’s learning style, pace, and level of understanding, offering custom explanations and feedback. This brings us far closer to solving the elusive two-sigma problem.
  • Easier access to knowledge and resources: Although it could be argued the internet already offers the world’s information on a page, the nature of the interaction has improved, making it far easier to use and to have almost human conversations. This means learners can now explore topics in depth, engage in Socratic questioning, produce summaries that reduce cognitive load, and be inspired by some of the questions the AI might ask.
  • Changing the teacher’s role: Teachers and educators can use GenAI to automate administrative tasks such as marking and answering frequently asked questions. And, perhaps more importantly, the traditional teacher-centred instructor role is shifting to that of a facilitator, guiding students rather than “telling” them.
  • Changing the skill set: Learners must rapidly enhance their skills in prompting, AI literacy, and critical thinking, and foster a greater level of curiosity, if they are to remain desirable to employers.
  • Disrupting assessment: The use of GenAI for generating essays, reports, and answers has raised concerns about academic integrity. How can you tell if it’s the learner’s own work? As a result, educational institutions are now having to rethink assessments, moving towards more interactive, collaborative, and project-based formats.

Transforming learning
GenAI is not only disrupting the way learning is delivered; it’s also having an impact on the way we learn.

A recent study by Matthias Stadler, Maria Bannert and Michael Sailer compared the use of large language models (LLMs), such as ChatGPT, and traditional search engines (Google) in helping with problem-based exploration. They focused on how each influences cognitive load and the quality of learning outcomes. What they found was a trade-off between cognitive ease and depth of learning. LLMs are effective at reducing the barriers to information, making them useful for tasks where efficiency is a priority, but they may not be as beneficial for tasks requiring critical evaluation and complex reasoning. Traditional search engines need the learner to put in far more effort in terms of thinking, which results in a deeper and better understanding of the subject matter.

The research reveals a fascinating paradox in how learners interact with digital learning tools. When using LLMs, learners experienced a dramatically reduced cognitive burden. In other words, they had far less information to think about, making it easier to “see the wood for the trees.” This is what any good teacher does: they simplify. But because little effort was required (no desirable difficulty), learners were less engaged and, as a result, there was little intellectual development.

This leads to one of the biggest concerns about Generative AI: the idea that it can be used as a way of offloading learning. The problem is, you can’t.

Conclusions
As we celebrate ChatGPT’s second birthday, it’s clear that GenAI is more than a fleeting novelty, it has already begun to disrupt the world of education and learning. Its ability to personalise learning, reduce cognitive barriers, and provide a human friendly access to resources holds immense potential to transform how we teach and learn. However, the opportunities come hand in hand with significant challenges.

The risk of over-reliance on GenAI, where learners disengage from critical thinking and problem solving, cannot be ignored. True learning requires effort, reflection, and the development of independent thought, skills that no technology can substitute.

The role of educators is crucial in ensuring that GenAI is used to complement, not replace, these processes.

You can’t outsource learning – Cognitive offloading 

As we begin to better understand the capabilities of Generative AI (Gen AI) and tools such as ChatGPT, there is also a need to consider the wider implications of this new technology. Much has been made of the more immediate impact, students using Gen AI to produce answers that are not their own, but less is known about what might be happening in the longer term: the effect on learning and how our brains might change over time.

There is little doubt that Gen AI tools offer substantial benefits (see previous blogs, Let’s chat about ChatGPT and Chatting with a Chat Bot – Prompting), including access to vast amounts of knowledge, explained in an easy-to-understand manner, as well as the ability to generate original content instantly. However, there might be a significant problem with using these tools that has not yet been realised and that could have implications for learning and learning efficacy. What if we become too reliant on these technologies, asking them to solve problems before we even think about them ourselves? This fear found expression in debates well before Gen AI, in particular an article written by Nicholas Carr in 2008 asking “Is Google making us stupid?” – spoiler alert, the debate continues. And an interesting term coined by the neuroscientist and psychiatrist Manfred Spitzer in 2012, “digital dementia”, describing the changes in cognition that result from overusing technology.

But the focus of this blog is on cognitive offloading (circa 1995), which, as you might guess, is about allowing some of your thinking, processing and learning to be outsourced to a technology.

Cognitive offloading
Cognitive offloading in itself is neither good nor bad; it refers to the delegation of cognitive processes to external tools or devices such as calculators, the internet and, more recently of course, Gen AI. In simple terms, the danger is that by instinctively and habitually going to Google or ChatGPT for answers, your brain misses out on an essential part of the learning process: reflecting on what you already know, pulling that information forward and, as a result, reinforcing that knowledge (retrieval practice), then combining it with the new information to better understand what is being said or required.

As the examples in the paragraph above highlight, cognitive offloading is not a new concern, and it is not specific to Gen AI. However, the level of cognitive offloading, the sophistication of the answers and the opportunities to use these technologies are all increasing, and as a result the scale and impact are greater.

Habitual dependency – One of the main concerns is that, even before the question is processed, the student instinctively plugs it into the likes of ChatGPT without any attention or thought, the prompt regurgitated from memory: “please answer this question in 100 words”. This leads to possibly the worst situation, where all thought is delegated and, worryingly, the answer is unquestioningly believed to be true.

Cognitive offloading in action – Blindly following the Sat Nav! Research has shown that offloading navigation to GPS devices impairs spatial memory.

Benefits of cognitive offloading – It’s important to add that there are benefits to cognitive offloading; for example, it reduces cognitive load, which is a significant problem in learning. The technology helps reduce the demand on our short-term memory, freeing the brain to focus on what is more important.

Also, some disagree as to the long-term impact, arguing that short-term evidence (see below) is not necessarily the best way to form long-term conclusions. For example, there were concerns that calculators would affect our ability to do maths in our heads, but research found little difference whether students used calculators or not, and the debate has moved on to consider how calculators could be used to complement and reinforce mental and written methods of maths. That said, some researchers believe that cognitive offloading increases immediate task performance but diminishes subsequent memory for the offloaded information.

Evidence
There is little research on the impact of Gen AI due to it being so new, but as mentioned above we have a large amount of evidence on what has happened since the introduction of the internet and search.

  • In the paper Information Without Knowledge: The Effects of Internet Search on Learning, Matthew Fisher et al. found that participants who were allowed to search for information online were overconfident about their ability to comprehend the information, and those who used the internet were less likely to remember what they had read.
  • Dr Benjamin Storm the lead author of Cognitive offloading: How the Internet is increasingly taking over human memory, commented, “Memory is changing. Our research shows that as we use the Internet to support and extend our memory we become more reliant on it. Whereas before we might have tried to recall something on our own, now we don’t bother.”

What should you do?
To mitigate the risks of cognitive offloading, the simple answer is to limit or reduce your dependency and use Gen AI to supplement your learning rather than as a primary source. For example, ask it to come up with ideas and lists but not the final text, and spend your own time linking the information together and shaping the arguments.

Let’s chat about ChatGPT – WOW!

If you have not heard of ChatGPT, where have you been since November 30th, when it was launched by OpenAI, the company behind what is fast becoming a groundbreaking technology? Since then, it’s been making waves; everyone is talking about and using it. In the first week alone over 1,000,000 people had signed up to what is, for the time being at least, free. OpenAI was founded in December 2015 by Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and Sam Altman, although Musk stepped down from the board in February 2018 to avoid any conflict with Tesla.

Originally a “not-for-profit”, in 2019 it became a “capped-profit” company, meaning in this instance that investors can’t earn more than 100 times their investment. And just to give some idea of scale, OpenAI expects to generate $200 million in revenue next year and $1 billion by 2024.

Its mission is simple, yet as you might imagine ambitious – to ensure that artificial general intelligence benefits all of humanity.

In contrast, here is Google’s mission – to organise the world’s information and make it universally accessible and useful.

But what is ChatGPT? – To find an answer to this question, we can actually ask ChatGPT, but before we do there are some points worth noting. Unlike Google, ChatGPT is not a search engine; it’s not even connected to the internet, and in fact the data sets used to train it haven’t been updated since the end of 2021. You will find this out if you ask anything relatively topical, for example, who won the World Cup this year; it will say…

“I’m sorry, but I am not able to provide information about events that have occurred after my knowledge cut-off in 2021”.

The answers appear in real time in a suitably bot-like fashion, but what makes it really interesting is that it uses Generative AI, which enables it to start with existing content and from that produce something new and original. Let me clarify: it’s not simply taking content from somewhere else, it’s creating new and original stuff.

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI for use in chatbots and conversational systems. It is able to generate human-like text in real-time and understand the context of a conversation, allowing it to generate appropriate responses based on the topic being discussed. ChatGPT is also efficient and fast, making it ideal for use in chatbot applications where fast response times are important. It is able to handle a wide variety of tasks, including answering questions, providing information, and engaging in casual conversation.

The above was the answer provided by ChatGPT when asked for a definition within 90 words. In fact, everything in purple and italics in this blog has been taken directly from ChatGPT.

How does it learn?
Once again let’s ask ChatGPT, after all why should I do all the hard work.

ChatGPT uses a type of machine learning called supervised learning. In supervised learning, the model is trained on a labeled dataset, where the correct output is provided for each input. The model is then able to make predictions for new, unseen data based on the patterns it learned during training. To train ChatGPT, a large dataset of text is fed into the model, and the model is then trained to predict the next word in a sequence based on the words that come before it. Through this process, the model learns to recognize patterns in language and generate coherent text. The more data the model is trained on, the better it becomes at generating natural-sounding text and understanding the relationships between words and concepts.
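The phrase “predict the next word based on the words that come before it” can be made concrete with a toy example. The sketch below is a drastically simplified bigram model of my own, purely for intuition; it bears no resemblance to ChatGPT’s actual transformer architecture or training data:

```python
from collections import Counter, defaultdict

# Toy next-word predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. Real LLMs learn far richer
# patterns over billions of documents, but the objective is the same in spirit.
corpus = "the cat sat on the mat the cat ate the fish".split()

follow = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow[prev][nxt] += 1  # record that `nxt` appeared after `prev`

def predict_next(word):
    """Return the word most often seen after `word` in the corpus."""
    return follow[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

Scale that counting idea up by many orders of magnitude, replace the counts with a neural network, and you have the essence of “pre-training.”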

In summary, the model is initially trained by humans (the labellers); this is the supervised stage. It is then able to provide multiple answers to a question, which the trainer ranks in terms of quality; this is the reinforcement stage, which provides more data for the model. And lastly, reward models are created that automatically reward the model for getting it right; this is the unsupervised stage.

It might be easier to think of it in the same way as learning to ride a bike: firstly, someone shows you how to ride (supervised); secondly, you have a go and are given feedback in order to improve (reinforcement); and finally, if you ride for 5 minutes you are rewarded with £5.00, and if you fall off you get nothing (reward model – unsupervised).
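The ranking stage can also be sketched numerically. A reward model is typically trained so that the answer the human ranked higher receives a higher score than the one ranked lower, often via a pairwise loss of the kind below. This is a generic illustration of the idea, not OpenAI’s actual implementation, and the scores are made up:

```python
import math

def pairwise_loss(score_preferred, score_rejected):
    """Pairwise ranking loss: small when the preferred answer already
    scores higher than the rejected one, large when it doesn't."""
    return -math.log(1 / (1 + math.exp(-(score_preferred - score_rejected))))

# Made-up reward scores for two answers a human has ranked.
good_ordering = pairwise_loss(2.0, 0.5)  # model agrees with the human ranking
bad_ordering = pairwise_loss(0.5, 2.0)   # model has the two answers backwards
print(good_ordering < bad_ordering)  # True: agreeing with the ranking costs less
```

Minimising this loss nudges the reward model towards the human’s preferences, which is the “£5.00 for staying upright” of the bike analogy.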

Clever… but what are the implications?
We are at one of those “genie out of the bottle” moments, when something you thought might happen in the future becomes a reality today. As a consequence, we start to ask questions such as: is this technology good or bad, and what will it mean for jobs and the future of work? If it can produce high-quality answers to questions, how can we tell if it’s the student’s work or simply the result of an exercise in cut and paste? And because it can write poems, stories and news articles, how can you know if anything is truly original? Think deepfake, but using words. By way of an example, here is a limerick I didn’t write, about accountants.

There once was an accountant named Sue
Who loved numbers, they were her clue
She worked with great care
To balance the ledger with great flair
And made sure all the finances were true

Okay, it might need a bit of work, but hopefully you can see it has potential.

We have, however, seen all this before, when other innovative technologies first appeared: the motor car, the development of computers and, more recently, mobile phones and the internet. The truth is they did change how we worked and resulted in people losing their jobs, and the same is almost certainly going to be the case with ChatGPT. One thing is for sure: you can't put the genie back in the bottle.

Technology is neither good nor bad; nor is it neutral. Melvin Kranzberg’s first law of technology

And for learning
Some have already suggested that examinations should no longer be sat remotely and that universities should stop using essays and dissertations to assess performance.

However, ChatGPT is not Deep Thought from The Hitchhiker's Guide to the Galaxy, nor HAL from 2001: A Space Odyssey; it has many limitations. The answers are not always correct, the quality of the answer depends on the quality of the question and, as we have already seen, as far as ChatGPT is concerned 2022 doesn't yet exist.

There are also some really interesting ways in which it could be used to help students.

  • Use it as a "critical friend": paste your answer into ChatGPT and ask for ways it might be improved, for example in terms of grammar and/or structure.
  • As with the internet, if you have writer's block just post a question and see what comes back.
  • Ask it to generate a number of test questions on a specific subject.
  • Have a conversation with it, ask it to explain something you don’t understand.

Clearly it should not be used by a student to pass off an answer as their own; that's called cheating. But it is a tool, and one with a lot of potential if used properly by both students and teachers.

Once upon a time, sound was new technology. Peter Jackson filmmaker

PS – if you are more interested in pictures than words check out DALL·E 2, which allows anyone to create images by writing a text description. This has also been built by OpenAI.

Bloom's 1984 – Getting an A instead of a C

When people see the year 1984, most think of George Orwell's book about a dystopian future, but a few other things happened that year. Dynasty and Dallas were the most popular TV programmes, and one of my favourite movies, Amadeus, won Best Picture at the Oscars. You can be excused for missing the publication of what has become known as the two-sigma problem by Benjamin Bloom, of Bloom's taxonomy fame. He provided the answer to a question that both teachers and students have been asking for some time: how can you significantly improve student performance?

One of the reasons this is still being talked about nearly 40 years later is that Bloom demonstrated that most students have the potential to achieve mastery of a given topic. The implication is that it's the teaching that is at fault rather than the students' inherent lack of ability. It's worth adding that this might equally apply to the method of learning: it's not you but the way you're studying.

The two-sigma problem
Two of Bloom’s doctoral students (J. Anania and A.J. Burke) compared how people learned in three different situations:

  1. A conventional lecture with 30 students and one teacher. The students listened to the lectures and were periodically tested on the material.
  2. Mastery learning – the conventional lecture with the same testing, however students were given formative-style feedback and guidance, correcting misunderstandings before re-testing to find out the extent of their mastery.
  3. Tutoring – this was the same as for mastery learning but with one teacher per student.

The results were significant: mastery learning increased student performance by approximately one standard deviation (sigma), the equivalent of an increase in grade from a B to an A. Combined with one-to-one teaching, however, performance improved by two standard deviations, the equivalent of moving from a C to an A. Interestingly, the need to correct students' work was relatively small.
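For those who like numbers, the "sigma" in the name is a standard deviation on a (roughly) normal distribution of results, and Python's standard library can show why a two-sigma shift is so dramatic. This is a back-of-the-envelope sketch assuming normally distributed scores:

```python
from statistics import NormalDist

scores = NormalDist()  # standard normal distribution of class results

# An average student sits at the 50th percentile.
# Shift them up by one and then two standard deviations:
one_sigma = scores.cdf(1)  # mastery learning alone
two_sigma = scores.cdf(2)  # mastery learning + one-to-one tutoring

print(f"+1 sigma: ahead of {one_sigma:.0%} of the class")  # ~84%
print(f"+2 sigma: ahead of {two_sigma:.0%} of the class")  # ~98%
```

In other words, an average student given mastery learning and one-to-one tutoring would outperform roughly 98% of a conventionally taught class.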

Bloom then set up the challenge that became known as the two-sigma problem.

“Can researchers and teachers devise teaching/learning conditions that will enable the majority of students under group instruction to attain levels of achievement that can at present be reached only under good tutoring conditions?”

In other words, how can you do this in the "real world", at scale, where it's not possible to provide this type of formative feedback and one-to-one tuition because it would be too expensive?

Mastery learning – To answer this question you probably need to understand a little more about mastery learning. Firstly, content has to be broken down into small chunks, each with a specific learning outcome. The process is very similar to direct instruction, which I have written about before. The next stage is important: learners have to demonstrate mastery of each chunk of content, normally by passing a test with a score of around 80%, before moving on to new material. If they don't, the student is given extra support, perhaps in the form of additional teaching or homework. Learners then continue the cycle of studying and testing until the mastery criteria are met.
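The study-test-support cycle can be sketched as a short loop. The 80% cut score comes from the description above; the chunk name, the improvement per support session and the helper functions are illustrative assumptions:

```python
MASTERY_CUT_SCORE = 0.80  # pass mark before moving on (Bloom's ~80%)

def mastery_cycle(chunks, take_test, give_extra_support):
    """Work through content chunks, only advancing once mastery is shown."""
    for chunk in chunks:
        while take_test(chunk) < MASTERY_CUT_SCORE:
            give_extra_support(chunk)  # extra teaching/homework, then re-test

# Illustration: a student starts at 50% and improves with each support session.
scores = {"algebra": 0.5}
mastery_cycle(
    chunks=["algebra"],
    take_test=lambda c: scores[c],
    give_extra_support=lambda c: scores.__setitem__(c, scores[c] + 0.25),
)
print(scores["algebra"])  # 1.0 – mastery reached after two support sessions
```

Note that the loop never moves on until the cut score is met, which is exactly the property (and the scheduling headache) of mastery learning in practice.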

Why does it work?
Bloom was of the opinion that the results were so strong because the corrective feedback was targeted at the very area the student didn't understand. The one-to-one teaching also helped because the teacher had time to explain things in a different way and encourage the student to participate in their own learning, which in turn helped with motivation. As you might imagine, mastery is particularly effective where one subject builds on another, for example when introduction to economics is followed by economics in business.

Of course, there are always problems: students may have mastered something to the desired level but forget what they have learned through lack of use. It's easy to set a test but relatively difficult to assess mastery; for example, do you have sufficient coverage at the right level, and is 80% the right cut score? And finally, how long should you allow someone to study in order to reach the mastery level, and what happens in practice when time runs out and they don't?

The Artificial Intelligence (AI) solution
When Bloom set the challenge he was right: it was far too expensive to offer personalised tuition. However, it is almost as if AI was invented to solve the problem. AI can offer an adaptive pathway, tracking the student's progression and harnessing what it gleans to serve up a learning experience designed specifically for the individual. Add to this instructionally designed online content that can be watched by the student at their own pace until mastery is achieved, and you are getting close to what Bloom envisaged. However, although much of this is technically possible, questions remain. For example, was the improvement in performance the result of the "personal relationship" between teacher and student and the advice given, or the clarity in explaining the topic? Can this really be replicated by a machine?

In the meantime, how does this help?
What Bloom identified was that in most situations it's not the learner who is at fault but the method of learning or instruction. Be careful, however: this cannot be used as an excuse for lack of effort, "it's not my fault, it's because the teacher isn't doing it right".

How to use Bloom's principles

  • Change the instruction/content – if you are finding a particular topic difficult to understand, ask questions such as: do I need to look at this differently, maybe by watching a video or studying from another book? This provides an alternative way of exploring the problem.
  • Mastery of questions – at the end of most textbooks there are a number of questions; don't ignore them. Test yourself, and even if you get them wrong, spend some time understanding why before moving on. You might also use the 80% rule, the point being that you don't need to get everything right.

In conclusion – It's interesting that in 1984 Bloom came up with a solution to a problem we are still struggling to implement. What we can say is that personalisation is now high on the agenda for many organisations because they recognise that one size does not fit all. Although AI provides a glimmer of hope, for now at least Bloom's two-sigma problem remains unsolved.

Listen to Sal Khan on TED – Let’s teach for mastery, not test scores

Artificial Intelligence in education (AIEd)


The original Blade Runner was released in 1982. It depicts a future in which synthetic humans known as replicants are bioengineered by a powerful corporation to work on off-world colonies. The final scene stands out because of the "tears in rain" speech given by Roy, the dying replicant.

I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.

This was the moment in which the artificial human had begun to think for himself. But what makes this so relevant is that the film is predicting what life will be like in 2019. And with 2018 only a few days away, 2019 is no longer science fiction, and neither is Artificial Intelligence (AI).

Artificial Intelligence and machine learning

There is no single agreed-upon definition of AI. "Machine learning", on the other hand, is a field of computer science that enables computers to learn without being explicitly programmed. It does this by analysing large amounts of data in order to make accurate predictions; regression analysis does something very similar when it uses data to produce a line of best fit.
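To make the line-of-best-fit comparison concrete, here is an ordinary least-squares fit in plain Python, "learning" the pattern y = 2x + 1 from five invented data points:

```python
# Ordinary least-squares fit of y = slope*x + intercept, by hand.
xs = [1, 2, 3, 4, 5]
ys = [3, 5, 7, 9, 11]  # follows y = 2x + 1 exactly

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Slope = covariance of x and y divided by variance of x.
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

print(slope, intercept)  # 2.0 1.0 – the pattern "learned" from the data
```

Machine learning generalises this same idea, finding the parameters that best explain the data, to models with millions or billions of parameters rather than two.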

The problem with the term artificial intelligence is the word intelligence; defining this is key. If intelligence is the ability to learn, understand, and make judgments or have opinions based on reason, then you can see how difficult deciding whether a computer has intelligence might be. So, for the time being, think of it like this:

AI is the intelligence; machine learning is the enabler making the machine smarter i.e. it helps the computer behave as if it is making intelligent decisions.

AI in education

As with many industries, AI is already having an impact in education, but given the right amount of investment it could do much more. For example:

Teaching – Freeing teachers from routine and time-consuming tasks like marking and basic content delivery. This will give them time to develop greater class engagement, address behavioural issues and support higher-level skill development. These skills are far more valued by employers, as industries become less reliant on knowledge and more dependent on those who can apply it to solve real-world problems. In some ways AI could be thought of as a technological teaching assistant. In addition, the quality and quantity of feedback available to the teacher will not only be greatly improved with AI but will be far more detailed and personalised.

Learning – Personalised learning can become a reality by using AI to deliver a truly adaptive experience. AI will be able to present the student with a personalised pathway based on data gathered from their past activities and those of other students. It can scaffold the learning, allowing students to make enough mistakes that they gain a better understanding. AI is also an incredibly patient teacher, helping the student learn through constant repetition and trial and error.

Assessment and feedback – Feedback can also become rich, personalised and, most importantly, timely, offering commentary on what the individual student should do to improve rather than the bland comments often left on scripts, e.g. "see model answer" and "must try harder". Although some teachers will almost certainly mark "better" than an AI-driven system would be capable of, the consistency of marking for ALL students would be considerably improved.

Chatbots are a relatively new development that use AI. In the autumn of 2015, Professor Ashok Goel built an AI teaching assistant called Jill Watson using IBM's Watson platform. Jill was developed specifically to handle the high number of forum posts, over 10,000, from students enrolled on an online course. The students were unable to tell the difference between Jill and a "real" teacher. Watch and listen to Professor Goel talk about how Jill Watson was built.

Pearson has produced an excellent report on AIEd – click to download.

Back on earth

AI still has some way to go, and as with many technologies, despite much talk, getting it into the mainstream takes time and, most importantly, money. Although investors will happily finance driverless cars, they are less likely to do the same to improve education.

The good news is that Los Angeles is still more like La La Land than the dystopian vision created by Ridley Scott, and although we have embraced many new technologies, we have so far avoided many of the pitfalls predicted by the sci-fi writers of the past.

But we have to be careful. Watch this: a robot named "Sophia", developed by AI specialist David Hanson, has made history by becoming the first robot ever to be granted full Saudi Arabian citizenship. Honestly…