The virtual educator has arrived!

But which one is me?

Inspired by a recent LinkedIn post I made about what it might be like to have an avatar as a teacher, I thought I should check the evidence on whether avatars actually improve learning before getting too carried away with the technology itself.

What is an avatar?
An avatar is a digital or computer-generated representation of a person or character in a virtual environment. It can take various forms, for example a simple profile picture on social media or an audio avatar talking about a specific subject using a synthetic voice. However, with major advancements in generative AI, avatars are evolving beyond static images or basic voice interactions. We are increasingly seeing lifelike digital humans emerge, sophisticated AI-driven avatars capable of “understanding” what we say and generating intelligent responses, speaking with realistic voices and impressive synchronised lip movements. This transformation is redefining how humans engage with AI-powered virtual beings, blurring the lines between digital representation and authentic interaction.

As to what they look like, here are some examples:

  • Firstly, an audio avatar that I have now built into my blog to provide a different perspective on what has been written. Here the avatar “chats” about the blog rather than simply reading it out loud. See above.
  • Secondly, a Pixar-style avatar. The goal here is to challenge the assumption that an avatar must resemble a real person to be effective.
  • And lastly, a more realistic avatar, effectively an attempt to replicate me in a slightly imperfect way. This is not about fooling the audience, although that is now possible, but about exploring the idea that humans respond better to a more human-like character.

The talking head – good or bad?
However, there’s an elephant in the room when it comes to avatars: why do we need a talking head in the first place? Wouldn’t a simple voice-over, paired with well-structured content, be just as effective?

If you look at YouTube, almost everyone uses talking-head videos in one way or another. Surely, if they weren’t effective, no one would use them, a kind of “wisdom of crowds.” But does their popularity actually prove their value, or are we just following a trend without questioning its impact?

Let’s have a look at the evidence:
Having reviewed multiple studies, I found the results somewhat mixed. However, there’s enough insight to help us find an approach that works.

First, we have research from Christina Sondermann and Martin Merkt, Like it or learn from it: Effects of talking heads in educational videos. They concluded that learning outcomes were worse for videos with talking heads; their concern was that the talking head resulted in higher levels of cognitive load. Yet participants rated their perceived learning higher for videos with a talking head, gave them better satisfaction ratings and selected them more frequently. Secondly, there is another piece of research published five months later by Christina Sondermann and Martin Merkt (yes, the same people), What is the effect of talking heads in educational videos with different types of narrated slides? Here they found that “the inclusion of a talking head offers neither clear advantages nor disadvantages.” In effect, using a talking head had no detrimental impact, which is slightly at odds with their previous conclusion.

A little confusing, I agree, but stick with it…

Maybe we should move away from trying to prove the educational impact and instead consider the student’s perception of avatars. In the first report, Student Perceptions of AI-Generated Avatars, the students said “there was little difference between having an AI presenter or a human delivering a lecture recording.” They also thought that the AI-generated avatar was an efficient vehicle for content delivery. However, they still wanted human connection in their learning, thought some parts of learning needed to be facilitated by teachers, and felt that the avatar presentations were “not … like a real class.” The second report, Impact of Using Virtual Avatars in Educational Videos on User Experience, raised two really interesting points. Students found that high-quality video enhanced their learning, emotional experience and overall engagement. Furthermore, when avatars displayed greater expressiveness, students felt more connected to the content, leading to improved comprehension and deeper involvement.

For those designing avatars, this means prioritising both technical quality and expressive alignment. Avatars should be visually clear, well animated, and their facial expressions should reinforce the message being conveyed.

What does this all mean?
Bringing everything together, we can conclude that avatars or talking heads are not distractions that lead to cognitive overload. Instead, students appreciate them and relate to them emotionally; in fact, they see little difference between a recorded tutor and an avatar. Their expressiveness enhances engagement and might prove highly effective in helping students remember key points.

To balance the differing perspectives, a practical approach might be to omit the talking head when explaining highly complex topics (reducing cognitive load), allowing students to focus solely on the material, but to keep the avatar visible in most other situations, particularly when emphasising key concepts or prompting action, to ensure maximum impact. Alternatively, why not let the students decide by offering them the choice of having the talking head or not?

How might avatars be used?
One important distinction in the use of avatars is whether they are autonomous or scripted. Autonomous avatars are powered by large language models, such as ChatGPT, allowing them to generate responses dynamically based on user interactions. In contrast, scripted avatars are entirely controlled by their creator, who directs what they say.

A scripted avatar could be particularly useful in educational settings where consistency, accuracy, and intentional messaging are crucial. Because its responses are predetermined, educators can ensure that the avatar aligns with specific learning goals, maintains an appropriate tone, and avoids misinformation.

This makes it ideal for scenarios such as:
– Delivering structured lessons with carefully crafted explanations.
– Providing standardised guidance, ensuring every student receives the same high-quality information.
– Reinforcing key concepts without deviation, which can be especially beneficial when high-stakes assessments are used, as is the case with professional exams.

However, if we power these avatars with Generative AI, the possibilities increase significantly:

  • More personalised learning. One of the most exciting prospects is the ability of avatars to offer personalised and contextualised instruction.
  • Help with effective study. Avatars could be used to remind students about a specific learning strategy or a deadline for completing a piece of work. A friendly face at the right time might be more effective than an email from your tutor, or worse still, an automated one.
  • Motivational and engaging. These avatars could also have a positive effect on motivation and feelings about learning. They could be designed to match an individual’s personality and interests, making them far more effective in terms of both motivation and engagement.
  • Contextualised learning. AI-based avatars can support teaching in practical, real-world scenarios, including problem solving and case-based learning. Traditionally, creating these types of environments required significant resources such as trained actors or expensively designed virtual worlds.

A few concerns – autonomous avatars
Of course, as with any new technology there are some concerns and challenges:

Autonomous avatars pose several risks, including their ability to make mistakes; the particular problem with avatars is that they will be very convincing. We are already acutely aware that large language models can sometimes ‘hallucinate’ or simply make things up. Data protection is another concern, with risks ranging from deepfake misuse to avatars persuading users into sharing personal or confidential information, which could be exploited. Finally, value bias is a challenge, as AI-trained avatars may unintentionally reflect biased perspectives that a professional educator would recognise and navigate more responsibly.

Conclusions
Avatars, whether simple or lifelike, are gaining traction in education. Research indicates that while talking heads don’t necessarily improve learning outcomes, they don’t harm them, and students perceive them positively. A key distinction lies between scripted avatars, offering consistent and accurate pre-determined content, ideal for structured lessons, and autonomous avatars powered by AI that open up a world of possibility in several areas including personalisation.

Avatars are a powerful and exciting new tool that offer capabilities that in many ways go beyond previous learning technologies, but their effectiveness very much depends on how they are designed and used. But hasn’t that always been the case…

Finally – this is an excellent video that talks about some of the research I have referred to. It is, of course, presented by an avatar. What Does Research Say about AI Avatars for Learning?

PS – which one is me – none of them, including the second one from the left.

Chatting with a Chat Bot – Prompting

In December last year I wrote about what was then a relatively new technology, Generative AI (GAI). Seven months later it has become one of the most exciting and scary developments we have seen in recent years: it has the potential to create transformative change that will affect our very way of life, how we work and, the area I am most interested in, how we learn. Initially it was all about a particular type of GAI called ChatGPT 3.5, a large language model from OpenAI backed by Microsoft funding. But the market reacted quickly and there are now many more models, including Bard from Google, Llama 2 from Meta and a paid-for version of ChatGPT imaginatively entitled ChatGPT 4. And just to make this a little more complicated, in early February Microsoft unveiled a new version of Bing (Microsoft’s search engine that competes with Google) that includes an AI chatbot powered by the same technology as ChatGPT.

One of the reasons for its rapid adoption is that it’s so easy to use: you can literally chat with it as you might with a human. However, as with people, to have a meaningful conversation you need to plan what you want to say and be clear in how you say it, whilst providing sufficient context to avoid misunderstanding.

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.” The Turing Test – Alan Turing

Prompting – rubbish in, rubbish out

Prompting is how we talk with these GAI models. The quality and specificity of the prompt can significantly influence the response you get. A well-crafted prompt can lead to a coherent and relevant answer, whilst a poorly formulated one offers up ambiguity and irrelevant information. If only people thought as deeply about how they communicate with each other, we might avoid a lot of problems!

How to prompt
• Be clear, use specific and unambiguous language.
• Provide context as to why you are asking the question or who you are, and write in complete sentences. For example, “would William Shakespeare be considered a great writer if he were to be published today?”
• Ask open-ended questions; you will get more detailed and creative responses.
• Set rules such as the tone required or the length of an answer, limiting it to so many words, sentences or paragraphs. For example, “in a sentence could you provide a motivational response as to why learning is important?”
• Ask a follow-up question if you don’t get the answer you want. GAI is conversational and will remember what you asked last. For example, if you don’t think the answer goes into sufficient detail, say “could you provide more detail as to why this particular event was considered so important?”
• Providing examples of what you want will result in far more accurate answers. For example, why not copy in the report you wrote last time and ask GAI to update it with new information while keeping the same style and staying within 10% of the word count (see the sketch below this list).
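Putting that guidance into practice, here is a minimal sketch of how a well-structured prompt might be sent to a model programmatically. It assumes the OpenAI Python library and an API key are available, and the model name and prompt wording are purely illustrative assumptions, not recommendations.

```python
# A minimal sketch of a well-structured prompt, assuming the OpenAI Python
# library (v1+) is installed and OPENAI_API_KEY is set in the environment.
# The model name and prompt wording are illustrative, not recommendations.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The prompt follows the advice above: context, a clear open question,
# an explicit rule about length and the tone required.
prompt = (
    "You are helping a first-year literature student.\n"             # context
    "In no more than three sentences, and in an encouraging tone, "  # rules
    "explain whether William Shakespeare would be considered a "
    "great writer if he were first published today."                 # clear question
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name, use whichever model you have access to
    messages=[{"role": "user", "content": prompt}],
)

print(response.choices[0].message.content)
```

Because each API request stands alone, a follow-up question is handled by appending another message to the messages list and sending the whole conversation again, which is what carries the “memory” described above.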

……And although there is no need to be polite, I have to admit to saying please and thank you.

Types of questions

ChatGPT is very flexible when it comes to answering questions; the secret is in getting the prompt right. Here are just a few examples, with responses from ChatGPT.

• General knowledge, Q When was Winston Churchill born? A Winston Churchill was born on November 30, 1874.
• Explainer questions, Q Can you explain in 5 words why Churchill was considered a great prime minister? A Inspiring leadership during wartime victory.
• Comparison questions, Q In a sentence can you compare Winston Churchill with Boris Johnson? A. Winston Churchill was a historical wartime leader known for his inspirational leadership during World War II, while Boris Johnson is a contemporary British politician who served as Prime Minister from 2019 to 2022.
• Others – ask how to do something (e.g. bake a lemon drizzle cake), ask it to define “apprentice”, tell you a joke, or write you a poem in Japanese. You are limited only by your imagination.

Using it to learn

There are many ways in which this new technology can help with learning, but it comes with a couple of health warnings. Firstly, GenAI can be a great teacher, but it cannot learn for you. It is an excellent resource, able to provide answers 24/7, clarify, summarise and so on, but ultimately you are the one that has to do the learning. Secondly, all GenAI bots suffer from something called hallucination: they make stuff up. The problem with this is that, as a learner, you might not be able to tell, because the answer will sound so authentic. In terms of how common this is, ChatGPT itself estimates that around 20% of the answers it gives might have something wrong with them, although OpenAI is working on reducing this to less than 10%.

Here are a few ways you can use GAI:
• Summarise large amounts of text – copy a whole section of text into the model and ask it to summarise the most important points. Remember, the more detail you give, the more relevant the response, e.g. “produce a timeline of key events” or “identify the theories used in the answer”.
• Question practice and marking – copy a question in and ask for the answer in 100 words. Paste your own answer in and ask it to give you some feedback against the answer it has just produced. This can be further refined if you put in the examiner’s answer and, if you have it, the marking guide.
• Ask for improvement – put your answer into the model along with the examiner’s answer and ask how you might improve the writing style, making it more concise or highlighting the most important points.
• Produce flip cards – ask the model to write you 5 questions with answers in the style of a flip card (see the sketch after this list).
• Produce an answer for a specific qualification – ask if it could produce an answer, possible to complete in one hour, that would pass the AQA GCSE Biology exam.
• Explain something – ask it to explain, for example, photosynthesis in simple terms or as an analogy or metaphor.
• Coach me – ask it to review your answer against the examiner’s answer but, rather than correcting it, ask it to coach you through the process so that you develop a better understanding.
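As an illustration of the flip-card idea above, here is a short sketch that asks a model for five question-and-answer cards on a topic. Again, the OpenAI library, the model name and the expectation that the model returns clean JSON are all assumptions made for the purposes of the example.

```python
# Illustrative sketch of the "produce flip cards" idea above, assuming the
# OpenAI Python library (v1+) and an API key in the environment. The model
# name and the request for JSON output are assumptions, not guarantees.
import json
from openai import OpenAI

client = OpenAI()

topic = "photosynthesis at GCSE level"  # hypothetical topic, change as needed
prompt = (
    f"Write 5 flip cards on {topic}. "
    "Return only a JSON list of objects with 'question' and 'answer' keys."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": prompt}],
)

text = response.choices[0].message.content

# Models do not always return clean JSON, so fall back to printing the raw text.
try:
    cards = json.loads(text)
    for number, card in enumerate(cards, start=1):
        print(f"Card {number}: {card['question']}")
        print(f"  Answer: {card['answer']}")
except (json.JSONDecodeError, TypeError, KeyError):
    print(text)
```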

There is little doubt as to the potential of GenAI in learning; its biggest impact may be in developing countries where there is limited access to teachers and few resources. Although most would agree that an educated world is a better one, there will need to be some safeguards. It can’t be left to the open market; education is simply too important.

“Education is the most powerful weapon which you can use to change the world”
Nelson Mandela

And if you want to see some of these tools in action, as well as hear Sal Khan talk about Khanmigo, his version of a teacher chatbot, see below.
Sal Khan talks about Khanmigo
ChatGPT in action for studying and exams
Revise SMARTER, not harder: How to use ChatGPT to ace your exams

Let’s chat about ChatGPT – WOW!

If you have not heard of ChatGPT, where have you been since November 30th, when it was launched by OpenAI, the company that developed what is fast becoming a groundbreaking technology? Since then, it’s been making waves; everyone is talking about it and using it. In the first week alone, over 1,000,000 people had signed up to what is, for the time being at least, free. OpenAI was founded in 2015 by Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba and Sam Altman, although Musk stepped down from the board in February 2018 to avoid any conflict with Tesla.

Originally a “not for profit”, in 2019 it became a “capped for-profit”, meaning in this instance that investors can’t earn more than 100 times their investment. And just to give some idea of scale, OpenAI expects to generate $200 million in revenue next year and $1 billion by 2024.

Its mission is simple, yet as you might imagine ambitious – to ensure that artificial general intelligence benefits all of humanity.

In contrast, here is Google’s mission – to organise the world’s information and make it universally accessible and useful.

But what is ChatGPT? – To find an answer to this question, we can actually ask ChatGPT; however, before we do, there are some points worth noting. Unlike Google, ChatGPT is not a search engine; it’s not even connected to the internet. In fact, the data sets used to train it haven’t been updated since the end of 2021. You will find this out if you ask anything relatively topical, for example who won the World Cup this year; it will say…

“I’m sorry, but I am not able to provide information about events that have occurred after my knowledge cut-off in 2021”.

The answers appear in real time in a suitably bot-like fashion, but what makes it really interesting is that it uses Generative AI, which enables it to start with existing content and from that produce something new and original. Let me clarify: it’s not simply taking content from somewhere else, it’s creating new and original material.

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI for use in chatbots and conversational systems. It is able to generate human-like text in real-time and understand the context of a conversation, allowing it to generate appropriate responses based on the topic being discussed. ChatGPT is also efficient and fast, making it ideal for use in chatbot applications where fast response times are important. It is able to handle a wide variety of tasks, including answering questions, providing information, and engaging in casual conversation.

The above was the answer provided by ChatGPT when asked for a definition within 90 words. In fact everything in purple and italics in this blog has been taken directly from ChatGPT.

How does it learn?
Once again let’s ask ChatGPT, after all why should I do all the hard work.

ChatGPT uses a type of machine learning called supervised learning. In supervised learning, the model is trained on a labeled dataset, where the correct output is provided for each input. The model is then able to make predictions for new, unseen data based on the patterns it learned during training. To train ChatGPT, a large dataset of text is fed into the model, and the model is then trained to predict the next word in a sequence based on the words that come before it. Through this process, the model learns to recognize patterns in language and generate coherent text. The more data the model is trained on, the better it becomes at generating natural-sounding text and understanding the relationships between words and concepts.

In summary, the model is initially trained by humans (the labellers); this is the supervised stage. It then provides multiple answers to a question, which the trainer ranks in terms of quality; this is the reinforcement stage, which provides more data for the model. And lastly, reward models are created that automatically reward the model for getting it right; this is the unsupervised stage.

It might be easier to think of it in the same way as learning to ride a bike: firstly, someone shows you how to ride (supervised); secondly, you have a go and are given feedback in order to improve (reinforcement); and finally, if you ride for 5 minutes you are rewarded with £5.00, but fall off and you get nothing (reward model – unsupervised).
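To make the “predict the next word” idea a little more concrete, here is a deliberately tiny toy sketch that simply counts which word most often follows which in a small piece of text and uses those counts to “predict”. A real large language model learns far richer patterns with a neural network trained on billions of words, so treat this only as an analogy in code.

```python
# A toy illustration of next-word prediction: for each word in a tiny corpus,
# count which word follows it, then "predict" by picking the most common one.
# Real language models do this with neural networks trained on billions of
# words; this is only meant to make the basic idea concrete.
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat and the cat slept on the mat "
    "and the dog sat on the rug"
).split()

# Build a table: word -> counts of the words seen immediately after it.
followers = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the corpus."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

print(predict_next("on"))   # "the", the word that most often followed "on"
print(predict_next("sat"))  # "on"
```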

Clever… but what are the implications?
We are at one of those “genie out of the bottle” moments, when something that you thought might happen in the future becomes a reality today. As a consequence, we start to ask questions such as: is this technology good or bad, and what will it mean for jobs and the future of work? If it can produce high-quality answers to questions, how can we tell if it’s the student’s work or simply the result of an exercise in cut and paste? And because it can write poems, stories and news articles, how can you know if anything is truly original? Think deepfake, but using words. By way of an example, here is a limerick I didn’t write, about accountants.

There once was an accountant named Sue
Who loved numbers, they were her clue
She worked with great care
To balance the ledger with great flair
And made sure all the finances were true

Okay it might need a bit of work but hopefully you can see it has potential.

We have, however, seen this all before when other innovative technologies first appeared, for example the motor car, the development of computers and, more recently, mobile phones and the internet. The truth is they did change how we worked and resulted in people losing their jobs; the same is almost certainly going to be the case with ChatGPT. One thing is for sure, you can’t put the genie back in the bottle.

Technology is neither good nor bad; nor is it neutral. Melvin Kranzberg’s first law of technology

And for learning
There have already been suggestions that examinations should no longer be allowed to be sat remotely and that universities should stop using essays and dissertations to assess performance.

However, ChatGPT is not Deep Thought from The Hitchhiker’s Guide to the Galaxy, nor HAL from 2001: A Space Odyssey; it has many limitations. The answers are not always correct, the quality of the answer is dependent on the quality of the question and, as we have already seen, 2022 doesn’t exist at the moment.

There are also some really interesting ways in which it could be used to help students.

  • Use it as a “critical friend”: paste your answer into ChatGPT and ask for ways it might be improved, for example in terms of grammar and/or structure.
  • Similar to the internet, if you have writer’s block just post a question and see what comes back.
  • Ask it to generate a number of test questions on a specific subject.
  • Have a conversation with it, ask it to explain something you don’t understand.

Clearly it should not be used by a student to pass off an answer as their own; that’s called cheating. But it is a tool, and one that has a lot of potential if used properly by both students and teachers.

Once upon a time, sound was new technology. Peter Jackson, filmmaker

PS – if you are more interested in pictures than words check out DALL·E 2, which allows anyone to create images by writing a text description. This has also been built by OpenAI.

Artificial Intelligence in education (AIEd)


The original Blade Runner was released in 1982. It depicts a future in which synthetic humans known as replicants are bioengineered by a powerful corporation to work on off-world colonies. The final scene stands out because of the “tears in rain” speech given by Roy, the dying replicant.

I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.

This was the moment in which the artificial human had begun to think for himself. But what makes this so relevant is that the film is predicting what life will be like in 2019. And with 2018 only a few days away, 2019 is no longer science fiction, and neither is Artificial Intelligence (AI).

Artificial Intelligence and machine learning

There is no single agreed-upon definition of AI. “Machine learning”, on the other hand, is a field of computer science that enables computers to learn without being explicitly programmed. It does this by analysing large amounts of data in order to make accurate predictions; regression analysis, for example, does something very similar when it uses data to produce a line of best fit.
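To illustrate that line-of-best-fit comparison, here is a small sketch that fits a straight line to some invented study-time and test-score data and then uses it to predict a new score; the numbers are made up purely for illustration.

```python
# A line of best fit as a tiny "learning from data" example. The hours-studied
# and test-score numbers below are invented purely for illustration.
import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6], dtype=float)
score = np.array([52, 55, 61, 64, 70, 73], dtype=float)

# Fit a straight line (degree-1 polynomial): score ~ slope * hours + intercept.
slope, intercept = np.polyfit(hours, score, deg=1)

def predict(hours_studied: float) -> float:
    """Predict a test score from hours studied using the fitted line."""
    return slope * hours_studied + intercept

print(f"slope = {slope:.2f}, intercept = {intercept:.2f}")
print(f"predicted score after 7 hours: {predict(7.0):.1f}")
```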

The problem with the term artificial intelligence is the word intelligence; defining this is key. If intelligence is “the ability to learn, understand, and make judgments or have opinions based on reason”, then you can see how difficult deciding whether a computer has intelligence might be. So, for the time being, think of it like this:

AI is the intelligence; machine learning is the enabler making the machine smarter i.e. it helps the computer behave as if it is making intelligent decisions.

AI in education

As with many industries, AI is already having an impact in education, but given the right amount of investment it could do much more. For example:

Teaching – Freeing teachers from routine and time-consuming tasks like marking and basic content delivery will give them time to develop greater class engagement, address behavioural issues and focus on higher-level skill development. These skills are far more valued by employers as industries become less reliant on knowledge itself and more dependent on those who can apply it to solve real-world problems. In some ways, AI could be thought of as a technological teaching assistant. In addition, the quality and quantity of feedback available to the teacher will not only be greatly improved with AI but will be far more detailed and personalised.

Learning – Personalised learning can become a reality by using AI to deliver a truly adaptive experience. AI will be able to present the student with a personalised pathway based on data gathered from their past activities and those of other students. It can scaffold the learning, allowing students to make enough mistakes to gain a better understanding. AI is also an incredibly patient teacher, helping the student learn through constant repetition and trial and error.

Assessment and feedback – Feedback can also become rich, personalised and, most importantly, timely, offering commentary on what the individual student should do to improve rather than the bland comments often left on scripts, e.g. “see model answer” and “must try harder.” Although some teachers will almost certainly mark “better” than an AI-driven system would be capable of, the consistency of marking for ALL students would be considerably improved.

Chatbots are a relatively new development that use AI. In the autumn of 2015, Professor Ashok Goel built an AI teaching assistant called Jill Watson using IBM’s Watson platform. Jill was developed specifically to handle the high number of forum posts, over 10,000, made by students enrolled on an online course. The students were unable to tell the difference between Jill and a “real” teacher. Watch and listen to Professor Goel talk about how Jill Watson was built.

Pearson has produced an excellent report on AIEd – click to download.

Back on earth

AI still has some way to go, and as with many technologies, although there is much talk, getting it into the mainstream takes time and, most importantly, money. Although investors will happily finance driverless cars, they are less likely to do the same to improve education.

The good news is that Los Angeles is still more like La La Land than the dystopian vision created by Ridley Scott, and although we have embraced many new technologies, we have avoided many of the pitfalls predicted by the sci-fi writers of the past, so far at least.

But we have to be careful. Watch this: it’s a robot named “Sophia”, developed by AI specialist David Hanson, which has made history by becoming the first ever robot to be granted full Saudi Arabian citizenship, honestly…