The virtual educator has arrived!

But which one is me?

Inspired by a recent LinkedIn post I made about what it might be like to have an avatar as a teacher, I thought I should check the evidence on how effective avatars are at improving learning before getting too carried away with the technology itself.

What is an avatar?
An avatar is a digital or computer-generated representation of a person or character in a virtual environment. It can take various forms, for example a simple profile picture on social media or an audio avatar talking about a specific subject using a synthetic voice. However, with major advancements in generative AI, avatars are evolving beyond static images or basic voice interactions. We are increasingly seeing lifelike digital humans emerge, sophisticated AI-driven avatars capable of “understanding” what we say and generating intelligent responses, speaking with realistic voices and impressive synchronised lip movements. This transformation is redefining how humans engage with AI-powered virtual beings, blurring the lines between digital representation and authentic interaction.

As to what they look like, here are some examples:

  • Firstly, an audio avatar that I have now built into my blog to provide a different perspective on what has been written. Here the avatar “chats” about the blog rather than simply reading it out loud. See above.
  • Secondly, a Pixar-style avatar. The goal here is to challenge the assumption that an avatar must resemble a real person to be effective.
  • And lastly, a more realistic avatar: effectively an attempt to replicate me, in a slightly imperfect way. This is not about fooling the audience, although that is now possible, but about exploring the idea that humans respond better to a more human-like character.

The talking head – good or bad?
However, there’s an elephant in the room when it comes to avatars: why do we need a talking head in the first place? Wouldn’t a simple voice-over, paired with well-structured content, be just as effective?

If you look at YouTube, almost everyone uses talking-head videos in one way or another. Surely, if they weren’t effective, no one would use them – a kind of “wisdom of crowds.” But does their popularity actually prove their value, or are we just following a trend without questioning its impact?

Let’s have a look at the evidence:
After reviewing multiple studies, the findings are somewhat mixed. However, there’s enough insight to help us find an approach that works.

First, we have research from Christina Sondermann and Martin Merkt – Like it or learn from it: Effects of talking heads in educational videos. They concluded that learning outcomes were worse for videos with talking heads; their concern was that the talking head resulted in higher levels of cognitive load. Yet participants rated their perceived learning higher for videos with a talking head, gave them better satisfaction ratings, and selected them more frequently. Secondly, another piece of research published five months later by Christina Sondermann and Martin Merkt – yes, the same people – What is the effect of talking heads in educational videos with different types of narrated slides? Here they found that “the inclusion of a talking head offers neither clear advantages nor disadvantages.” In effect, using a talking head had no detrimental impact, which is slightly at odds with their previous conclusion.

A little confusing, I agree, but stick with it…

Maybe we should move away from trying to prove the educational impact and instead consider students’ perception of avatars. In the first report, Student Perceptions of AI-Generated Avatars, the students said “there was little difference between having an AI presenter or a human delivering a lecture recording.” They also thought the AI-generated avatar was an efficient vehicle for content delivery. However, they still wanted human connection in their learning, believed some parts of learning needed to be facilitated by teachers, and felt the avatar presentations were “not … like a real class.” The second report, Impact of Using Virtual Avatars in Educational Videos on User Experience, raised two really interesting points. Students found that high-quality video enhanced their learning, emotional experience, and overall engagement. Furthermore, when avatars displayed greater expressiveness, students felt more connected to the content, leading to improved comprehension and deeper involvement.

For those designing avatars, this means prioritising both technical quality and expressive alignment. Avatars should be visually clear, well animated, and their facial expressions should reinforce the message being conveyed.

What does this all mean?
Bringing everything together, we can conclude that avatars or talking heads are not necessarily distractions that lead to cognitive overload. Instead, students appreciate them and relate to them emotionally; in fact, they see little difference between a recorded tutor and an avatar. Their expressiveness enhances engagement and might prove highly effective in helping students remember key points.

To balance the differing perspectives, a practical approach might be to omit the talking head when explaining highly complex topics (reducing cognitive load), allowing students to focus solely on the material, but to keep the avatar visible in most other situations, particularly when emphasising key concepts or prompting action. Alternatively, why not let students decide by offering them the choice of having the talking head or not?

How might avatars be used?
One important distinction in the use of avatars is whether they are autonomous or scripted. Autonomous avatars are powered by large language models, such as ChatGPT, allowing them to generate responses dynamically based on user interactions. In contrast, scripted avatars are entirely controlled by their creator, who directs what they say.

A scripted avatar could be particularly useful in educational settings where consistency, accuracy, and intentional messaging are crucial. Because its responses are predetermined, educators can ensure that the avatar aligns with specific learning goals, maintains an appropriate tone, and avoids misinformation.

This makes it ideal for scenarios such as:
– Delivering structured lessons with carefully crafted explanations.
– Providing standardised guidance, ensuring every student receives the same high-quality information.
– Reinforcing key concepts without deviation, which can be especially beneficial where high-stakes assessments are used, as is the case with professional exams.

However, if we power these avatars with Generative AI, the possibilities increase significantly:

  • More personalised learning. One of the most exciting prospects is the ability of avatars to offer personalised and contextualised instruction.
  • Help with effective study. Avatars could be used to remind students about a specific learning strategy or a deadline for completing a piece of work. A friendly face at the right time might be more effective than an email from your tutor – or, worse still, an automated one.
  • Motivational and engaging. These avatars could also have a positive effect on motivation and feelings about learning. They could be designed to match an individual’s personality and interests, making them far more effective at driving motivation and engagement.
  • Contextualised Learning. AI-based avatars can support teaching in practical, real-world scenarios, including problem solving and case-based learning. Traditionally, creating these types of environments required significant resources such as trained actors or expensive designed virtual worlds.

A few concerns – autonomous avatars
Of course, as with any new technology there are some concerns and challenges:

Autonomous avatars pose several risks, starting with their ability to make mistakes; the particular problem with avatars is that they will be very convincing. We are already acutely aware that large language models can sometimes ‘hallucinate’, or simply make things up. Data protection is another concern, with risks ranging from deepfake misuse to avatars persuading users into sharing personal or confidential information that could be exploited. Finally, value bias is a challenge, as AI-trained avatars may unintentionally reflect biased perspectives that a professional educator would recognise and navigate more responsibly.

Conclusions
Avatars, whether simple or lifelike, are gaining traction in education. Research indicates that while talking heads don’t necessarily improve learning outcomes, they don’t harm them, and students perceive them positively. A key distinction lies between scripted avatars, offering consistent and accurate pre-determined content, ideal for structured lessons, and autonomous avatars powered by AI that open up a world of possibility in several areas including personalisation.

Avatars are a powerful and exciting new tool, offering capabilities that in many ways go beyond previous learning technologies, but their effectiveness very much depends on how they are designed and used. But hasn’t that always been the case…

Finally – this is an excellent video that covers some of the research I have referred to. It is, of course, presented by an avatar. What Does Research Say about AI Avatars for Learning?

PS – which one is me – none of them, including the second one from the left.

The AI Education Paradox: Answers are cheap, questions are priceless

After 7.5 million years of computation, Deep Thought reveals the answer: “forty-two.”

This was the “Answer to the Ultimate Question of Life, the Universe, and Everything” in The Hitchhiker’s Guide to the Galaxy. 

Coming up with answers to questions is reasonably easy, especially for such a big computer as “Deep Thought,” although in fairness taking 7.5 million years is a little slow by modern standards! When I asked ChatGPT it only needed a few seconds, although it did eventually ask me what I thought the answer was. 

What is far more difficult than answering questions is asking them, which is why in Hitchhiker’s they go on to ask Deep Thought if it can produce “The Ultimate Question” to go with the answer 42. See* – spoiler, it doesn’t end well.

AI has all the answers?
Historically, it could be argued that the educational model has been largely focussed on knowledge transfer, requiring students to absorb and regurgitate pre-determined facts and solutions. This model, while valuable when information was less accessible, is starting to creak under the pressure of new technologies such as GenAI. After all, what’s the point of teaching facts and answers to questions when you have ChatGPT?

Although you could have made a very similar point about the internet, large language models are different. They are far more accessible and provide credible, if not always correct, answers instantly, requiring little or no effort from the individual – which is of course part of the problem.

This is not, however, a good argument for avoiding teaching knowledge, because without it as a foundation it becomes almost impossible to develop those hugely important higher-level skills such as critical thinking and problem solving. Dan Willingham, the cognitive scientist, is very clear on this:

 “Thinking well requires knowing facts, and that’s true not simply because you need something to think about. Critical thinking and processes such as reasoning and problem solving are intimately intertwined with factual knowledge” Dan Willingham (edited).

But that’s not all. In addition to continuing to teach knowledge, we need to pivot away from what GenAI does best – data analysis, repetitive tasks and answering questions – to focus on the areas in which humans excel.

Learning… to beat AI
There is little doubt that GenAI is eroding human skills and, as a consequence, reshaping labour markets. The Tony Blair Institute (The Impact of AI on the Labour Market) estimates that somewhere in the region of one to three million jobs could be displaced**. Take, for example, my own industry, finance. GenAI can analyse bank statements, matching transactions with internal records; it can review historical financial data to identify trends and patterns, as well as produce forecasts to support financial planning.

However, it’s not all bad news. Although GenAI is excellent at processing vast amounts of data and providing rapid output, the quality of what is produced is very dependent on the questions asked, and humans are capable of asking great questions.

The three AI-proof human skills

Skill no 1 – Asking the right questions. This may seem counterintuitive – surely “any fool can ask a question” – but can they ask a good one? The ability to ask the right question is far from trivial; it’s a spark for curiosity, and leads to growth and critical thinking. Socrates built his entire philosophy on the principle of asking questions, challenging assumptions in search of the underlying truth, and in so doing fostered a deep understanding of the subject.

Questions aren’t merely tools for obtaining answers; they are catalysts for refining our thinking, discovering new perspectives, and embracing intellectual humility.

How to ask questions:

  • Move beyond simple “what” and “how” questions; ask “why” and “what if”
  • Break down complex inquiries into smaller, more manageable parts
  • Challenge assumptions, for example, “What are the counterarguments to this idea?” or “What would someone with a different perspective say?”

Skill no 2 – Evaluating the answer. While AI can produce insights, summaries, or responses that may seem well crafted, it lacks the uniquely human ability to contextualise, empathise, and discern subtleties. Think of evaluation in this context as – the “human act” of applying critical thinking, professional judgment, and emotional intelligence to assess the relevance, accuracy and practical value of AI generated content.

This process goes beyond mere interpretation. Human evaluation is, in essence, the bridge that ensures AI contributions remain meaningful and grounded in purpose. In simple terms, interpretation focuses on meaning, while evaluation focuses on judgment.

How to evaluate:

  • Have clear criteria: be specific and decide on a method of prioritisation
  • Use multiple sources of evidence, combine numerical data with qualitative insights
  • Distinguish facts from assumptions, being careful to separate what you can prove from information that is speculative or anecdotal

Skill no 3 – Maintaining agency and an ethical perspective. Human agency requires the individual to act independently and make informed decisions about the AI output. Agency involves understanding AI’s capabilities and limitations, questioning its outputs, and actively deciding how it is applied rather than passively following its suggestions. By retaining oversight and exercising judgment, we ensure that AI remains a tool serving human needs, rather than a means for delegating responsibility.

Equally important is the ethical perspective. AI is devoid of inherent morality, able only to reflect the values embedded in its training data. Humans must actively define and enforce ethical boundaries, addressing biases and prioritising human values such as compassion and social responsibility.

How to maintain agency and an ethical perspective:

  • Educate yourself about AI, understanding how it works, including its capabilities, limitations, and potential biases
  • Develop an ethical framework. Create a set of guidelines to assess AI use, including its long-term impact on individuals, communities, and the environment
  • Be the Human in the Loop. Remember that you have ultimate responsibility both for the final decision and the ethical impact. This should never be delegated

Conclusion
While AI delivers instant results, true education goes beyond merely retrieving information. It requires deep understanding, a spirit of inquiry, and continuous personal growth. For students, this translates to mastering the art of asking thoughtful, probing questions, and developing the ability to critically evaluate responses.

Educators have a more complex role. They must not only provide the necessary foundational knowledge base, but also teach and assess those uniquely human skills that AI will find hard to replicate – the ability to ask good questions, judge answers wisely, and maintain ethical agency.

Footnotes
*In Hitchhiker’s, Deep Thought is unable to come up with the ultimate question; it needs a bigger and better computer, but it can build one “of such infinite complexity that life itself will form part of its operational matrix.” It’s called Earth!
**The Impact of AI on the Labour Market report goes on to say that the job displacements will not occur all at once, but instead will rise gradually with the pace of AI adoption across the wider economy. Moreover, the rise in unemployment is likely to be capped and ultimately offset as AI creates new demand for workers, which pulls displaced workers back into the workforce.

Let’s chat about ChatGPT – WOW!

If you have not heard of ChatGPT, where have you been since November 30th, when it was launched by OpenAI, the company behind what is fast becoming a groundbreaking technology? Since then, it’s been making waves – everyone is talking about it and using it. In the first week alone, over 1,000,000 people signed up to what is, for the time being at least, free. OpenAI was founded in 2015 by Elon Musk, Greg Brockman, Ilya Sutskever, Wojciech Zaremba, and Sam Altman, although Musk stepped down from the board in February 2018 to avoid any conflict with Tesla.

Originally a “not for profit”, in 2019 it became a “capped for-profit”, meaning in this instance that investors can’t earn more than 100 times their investment. And just to give some idea of scale, OpenAI expects to generate $200 million in revenue next year and $1 billion by 2024.

Its mission is simple, yet as you might imagine ambitious – to ensure that artificial general intelligence benefits all of humanity.

In contrast, here is Google’s mission – to organise the world’s information and make it universally accessible and useful.

But what is ChatGPT? – To find an answer to this question we can actually ask ChatGPT, but before we do, there are some points worth noting. Unlike Google, ChatGPT is not a search engine; it’s not even connected to the internet – in fact, the datasets used to train it haven’t been updated since the end of 2021. You will discover this if you ask anything relatively topical, for example, who won the World Cup this year. It will say:

“I’m sorry, but I am not able to provide information about events that have occurred after my knowledge cut-off in 2021”.

The answers appear in real time in a suitably bot-like fashion, but what makes it really interesting is that it uses generative AI, which enables it to start with existing content and produce something new and original. Let me clarify: it’s not simply taking content from somewhere else, it’s creating new and original material.

ChatGPT is a variant of the GPT (Generative Pre-trained Transformer) language model developed by OpenAI for use in chatbots and conversational systems. It is able to generate human-like text in real-time and understand the context of a conversation, allowing it to generate appropriate responses based on the topic being discussed. ChatGPT is also efficient and fast, making it ideal for use in chatbot applications where fast response times are important. It is able to handle a wide variety of tasks, including answering questions, providing information, and engaging in casual conversation.

The above was the answer provided by ChatGPT when asked for a definition within 90 words. In fact everything in purple and italics in this blog has been taken directly from ChatGPT.

How does it learn?
Once again, let’s ask ChatGPT – after all, why should I do all the hard work?

ChatGPT uses a type of machine learning called supervised learning. In supervised learning, the model is trained on a labeled dataset, where the correct output is provided for each input. The model is then able to make predictions for new, unseen data based on the patterns it learned during training. To train ChatGPT, a large dataset of text is fed into the model, and the model is then trained to predict the next word in a sequence based on the words that come before it. Through this process, the model learns to recognize patterns in language and generate coherent text. The more data the model is trained on, the better it becomes at generating natural-sounding text and understanding the relationships between words and concepts.

In summary, the model is initially trained by humans (the labellers) – this is the supervised stage. It is then able to provide multiple answers to a question, which the trainer ranks in terms of quality – this is the reinforcement stage, which provides more data for the model. And lastly, reward models are created that automatically reward the model for getting it right – the unsupervised stage.

It might be easier to think of it in the same way as learning to ride a bike: firstly, someone shows you how to ride (supervised); secondly, you have a go and are given feedback in order to improve (reinforcement); and finally, if you ride for 5 minutes you are rewarded with £5.00 – fall off and you get nothing (reward model – unsupervised).
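
The “predict the next word” idea can be sketched at toy scale. This is purely illustrative – a simple bigram counter over a dozen words, nothing like the transformer architecture ChatGPT actually uses – but the underlying principle of learning which word tends to follow which is the same:

```python
from collections import Counter, defaultdict

# Tiny made-up corpus; real models train on billions of words
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Count which word follows which (a "bigram" model)
followers = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    followers[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`."""
    return followers[word].most_common(1)[0][0]

print(predict_next("the"))  # "cat" – it follows "the" most often here
```

Scale the corpus up by many orders of magnitude, replace the counting with a neural network, and you have the rough shape of the supervised stage described above.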

Clever… but what are the implications?
We are at one of those “genie out of the bottle” moments, when something you thought might happen in the future becomes a reality today. As a consequence, we start to ask questions such as: is this technology good or bad? What will it mean for jobs and the future of work? If it can produce high-quality answers to questions, how can we tell if it’s the student’s work or simply the result of an exercise in cut and paste? And because it can write poems, stories and news articles, how can you know if anything is truly original – think deepfake, but using words. By way of an example, here is a limerick I didn’t write, about accountants.

There once was an accountant named Sue
Who loved numbers, they were her clue
She worked with great care
To balance the ledger with great flair
And made sure all the finances were true

Okay it might need a bit of work but hopefully you can see it has potential.

We have however seen this all before when other innovative technologies first appeared, for example, the motor car, the development of computers and more recently mobile phones and the internet. The truth is they did change how we worked and resulted in people losing their jobs, the same is almost certainly going to be the case with ChatGPT. One thing is for sure, you can’t put the genie back in the bottle.

Technology is neither good nor bad; nor is it neutral. Melvin Kranzberg’s first law of technology

And for learning
There have already been some suggestions that examinations should no longer be allowed to be sat remotely and that universities should stop using essays and dissertations to assess performance.

However, ChatGPT is not Deep Thought from The Hitchhiker’s Guide to the Galaxy, nor HAL from 2001: A Space Odyssey; it has many limitations. The answers are not always correct, the quality of the answer is dependent on the quality of the question and, as we have already seen, 2022 doesn’t exist at the moment.

There are also some really interesting ways in which it could be used to help students.

  • Use it as a “critical friend”: paste your answer into ChatGPT and ask for ways it might be improved, for example in terms of grammar and/or structure.
  • Similar to the internet, if you have writers block just post a question and see what comes back.
  • Ask it to generate a number of test questions on a specific subject.
  • Have a conversation with it, ask it to explain something you don’t understand.

Clearly it should not be used by a student to pass off an answer as their own – that’s called cheating – but it’s a tool, and one that has a lot of potential if used properly by both students and teachers.

Once upon a time, sound was new technology. Peter Jackson filmmaker

PS – if you are more interested in pictures than words check out DALL·E 2, which allows anyone to create images by writing a text description. This has also been built by OpenAI.

Bloom’s 1984 – Getting an A instead of a C

When people see the year 1984, most think of George Orwell’s book about a dystopian future, but a few other things happened that year. Dynasty and Dallas were the most popular TV programmes, and one of my favourite movies, Amadeus, won best picture at the Oscars. You could be excused for missing the publication of what has become known as the two-sigma problem by Benjamin Bloom, of Bloom’s taxonomy fame. He provided the answer to a question that both teachers and students have been asking for some time: how can you significantly improve student performance?

One of the reasons this is still being talked about nearly 40 years later is that Bloom demonstrated that most students have the potential to achieve mastery of a given topic. The implication is that it’s the teaching that’s at fault rather than the students’ inherent lack of ability. It’s worth adding that this might equally apply to the method of learning: it’s not you but the way you’re studying.

The two-sigma problem
Two of Bloom’s doctoral students (J. Anania and A.J. Burke) compared how people learned in three different situations:

  1. A conventional lecture with 30 students and one teacher. The students listened to the lectures and were periodically tested on the material.
  2. Mastery learning – the conventional lecture with the same testing, however students were given formative-style feedback and guidance, effectively correcting misunderstandings, before re-testing to find out the extent of their mastery.
  3. Tutoring – this was the same as for mastery learning but with one teacher per student.

The results were significant, showing that mastery learning increased student performance by approximately one standard deviation (sigma), the equivalent of an increase in grade from a B to an A. However, when this was combined with one-to-one teaching, performance improved by two standard deviations, the equivalent of moving from a C to an A. Interestingly, the amount of correction students’ work needed was relatively small.
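
Assuming test scores are roughly normally distributed, the sigma shifts above can be translated into percentile rankings. A quick sketch using only Python’s standard library:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal distribution (mean 0, sigma 1)

average = nd.cdf(0.0)    # typical lecture-only student: 50th percentile
mastery = nd.cdf(1.0)    # +1 sigma (mastery learning): ~84th percentile
tutoring = nd.cdf(2.0)   # +2 sigma (mastery + tutoring): ~98th percentile

print(f"{average:.0%}, {mastery:.0%}, {tutoring:.0%}")
```

In other words, the average tutored student performed better than roughly 98% of the students taught by conventional lecture – which is what makes the effect so striking.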

Bloom then set up the challenge that became known as the two-sigma problem.

“Can researchers and teachers devise teaching/learning conditions that will enable the majority of students under group instruction to attain levels of achievement that can at present be reached only under good tutoring conditions?”

In other words, how can you do this in the “real world”, at scale, where it’s not possible to provide this type of formative feedback and one-to-one tuition because it would be too expensive?

Mastery learning – To answer this question you probably need to understand a little more about mastery learning. Firstly, content has to be broken down into small chunks, each with a specific learning outcome; the process is very similar to the direct instruction I have written about before. The next stage is important: learners have to demonstrate mastery of each chunk of content, normally by passing a test with a score of around 80%, before moving on to new material. If they don’t, the student is given extra support, perhaps in the form of additional teaching or homework. Learners then continue the cycle of studying and testing until the mastery criteria are met.
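
The study–test–support cycle described above can be sketched as a toy loop. The 80% pass mark comes from the text; the fixed score “gain” per round of corrective feedback is my own simplifying assumption, purely for illustration:

```python
PASS_MARK = 80  # the ~80% mastery threshold mentioned above

def study_until_mastery(score, gain=10, max_cycles=10):
    """Toy mastery-learning loop: test, give corrective support, re-test.

    `score` is a percentage test score; `gain` is the assumed boost each
    round of extra teaching/homework gives. Returns test cycles needed.
    """
    for cycle in range(1, max_cycles + 1):
        if score >= PASS_MARK:
            return cycle                   # mastery demonstrated: move on
        score = min(100, score + gain)     # corrective support, then re-test
    return max_cycles                      # time ran out (the open problem!)

print(study_until_mastery(50))  # a weaker student needs several cycles
print(study_until_mastery(90))  # a stronger student passes first time
```

The `max_cycles` cap is where the practical difficulty bites: as the text notes, deciding how long to allow, and what to do when time runs out, is the hard part.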

Why does it work?
Bloom was of the opinion that the results were so strong because the corrective feedback was targeted at the very area the student didn’t understand. The one-to-one also helped because the teacher had time to explain things in a different way and encourage the student to participate in their own learning, which in turn helped with motivation. As you might imagine, mastery is particularly effective where one subject builds on another – for example, introduction to economics is followed by economics in business.

Of course, there are always problems. Students may have mastered something to the desired level but forget what they have learned through lack of use. It’s easy to set a test but relatively difficult to assess mastery – for example, do you have sufficient coverage at the right level, and is 80% the right cut score? And finally, how long should you allow someone to study in order to reach the mastery level, and what happens in practice when time runs out and they don’t?

The Artificial Intelligence (AI) solution
When Bloom set the challenge he was right: it was far too expensive to offer personalised tuition. However, it is almost as if AI was invented to solve the problem. AI can offer an adaptive pathway, tracking the student’s progression and harnessing what it gleans to serve up a learning experience designed specifically for the individual. Add to this instructionally designed online content that can be watched by the student at their own pace until mastery is achieved, and you are getting close to what Bloom envisaged. However, although much of this is technically possible, questions remain. For example, was the improvement in performance the result of the ‘personal relationship’ between teacher and student and the advice given, or the clarity in explaining the topic? Can this really be replicated by a machine?

In the meantime, how does this help?
What Bloom identified was that in most situations it’s not the learner who is at fault but the method of learning or instruction. Be careful, however – this cannot be used as an excuse for lack of effort: “it’s not my fault, it’s because the teacher isn’t doing it right”.

How to use Bloom’s principles:

  • Change the instruction/content – if you are finding a particular topic difficult to understand, ask questions such as: do I need to look at this differently, perhaps by watching a video or studying from another book? Provide yourself with an alternative way of exploring the problem.
  • Mastery of questions – at the end of most textbooks there are a number of different questions. Don’t ignore them: test yourself, and even if you get them wrong, spend some time understanding why before moving on. You might also use the 80% rule – the point being, you don’t need to get everything right.

In conclusion – It’s interesting that in 1984 Bloom identified a solution to a problem we are still struggling to implement. What we can say is that personalisation is now high on the agenda for many organisations because they recognise that one size does not fit all. Although AI provides a glimmer of hope, for now at least Bloom’s two-sigma problem remains unsolved.

Listen to Sal Khan on TED – Let’s teach for mastery, not test scores

Artificial Intelligence in education (AIEd)


The original Blade Runner was released in 1982. It depicts a future in which synthetic humans known as replicants are bioengineered by a powerful Corporation to work on off-world colonies. The final scene stands out because of the “tears in rain” speech given by Roy, the dying replicant.

I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.

This was the moment in which the artificial human had begun to think for himself. But what makes this so relevant is that the film predicts what life will be like in 2019. And with 2018 only a few days away, 2019 is no longer science fiction – and neither is Artificial Intelligence (AI).

Artificial Intelligence and machine learning

There is no single agreed-upon definition of AI. “Machine learning”, on the other hand, is a field of computer science that enables computers to learn without being explicitly programmed. It does this by analysing large amounts of data in order to make accurate predictions – regression analysis, for example, does something very similar when using data to produce a line of best fit.
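
To make the line-of-best-fit comparison concrete, here is a minimal least-squares fit in Python – the same “learn from data, then predict” pattern that machine learning scales up, shown here at toy scale with made-up numbers:

```python
# Least-squares line of best fit from (x, y) data, stdlib only
xs = [1, 2, 3, 4, 5]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]  # roughly y = 2x, with a little noise

n = len(xs)
mean_x, mean_y = sum(xs) / n, sum(ys) / n

# Classic least-squares formulas for slope and intercept
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
intercept = mean_y - slope * mean_x

def predict(x):
    """The 'prediction' step: apply what was learned from the data."""
    return slope * x + intercept

print(round(slope, 2), round(intercept, 2))  # close to 2 and 0, as expected
```

The fitting step “learns” the slope and intercept from the data; `predict` then generalises to values it has never seen – which, in miniature, is what the machine-learning definition above describes.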

The problem with the term artificial intelligence is the word intelligence; defining this is key. If intelligence is the ability to learn, understand, and make judgements or form opinions based on reason, then you can see how difficult deciding whether a computer has intelligence might be. So, for the time being, think of it like this:

AI is the intelligence; machine learning is the enabler making the machine smarter i.e. it helps the computer behave as if it is making intelligent decisions.

AI in education

As with many industries, AI is already having an impact in education, but given the right amount of investment it could do much more. For example:

Teaching – Freeing teachers from routine and time-consuming tasks like marking and basic content delivery. This will give them time to develop greater class engagement, address behavioural issues and support higher-level skill development – skills far more valued by employers as industries become less reliant on knowledge alone and more dependent on those who can apply it to solve real-world problems. In some ways, AI could be thought of as a technological teaching assistant. In addition, AI will greatly improve both the quality and quantity of feedback available to the teacher, making it far more detailed and personalised.

Learning – Personalised learning can become a reality by using AI to deliver a truly adaptive experience. AI will be able to present the student with a personalised pathway based on data gathered from their past activities and those of other students. It can scaffold the learning, allowing students to make just enough mistakes to gain a better understanding. AI is also an incredibly patient teacher, helping the student learn through constant repetition and trial and error.

Assessment and feedback – Feedback can also become rich, personalised and, most importantly, timely, offering commentary on what the individual student should do to improve, rather than the bland comments often left on scripts, e.g. “see model answer” and “must try harder.” Although some teachers will almost certainly mark “better” than an AI-driven system would be capable of, the consistency of marking for ALL students would be considerably improved.

Chatbots are a relatively new development that uses AI. In the autumn of 2015, Professor Ashok Goel built an AI teaching assistant called Jill Watson using IBM’s Watson platform. Jill was developed specifically to handle the high number of forum posts – over 10,000 – by students enrolled on an online course. The students were unable to tell the difference between Jill and a “real” teacher. Watch and listen to Professor Goel talk about how Jill Watson was built.

Pearson has produced an excellent report on AIEd – click to download.

Back on earth

AI still has some way to go, and as with many technologies, although there is much talk, getting it into the mainstream takes time and, most importantly, money. Although investors will happily finance driverless cars, they are less likely to do the same to improve education.

The good news is that Los Angeles is still more like La La Land than the dystopian vision created by Ridley Scott, and although we have embraced many new technologies, we have avoided many of the pitfalls predicted by the sci-fi writers of the past – so far, at least.

But we have to be careful. Watch this: it’s a robot named “Sophia”, developed by AI specialist David Hanson, which has made history by becoming the first robot ever to be granted full Saudi Arabian citizenship. Honestly…