Lessons from Khanmigo – Learning remains stubbornly human

Sal Khan didn’t start Khan Academy in 2004 with a grand vision to disrupt global education. His objective was to help his cousin, Nadia, with her maths homework. He wasn’t trying to change the world, just to be a good tutor. Despite these humble beginnings, Khan Academy has become a huge success.


Khan Academy
Set up in 2005, Khan Academy’s mission is to provide a free, world-class education to anyone, anywhere. It employs around 350 people and supports 40 to 50 million students each month. A not-for-profit, it relies largely on donations (notably from the Gates Foundation) to cover its $120 million to $170 million operating costs.

It is built on the idea that students learn best when studying at their own pace, able to revisit topics as many times as they like to clarify understanding. Support is available in the form of instant feedback, tips and hints, and progress checking. The style of instruction is short (chunked) videos, often just a coloured pen writing on a black screen, followed by practice questions and step-by-step answers. In terms of methodology, it takes a flipped classroom approach: instead of introducing new content in class, students watch videos or complete the initial learning at home. Class time can then be used for practice, discussion, and problem-solving, allowing the teacher to focus on addressing misunderstandings and supporting individuals. It also incorporates mastery learning, where students are expected to fully understand a concept before moving on, rather than progressing regardless of whether they’ve understood it.

Then in November 2022 the world changed: OpenAI launched ChatGPT, and within two months it had 100 million monthly active users. Sal Khan actually received a personal email from OpenAI’s leadership just prior to the launch, asking him to test the model. This was pretty special: a major new technology being given first to educators. Duolingo was also an early adopter. Educators offered something other businesses couldn’t: the opportunity to find out whether AI could actually teach, rather than simply automate processes in the name of efficiency.

Khanmigo: Born 2023, Died 2026
On the 14th of March 2023, in partnership with OpenAI, Khanmigo was launched. But this was no AI chatbot simply offering answers to questions. Its core philosophy was to act as a virtual Socrates, asking questions such as: “What do you know about the subject already?” This was designed to reduce cognitive load, scaffold learning and create the right amount of desirable difficulty, all essential components of good learning. For teachers, it saved time by producing lesson plans, writing questions and tracking student progress.

So, what went wrong? Sal Khan really believed that AI, and in particular Khanmigo, would change the future of learning. In a widely viewed TED Talk in 2023, he declared, “We’re at the cusp of using AI for probably the biggest positive transformation that education has ever seen.”

However, in April this year (2026) even Khan had to admit: “for a lot of students, it was a non-event. They just didn’t use it much.” While he remains optimistic about the many applications of AI in education, he has also come to see its limits.

“I just view it as part of the solution; I don’t view it as the end-all and be-all.” – Chalkbeat.

Part of the problem has been attributed to the students themselves, with Khan Academy’s Chief Learning Officer, Kristen DiCerbo, saying that students aren’t great at asking questions. You can give them access to the world’s best AI tutor, but if they don’t know what they don’t understand, they won’t ask. And if they don’t ask, Khanmigo simply doesn’t work.

Lessons learned
Sal Khan has come in for a fair amount of criticism, partly due to his initial claims about Khanmigo being the biggest positive transformation that education has ever seen, and then having to admit that it wasn’t. But isn’t that precisely what innovation looks like? It takes a rare kind of conviction to believe deeply enough in an idea to pursue it, and it takes genuine integrity to stand up and admit you were wrong. Sal Khan didn’t just theorise about improving education, he tried to do something to make it better.

But all is not lost: there are some really important lessons to be drawn from the Khanmigo experiment, not least about how AI technology actually fits into the practice of learning, and what impact it has on students when put to the test in real classrooms.

  • Students weren’t ready for Socrates – Khanmigo’s Socratic design required students to already possess some basic knowledge. When you don’t have the conceptual scaffolding to understand what you’re confused about, you can’t ask useful questions. This led some students to simply paste the same question into another AI platform to get the answer, and as a result they learned nothing.
  • It favoured good students – Those with strong metacognitive skills (the ability to notice, monitor, and manage your own thinking) did well, but the others struggled, leading to frustration and reduced engagement: clicking a few times before giving up. Sal Khan compared it to a shy student who won’t raise their hand.
  • Engagement was passive – Students didn’t actively engage in conversation; when asked a question by Khanmigo they often replied “IDK” (I don’t know) rather than thinking about the question, and usage fell below expectations. Engagement was initially high, but this was put down to novelty, and over time it simply fell away.
  • Neither inspiring nor motivational – Although it was designed to be encouraging, you can only say “good attempt” so often before it has little effect. Students work hard for people: for a teacher who believes in them, a parent who asks how they are feeling, or a peer who can share their experience. Khanmigo removed all of that social texture. It is infinitely patient and never disappointed, but there’s no one to let down.
  • Extrinsic motivation is underrated – Education theory has long championed intrinsic motivation, the idea that students should want to learn for its own sake. Khanmigo was built on that assumption. But for most students, extrinsic factors really matter: grades, approval, peer comparison. Strip those away and many students simply don’t engage.
  • There was no relationship – Khanmigo could not replicate the sensitive, personal connection a human tutor provides, making it less effective, especially for struggling learners. A teacher knows when a student is distracted because of something that happened at home, or that an individual panics when faced with a test. The AI only “sees” the text.

Conclusion
Personally, I am a fan of Sal Khan, and see this as a huge educational experiment that simply didn’t work in the way it was intended. But “didn’t work as intended” is not the same as “wasted.” What Khanmigo revealed, perhaps more clearly than any research paper could, is that learning is stubbornly, irreducibly human. It requires relationship, stakes, and social texture. These are not new ideas; educators have argued this for decades, but Khanmigo gave us a live, large-scale demonstration of what happens when we design as if those things don’t matter.

For those of us involved in teaching and course design, the lessons are important. We cannot assume metacognitive skill. Engagement needs scaffolding, not just encouragement. Motivation is more complex than theory suggests. And the relationship between teacher and learner is not a nice-to-have; it’s critical.

Technology works best when it supports the human rather than replaces them. Sal Khan himself has now arrived at this conclusion. The experiment is not over. But the next iteration will be better because of what this one taught us, and that, in the end, is exactly how learning is supposed to work.

Bloom’s 1984 – Getting an A instead of a C

When people see the year 1984, most think of George Orwell’s book about a dystopian future, but a few other things happened that year. Dynasty and Dallas were the most popular TV programmes, and one of my favourite movies, Amadeus, won best picture at the Oscars. You can be excused for missing the publication of what has become known as the two-sigma problem by Benjamin Bloom, of Bloom’s taxonomy fame. He provided the answer to a question that both teachers and students have been asking for some time: how can you significantly improve student performance?

One of the reasons this is still being talked about nearly 40 years later is because Bloom demonstrated that most students have the potential to achieve mastery of a given topic. The implication is that it’s the teaching that’s at fault rather than the students’ inherent lack of ability. It’s worth adding that this might equally apply to the method of learning: it’s not you, but the way you’re studying.

The two-sigma problem
Two of Bloom’s doctoral students (J. Anania and A.J. Burke) compared how people learned in three different situations:

  1. A conventional lecture with 30 students and one teacher. The students listened to the lectures and were periodically tested on the material.
  2. Mastery learning – the conventional lecture with the same testing, however students were given formative-style feedback and guidance, effectively correcting misunderstandings before re-testing to find out the extent of their mastery.
  3. Tutoring – this was the same as for mastery learning but with one teacher per student.

The results were significant: mastery learning increased student performance by approximately one standard deviation (sigma), the equivalent of moving up a grade from a B to an A. However, when combined with one-to-one teaching, performance improved by two standard deviations, the equivalent of moving from a C to an A. Interestingly, the amount of correction students’ work needed was relatively small.
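To see what one and two standard deviations mean in practice, here is a quick sketch, assuming exam scores are roughly normally distributed, that converts the gains into percentiles using Python’s standard library:

```python
from statistics import NormalDist

# Standard normal distribution: mean 0, standard deviation 1.
standard_normal = NormalDist()

# A student at the class average (50th percentile) who improves by one or
# two standard deviations moves to a much higher percentile.
one_sigma = standard_normal.cdf(1.0) * 100   # mastery learning: +1 sigma
two_sigma = standard_normal.cdf(2.0) * 100   # mastery + one-to-one tutoring: +2 sigma

print(f"+1 sigma: {one_sigma:.0f}th percentile")  # roughly the 84th
print(f"+2 sigma: {two_sigma:.0f}th percentile")  # roughly the 98th
```

In other words, an average student who receives mastery learning plus one-to-one tutoring ends up outperforming about 98% of a conventionally taught class, which is exactly the C-to-A jump Bloom reported.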

Bloom then set up the challenge that became known as the two-sigma problem.

“Can researchers and teachers devise teaching/learning conditions that will enable the majority of students under group instruction to attain levels of achievement that can at present be reached only under good tutoring conditions?”

In other words, how can you do this in the “real world”, at scale, where it’s not possible to provide this type of formative feedback and one-to-one tuition because it would be too expensive?

Mastery learning – To answer this question you probably need to understand a little more about mastery learning. Firstly, content has to be broken down into small chunks, each with a specific learning outcome; the process is very similar to direct instruction, which I have written about before. The next stage is important: learners have to demonstrate mastery of each chunk of content, normally by passing a test with a score of around 80%, before moving on to new material. If not, the student is given extra support, perhaps in the form of additional teaching or homework. Learners then continue the cycle of studying and testing until the mastery criteria are met.
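The cycle above can be sketched as a simple loop. This is only an illustration: `score_test` and `remediate` are hypothetical placeholders for whatever assessment and extra support a real course would provide.

```python
MASTERY_THRESHOLD = 0.8  # the ~80% cut score mentioned above

def learn_to_mastery(chunks, score_test, remediate, max_attempts=5):
    """Work through chunks in order; only advance once a chunk is mastered."""
    for chunk in chunks:
        for attempt in range(max_attempts):
            score = score_test(chunk)
            if score >= MASTERY_THRESHOLD:
                break  # mastery demonstrated: move on to the next chunk
            remediate(chunk)  # extra teaching or homework, then re-test
        else:
            # Attempts ran out without mastery: one of the open questions
            # Bloom's model leaves unresolved (how long do you allow?).
            return chunk  # report where the learner got stuck
    return None  # all chunks mastered
```

Note that the time spent is the variable here and the standard of understanding is fixed, the reverse of a conventional course, where the timetable is fixed and understanding varies.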

Why does it work?
Bloom was of the opinion that the results were so strong because the corrective feedback was targeted at the very area the student didn’t understand. The one-to-one teaching also helped because the teacher had time to explain in a different way and to encourage the student to participate in their own learning, which in turn helped with motivation. As you might imagine, mastery learning is particularly effective where one subject builds on another; for example, introduction to economics is followed by economics in business.

Of course, there are always problems. Students may have mastered something to the desired level but forget what they have learned through lack of use. It’s easy to set a test but relatively difficult to assess mastery: do you have sufficient coverage at the right level, and is 80% the right cut score? And finally, how long should you allow someone to study in order to reach the mastery level, and what happens in practice when time runs out and they don’t?

The Artificial Intelligence (AI) solution
When Bloom set the challenge, he was right: it was far too expensive to offer personalised tuition. However, it is almost as if AI was invented to solve the problem. AI can offer an adaptive pathway, tracking the student’s progression and harnessing what it gleans to serve up a learning experience designed specifically for the individual. Add instructionally designed online content that the student can watch at their own pace until mastery is achieved, and you are getting close to what Bloom envisaged. However, although much of this is technically possible, questions remain. For example, was the improvement in performance the result of the “personal relationship” between teacher and student and the advice given, or the clarity in explaining the topic? Can this really be replicated by a machine?

In the meantime, how does this help?
What Bloom identified was that in most situations it’s not the learner who is at fault but the method of learning or instruction. Be careful, however: this cannot be used as an excuse for lack of effort – “it’s not my fault, it’s because the teacher isn’t doing it right”.

How to use Bloom’s principles

  • Change the instruction/content – if you are finding a particular topic difficult to understand, ask questions such as: do I need to look at this differently, maybe by watching a video or studying from another book? Provide yourself with an alternative way of exploring the problem.
  • Mastery of questions – at the end of most textbooks there are a number of different questions. Don’t ignore them: test yourself, and even if you get them wrong, spend some time understanding why before moving on. You might also use the 80% rule, the point being that you don’t need to get everything right.

In conclusion – It’s interesting that in 1984 Bloom came up with a solution to a problem we are still struggling to implement. What we can say is that personalisation is now high on the agenda for many organisations because they recognise that one size does not fit all. Although AI provides a glimmer of hope, for now at least Bloom’s two-sigma problem remains unsolved.

Listen to Sal Khan on TED – Let’s teach for mastery, not test scores

Artificial Intelligence in education (AIEd)


The original Blade Runner was released in 1982. It depicts a future in which synthetic humans known as replicants are bioengineered by a powerful corporation to work on off-world colonies. The final scene stands out because of the “tears in rain” speech given by Roy, the dying replicant.

“I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.”

This was the moment in which the artificial human had begun to think for himself. But what makes it so relevant is that the film predicts what life will be like in 2019. And with 2018 only a few days away, 2019 is no longer science fiction, and neither is Artificial Intelligence (AI).

Artificial Intelligence and machine learning

There is no single agreed-upon definition of AI. “Machine learning”, on the other hand, is a field of computer science that enables computers to learn without being explicitly programmed. It does this by analysing large amounts of data in order to make accurate predictions; regression analysis does something very similar when it uses data to produce a line of best fit.
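To make the regression analogy concrete, here is a minimal least-squares line of best fit; the hours-studied-versus-score data is made up purely for illustration:

```python
def fit_line(xs, ys):
    """Ordinary least squares: slope and intercept minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Hypothetical data: hours studied vs test score.
hours = [1, 2, 3, 4, 5]
scores = [52, 58, 65, 71, 79]

slope, intercept = fit_line(hours, scores)

def predict(x):
    # "Learning" from the data means we can now predict unseen cases.
    return slope * x + intercept

print(round(predict(6)))  # predicted score after 6 hours of study
```

The machine has not been told any rule about studying; it has inferred one from examples, which is the essential idea behind machine learning, just scaled up enormously in the systems discussed below.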

The problem with the term artificial intelligence is the word intelligence; defining this is key. If intelligence is “the ability to learn, understand, and make judgments or have opinions based on reason”, then you can see how difficult deciding whether a computer has intelligence might be. So, for the time being, think of it like this:

AI is the intelligence; machine learning is the enabler that makes the machine smarter, i.e. it helps the computer behave as if it is making intelligent decisions.

AI in education

As in many industries, AI is already having an impact in education, but given the right amount of investment it could do much more. For example:

Teaching – Freeing teachers from routine and time-consuming tasks like marking and basic content delivery. This will give them time to develop greater class engagement, address behavioural issues and support higher-level skill development, all of which are far more valued by employers as industries become less reliant on knowledge and more dependent on those who can apply it to solve real-world problems. In some ways AI could be thought of as a technological teaching assistant. In addition, the feedback available to the teacher will not only be greatly improved with AI but will be far more detailed and personalised.

Learning – Personalised learning can become a reality by using AI to deliver a truly adaptive experience. AI will be able to present the student with a personalised pathway based on data gathered from their past activities and those of other students. It can scaffold the learning, allowing students to make just enough mistakes to gain a better understanding. AI is also an incredibly patient teacher, helping the student learn through constant repetition, trial and error.

Assessment and feedback – Feedback can also become rich, personalised and, most importantly, timely, offering commentary on what the individual student should do to improve rather than the bland comments often left on scripts, e.g. “see model answer” and “must try harder.” Although some teachers will almost certainly mark “better” than an AI-driven system could, the consistency of marking for ALL students would be considerably improved.

Chatbots are a relatively new development that use AI. In the autumn of 2015, Professor Ashok Goel built an AI teaching assistant called Jill Watson using IBM’s Watson platform. Jill was developed specifically to handle the high number of forum posts (over 10,000) from students enrolled on an online course. The students were unable to tell the difference between Jill and a “real” teacher. Watch and listen to Professor Goel talk about how Jill Watson was built.

Pearson has produced an excellent report on AIEd – click to download.

Back on earth

AI still has some way to go, and as with many technologies, although there is much talk, getting it into the mainstream takes time and, most importantly, money. Although investors will happily finance driverless cars, they are less likely to do the same to improve education.

The good news is that Los Angeles is still more like La La Land than the dystopian vision created by Ridley Scott, and although we have embraced many new technologies, we have avoided many of the pitfalls predicted by the sci-fi writers of the past, so far at least.

But we have to be careful. Watch this: it’s a robot named “Sophia”, developed by AI specialist David Hanson, which has made history by becoming the first robot ever to be granted full Saudi Arabian citizenship. Honestly…..