Case study – Omelettes and Cognitivism


If you have actually got as far as reading this first paragraph, there must have been something in the title that caught your attention. Perhaps you were simply curious as to how these three words are connected, or maybe one of the words relates to something you are interested in.

Whatever the reason, you have begun to process information and so engage in cognition; put more simply, you have started to think.

Making an omelette

But first a question: take a moment and think about how you make an omelette… Then, in your own words, explain how you would do it… As you might imagine, this is not about the omelette but the process you went through in order to answer the question.

The process – There was clearly an element of memory and recall as you thought back to the last time you made an omelette. You would also have needed to direct your attention to the event itself and use strong visualisation skills to see yourself actually whisking the egg, adding the salt and pepper, etc. However, so sophisticated is the human mind that you can create images of making an omelette based only on your knowledge of scrambling an egg! The point being, you have the ability to visualise activities of which you have little or no experience. The mental processes outlined above go some way to explaining cognitivism. Cognitivism in learning is the study of how information is received, directed, organised, stored and perceived in order to facilitate better learning. Cognitivists believe that mental processes should be studied in order to develop better theories as to how people learn.

Case studies are higher level

As you progress up the exam ladder, the style of examination question changes. It starts with relatively simple activities that require you to recall something already taught, e.g. what is the capital of France? It then moves to questions that test understanding, e.g. explain why Paris is the capital of France. At higher levels you will ultimately come across Application, Analysis and Evaluation, and it is these higher-level skills that case studies often require you to master.

I have written about case studies before, firstly in Putting the context into case study and secondly in Passing case studies by thinking in words. Here I want to explore how, by understanding how people think (cognitivism), you can develop strategies to help you answer what seem to be impossible questions.

Application of knowledge

Imagine you have been given a case study that has a large amount of information about the company, the people and the financial position. You have been asked to offer advice as to how the company should improve its internal controls within the HR department. Even though you may not think you know the answer, the process outlined above gives you a framework to follow.

  • Firstly, focus your attention on the key words – internal controls and HR department
  • Secondly, recall any information you have about internal controls and HR departments
  • Thirdly, deploy strong visualisation skills, seeing yourself in that company, bringing in as much detail as possible to give context, and then use common sense
  • Finally, write out your answer – say what you see, talk through how you would do it, mention some of the problems you might experience and outline the possible solutions

These are cognitive strategies developed from learning more about how people think – why not give them a go?

And here is how to make an omelette from my favourite instructor, Delia – yet another practical tip; remember, last month it was how to make toast.


Learning unleashed – Micro learning


As with many other types of learning, micro learning is difficult to define. At its simplest it can be thought of as small chunks of untethered content that can be consumed in about 5 minutes, 8 minutes tops. Video is possibly the best example – watch this micro learning chunk on how to boil an egg – but it can come in other forms, for example quizzes, flashcards, infographics etc.

Each chunk of micro learning should be capable of being consumed independently but can form part of a larger topic. For example, if you watch the video on how to boil an egg, that could be part of a series of micro lessons, including how to scramble an egg, how to poach an egg – you get the idea. The video might also be interactive and include questions at the end to check that you were paying attention. When fully formed, it's a complete course, with its own learning objective, content, examples and an assessment. And that is its real value from the perspective of a student: they are getting a well-designed chunk of learning available when it is most needed – it's learning at the point of need.
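To make the idea concrete, here is a minimal sketch of how a single chunk might be modelled as data – the fields, the example lesson and the placeholder URL are all invented for illustration, not taken from any real micro learning platform.

```python
# A purely illustrative model of a micro learning chunk: each chunk stands
# alone but can belong to a larger series, and carries its own objective,
# content and assessment.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class MicroLesson:
    title: str
    objective: str                 # the single learning objective of the chunk
    duration_minutes: int          # about 5 minutes, 8 minutes tops
    content_url: str               # video, quiz, flashcards, infographic, etc.
    quiz: List[str] = field(default_factory=list)  # optional end-of-chunk check
    series: Optional[str] = None   # the larger topic it can form part of

boil = MicroLesson(
    title="How to boil an egg",
    objective="Boil an egg to a consistent soft or hard set",
    duration_minutes=3,
    content_url="https://example.com/boil-an-egg",  # placeholder link
    quiz=["How many minutes for a soft-boiled egg?"],
    series="Egg basics",
)
assert boil.duration_minutes <= 8  # the "8 minutes tops" rule of thumb
```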

Growing in popularity

Organisations are finding that micro learning is popular not just with the "attention short" millennials but with all ages. One reason for this is that it's how we like to learn – being presented with information in relatively short bursts. Falling attention spans are often quoted as a justification for micro learning – apparently the average was 12 seconds and is now only 8 – but there is little real evidence that this is true. The original research, which was attributed to Microsoft, is in fact from another organisation and is not easily confirmed.

But if we think of it less in biological terms and more in behavioural ones, there is merit. It's not so much that attention spans are changing; it's that we now live our lives at an ever-increasing pace, and so want information and learning to move just as fast. Micro learning also needs to be accessible; in practical terms this means it should work on a mobile device, most likely a smartphone. And because we always have our phone with us, it's always available. This might be when you have some free time – on a train, travelling to and from work perhaps – or when faced with a problem that requires a skill you don't have. For example, that boiled egg now needs to be placed on the best toast in the world, but how do you make the best toast? If only there were a short 3-minute video you could watch. But from a learning perspective micro learning has one other big advantage. When you are trying to understand something, you are at your most curious, and if that curiosity can be satisfied before the moment passes, learning will take place more easily.

Micro learning is informal, meaning it is not a structured A to B, B to C process led by a teacher; it's student led, requiring the individual to pick the next step in the journey. This can of course be time consuming as the student wanders around, following their instincts as to what is important rather than taking direction from an expert. But if the student has a clear understanding of where they are going and a time constraint, it can be an excellent self-managed learning experience.

Micro learning is distilled wisdom

As the saying goes, "I didn't have time to write a short letter, so I wrote a long one instead" – a remark usually attributed to Mark Twain, though it likely originates with Blaise Pascal. Micro learning is not created by taking existing content and cutting it into smaller chunks. It requires you to revisit exactly what it is that needs to be learned, remove everything that is not essential to achieving that objective, and then offer up that content in a short, easily understood chunk. This will need the help of an individual with a high level of subject expertise and significant experience. It will also, as the quote so succinctly identifies, take far longer than you might at first have thought.

Here are some great examples of micro learning; they won't take you very long to watch – after all, it's micro learning.

  • This is a gamified micro course that trains people to make a Domino’s pizza – click.
  • A free, gamified language app that uses short lessons to help you learn almost any language – click.
  • And lastly, not all micro learning is in a video format – here is an infographic that summarises the key features of micro learning – click.
  • Oh and just in case – how to make toast! – click.

Plastic fantastic – how the brain grows

A major new idea was presented to the world in 1991. To many it will mean very little, but in terms of improving our understanding of the brain it was a milestone.

Functional magnetic resonance imaging (fMRI) has its roots in the earlier MRI, but instead of creating images of organs and tissues, fMRI looks at blood flow in the brain to detect areas of activity and so show how the brain works in real time.

The implications of this for learning are significant because, for the first time, we were able to identify which parts of the brain were reacting when different tasks were being performed. For example, we know that the cerebrum, the largest part of the brain, performs higher functions such as interpreting touch, vision, hearing, speech and emotions.

Brain plasticity

But it is the next discovery that is far more interesting from a learning perspective. For many years the common belief was that brain functionality (intelligence) was to a certain extent hard wired – largely genetic, with a fixed number of neurons. It probably didn't help that the computer gave us a misleading analogy for how the brain worked.

That all changed when it became possible to observe the brain and watch how it responded to what it saw and was asked to do. What this showed was that the brain has the ability to generate new cells, a process called neurogenesis.

Click here to listen to neuroscientist Sandrine Thuret explain how humans can generate new brain cells, i.e. neurogenesis.

This may make sense for children: given the basic brain functionality a child is born with, something must be happening to turn them into caring and thoughtful adults. In fact, by adolescence the brain has produced so many synapses – the connections between cells – that they have to be cut back, or pruned; hence the term synaptic pruning. What was perhaps more of a surprise was that growing new brain cells was not just something children could do; adults were able to do it as well.

The classic example is the evidence from Professor Eleanor Maguire of the Wellcome Trust Centre and her colleague Dr Katherine Woollett, who followed a group of 79 trainee taxi drivers and 31 controls (non-taxi drivers). Their research showed that London taxi drivers developed a greater volume of grey matter, i.e. cell development, three to four years after passing "the Knowledge", when compared to the control group.

Learning about learning

This may leave you thinking: all very interesting, but what does it mean for me as a student?

In the same way that people can develop a growth mindset, bringing it within your control, you can do the same with your academic performance. Just because you don't understand something or pick it up very quickly doesn't mean that you won't be able to. This is not to say that some people are not "brighter" than others – it is estimated that around 50% to 60% of your intelligence is genetic – but that estimate rests on the assumption that your brain cannot change, and what this research shows is that it can.

And here is one last interesting observation: knowing how the brain works can actually help rewire it. There is evidence that students who know more about how they learn (metacognition) will naturally reflect on what they are doing when they are learning, which in turn will help grow new cells. How good is that?

Artificial Intelligence in education (AIEd)


The original Blade Runner was released in 1982. It depicts a future in which synthetic humans known as replicants are bioengineered by a powerful corporation to work on off-world colonies. The final scene stands out because of the "tears in rain" speech given by Roy, the dying replicant.

I’ve seen things you people wouldn’t believe. Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain. Time to die.

This was the moment in which the artificial human had begun to think for himself. But what makes this so relevant is that the film is predicting what life will be like in 2019. And with 2018 only a few days away, 2019 is no longer science fiction, and neither is Artificial Intelligence (AI).

Artificial Intelligence and machine learning

There is no single agreed-upon definition of AI. "Machine learning", on the other hand, is a field of computer science that enables computers to learn without being explicitly programmed. It does this by analysing large amounts of data in order to make accurate predictions – regression analysis, for example, does something very similar when using data to produce a line of best fit.
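To make the line-of-best-fit comparison concrete, here is a minimal sketch in Python – the study-hours data is invented purely for illustration.

```python
# Fitting a line of best fit with least squares: the simplest case of
# "learning" a predictive rule from data rather than programming it in.
import numpy as np

hours = np.array([1, 2, 3, 4, 5, 6, 7, 8])          # hours studied
score = np.array([52, 55, 61, 64, 70, 73, 78, 84])  # exam score (made up)

# Fit score ≈ slope * hours + intercept (a degree-1 polynomial)
slope, intercept = np.polyfit(hours, score, 1)

# The fitted line can now predict scores for inputs it has never seen
print(f"score ≈ {slope:.2f} * hours + {intercept:.2f}")
print(f"predicted score after 9 hours: {slope * 9 + intercept:.1f}")
```

Machine learning scales this same idea up: more data, more parameters and far more flexible models than a straight line.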

The problem with the term artificial intelligence is the word intelligence – defining this is key. If intelligence is the ability to learn, understand, and make judgments or have opinions based on reason, then you can see how difficult deciding whether a computer has intelligence might be. So, for the time being, think of it like this:

AI is the intelligence; machine learning is the enabler making the machine smarter i.e. it helps the computer behave as if it is making intelligent decisions.

AI in education

As with many industries, AI is already having an impact in education, but given the right amount of investment it could do much more. For example:

Teaching – Freeing teachers from routine and time-consuming tasks like marking and basic content delivery. This will give them time to develop greater class engagement and address behavioural issues and higher-level skill development – skills far more valued by employers as industries become less reliant on knowledge and more dependent on those who can apply it to solve real-world problems. In some ways AI could be thought of as a technological teaching assistant. In addition, the quality and quantity of feedback available to the teacher will not only be greatly improved with AI but will be far more detailed and personalised.

Learning – Personalised learning can become a reality by using AI to deliver a truly adaptive experience. AI will be able to present the student with a personalised pathway based on data gathered from their past activities and those of other students. It can scaffold the learning, allowing students to make just enough mistakes to gain a better understanding. AI is also an incredibly patient teacher, helping the student learn through constant repetition, trial and error.

Assessment and feedback – Feedback can also become rich, personalised and, most importantly, timely, offering commentary on what the individual student should do to improve rather than the bland comments often left on scripts, e.g. "see model answer" and "must try harder." Although some teachers will almost certainly mark "better" than an AI-driven system would be capable of, the consistency of marking for ALL students would be considerably improved.

Chatbots are a relatively new development that use AI. In the autumn of 2015, Professor Ashok Goel built an AI teaching assistant called Jill Watson using IBM's Watson platform. Jill was developed specifically to handle the high number of forum posts – over 10,000 – from students enrolled on an online course. The students were unable to tell the difference between Jill and a "real" teacher. Watch and listen to Professor Goel talk about how Jill Watson was built.

Pearson has produced an excellent report on AIEd – click to download.

Back on earth

AI still has some way to go, and as with many technologies, although there is much talk, getting it into the mainstream takes time and, most importantly, money. Although investors will happily finance driverless cars, they are less likely to do the same to improve education.

The good news is that Los Angeles is still more like La La Land than the dystopian vision created by Ridley Scott, and although we have embraced many new technologies, we have avoided many of the pitfalls predicted by the sci-fi writers of the past – so far at least.

But we have to be careful. Watch this: it's a robot named "Sophia", developed by AI specialist David Hanson, which has made history by becoming the first ever robot to be granted full Saudi Arabian citizenship. Honestly…

 

Concentration – the war in the brain


One of the most important skills in learning is the ability to concentrate. If you could focus your attention on a specific task for long periods of time you would be able to absorb more content, more quickly.

But concentrating is not easy, partly because we lack the ability to manage distraction. I have written before about focus, information overload and the problems with multi-tasking, but this is a large and fascinating subject.

The war in the brain

Improving concentration has a lot to do with attention, which in some ways is an invisible force, but as we have found before, neuroscience can help us gain insight into the previously unknown. For example, most of us will have what is called a priority map – a map of the most visited places in our brain. Its value is that it can be used to identify how we prioritise incoming information and, as such, where we place our attention. It's worth stating that attention is a limited resource, so how we use it is important.

Take this attention test and find out your level of attention.

The problem is that these maps change based on how “relevant” the information is, and relevancy itself is dependent on three systems that continually compete with each other. I know this is getting complicated but stick with it, concentrate!

The executive system – Sitting in the frontal lobe, this is the main system and orients attention according to our current goals. For example, I need to learn about double entry bookkeeping, so I will place my attention on page 4 and start reading.

The reward system – As you might imagine, this is the system that offers us rewards. A reward can be as simple as the dopamine rush you get when checking your mobile phone; the problem is, you should be reading page 4! And it's made worse by the fact that the brain's attention naturally moves to flashing lights, which you often get when a text comes in.

The habit system – This system operates using fixed rules, often built up over time by repetition. Perhaps it's the reason you keep looking at your phone just to check that you haven't had a text, even though you know you haven't because you would have seen the flashing light… But most importantly, the habit of checking, created by you, has once again distracted your attention when you should still be reading page 4!

Hence the term "war in the brain": these systems are in competition for your attention. The result is exhausting – you don't finish reading page 4, and you feel tired even though you have achieved very little.

How to improve concentration  

Some of the methods below will seem obvious and there is of course no magic bullet; however, because there is a scientific reason why these might work, I hope you will be more likely to give them a go.

  1. Reduce distraction – if you have to make a huge amount of effort to check your mobile phone, the reward you get from checking it will diminish. The simple advice is: don't have your phone with you when studying, or anything else that might occupy your thoughts. Also have a quiet space to study, with simple surroundings and nothing interesting that might be a distraction. Finally, although there is mixed evidence on playing music or listening to white noise in the background, it may be worth a try.
  2. Set goals – this is to support your executive system; write down your goals and don't make them too ambitious.
  3. Relax and stay calm – it's hard to concentrate when you are feeling high levels of anxiety. Methods to help with relaxation include deep breathing (click this video, it's very helpful) and of course exercise, which I have written about in the past as a natural antidote to stress.
  4. Avoid too much stimulation – novelty-seeking behaviours, for example playing video games, can become embedded in your reward system. They can make studying appear very dull and unrewarding, especially if you have played a game immediately before getting down to study. Keep it for afterwards, by way of a reward perhaps.

And if you would like to find out more, watch these:

What’s the use of lectures?

The title of this month's blog is not mine but taken from what many would consider a classic book about what can realistically be achieved by someone standing at the front of a classroom or lecture theatre, simply talking. Written some 25 years ago but updated recently, Donald A. Bligh's book takes 346 pages to answer the question: what's the use of lectures?

What makes this book interesting is the amount of research it brings to bear on a topic some consider an art form and so not easily measured or assessed.

With many in higher education questioning what they get for their £9,250 per annum, and contact time being one way of measuring value, it's as important a question as ever.

For clarity, we should define what we mean by lecturing. As ever, Wikipedia can help: a lecture (from the French, meaning 'reading') is an oral presentation intended to deliver information or teach people about a particular subject.

What should happen in a lecture?

If you're a student attending a lecture you would hope to learn something. However, as many of my past blogs have discussed, learning is a complicated process, so we may need to break this question down a little further by asking: what should a lecture actually achieve?

A lecture should….

  • Transmit information
  • Promote thought
  • Maybe change opinion or attitude
  • Inspire and motivate
  • Help you be able to do something i.e. develop a behavioural skill

Well, here is the bad news: according to Mr Bligh, a lecture is only really good for one of the above – transmitting information. And it's not even better than many other methods, e.g. reading; it's simply as effective, but no more.

Promoting thought, changing opinions

Lectures are relatively passive whereas a discussion requires that people listen, translate what is said into their own words, check if it makes sense with what is already understood, construct a sentence in response etc. In effect, a discussion is far more effective than a lecture in developing thought.

In addition, putting the student in a situation where they have to think is important, for example by giving them a problem or asking a question, as is the case when you have to answer a past exam question. A discussion can also help change opinions, especially where you can hear other people's views, often different from your own. It has a longer-term impact when the group comes to a consensus.

Inspiration and motivation

Bligh also argues that on the whole lectures are not an effective means of inspiring or motivating. He suggests that it should certainly be the objective of the lecturer to try; it's just that they rarely succeed. I find myself slightly disagreeing – lecturers can be inspirational – yet maybe this is just my personal bias from having watched Sir Ken Robinson deliver his "Do schools kill creativity?" talk or the last lecture delivered by Randy Pausch.

But perhaps, these are just the exceptions that prove the rule.

Developing skills

And finally, if you want to help people become good at a particular behaviour, you don't tell them how to do it; you get them to practise, over and over again, with good feedback.

The end of the lecture?

I don't think this is the end of the lecture – these criticisms have been around for many years. But I can't help thinking that, with new technologies and online learning, lectures are going to have to get a whole lot better in the future.

And what will Universities point to as value for money then?

 

 

The 5 top EdTech trends – summer of 2017

Glastonbury – a marginally more interesting gathering… but only just.

We are in the season when many learning and technology leaders gather to discuss what's new and what's trending in the world of education, and at two recent conferences, Learning Technologies and EdTechXEurope, there was plenty to see. Generally, the role of technology in learning seems to have found its place, with many acknowledging it should support learning, not drive it. However, it's still very easy to look at the latest shiny new offerings and think "this is great, how can I use it?" rather than "what learning problem does it solve?"

Here are a few of the most notable developments.

1. Video is getting even better – fuelled by the YouTube generation of learners, those who would rather watch a video than read a book as a means of consuming knowledge, we have some new developments.

Firstly, using video to deliver micro learning. Not just small chunks of video but untethered, just-in-time, 3-minute courses that offer the learner digestible, easy-to-remember information. Think of micro learning as a series of very short courses that may or may not be linked to each other, and can even include assessment.

Secondly, interactive video. TV is no longer the all-commanding medium it once was; like other technologies, it has had to evolve. In recent years the shift has been towards better engagement: spin-off programmes with a live audience, websites that showcase the backstory of the characters, and programmes that require the audience to vote and so influence events. Now we have interactive video, where the individual can choose what they would do and so change the future. Check out this amazing example, used by Deloitte to attract new talent.

2. Gamification is becoming better understood. For the uninitiated, gamification is the use of game-based principles to improve motivation, concentration and the effectiveness of learning. Gamification uses Points (P) as a measure of reward, Badges (B) as a visual record of success, and Leaderboards (L) to create competition.

We now believe dopamine, the pleasure-inducing neurotransmitter (chemical), is not released as a result of a reward, e.g. being given a badge; it is the challenge and subsequent achievement that release the dopamine, which in turn leads to pleasure. This might seem obvious with hindsight – no one gets pleasure from being top of a leaderboard if they did nothing to get there. In addition, dopamine is released when you have a new experience, so think about changing pathways and setting different questions and tasks; it's certainly not very motivating to go over the same content again.
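For the technically curious, here is a minimal sketch of the points-badges-leaderboard mechanics described above – the point thresholds, badge names and students are all invented for illustration, not taken from any real platform.

```python
# A toy points-badges-leaderboard (PBL) tracker: points reward completed
# challenges, badges record milestones, and the leaderboard creates competition.
from collections import defaultdict

BADGE_THRESHOLDS = {100: "Bronze", 250: "Silver", 500: "Gold"}  # hypothetical

points = defaultdict(int)
badges = defaultdict(list)

def award(student: str, task_points: int) -> None:
    """Add points for a completed challenge and grant any badges earned."""
    points[student] += task_points
    for threshold, badge in BADGE_THRESHOLDS.items():
        if points[student] >= threshold and badge not in badges[student]:
            badges[student].append(badge)

def leaderboard() -> list:
    """Rank students by points, highest first."""
    return sorted(points.items(), key=lambda kv: kv[1], reverse=True)

award("Asha", 120)
award("Ben", 80)
print(leaderboard())   # [('Asha', 120), ('Ben', 80)]
print(badges["Asha"])  # ['Bronze']
```

Note that the badge is only granted once the points have actually been earned – consistent with the point above that it is the achievement, not the trinket, that matters.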

3. Information overload is leading to a need for knowledge curation – we are living in an age where information is abundant; you can learn almost anything from the internet. But therein lies the problem: we have too much information, and we suffer from information overload. Curation is the collecting and sorting of meaningful content around a theme, and in some instances it is now thought of as more valuable than the content itself.

Arguably, curation is not so much about what you curate and share as about what you don't share. In addition to organising content, curators need expertise in the subject and an understanding of their audience and what they want.

Steven Rosenbaum in his book Curation Nation, offers up a good summary. “Curation replaces noise with clarity. And it’s the clarity of your choosing; it’s the things that people you trust help you find.”

4. The market is becoming more accepting of user generated content (UGC) – organisations are beginning to see the benefits of UGC for a whole host of reasons. It's a very fast way of generating content; there is a lot of expertise that can be uncovered by allowing individuals to share what they know; it's often user friendly; and, importantly, it's cheap. It is of course not perfect, and there are concerns about quality, but by allowing users to rate the content, the quality might just look after itself.

5. Virtual reality (VR), augmented reality (AR) and artificial intelligence (AI) – not that these are all related, but this is a simple way of summarising three areas to keep an eye on in the not-too-distant future. All of these technologies are becoming cheaper, largely because of the investment made and experience gained in the gaming industry.

By way of a footnote, Google has released an open-source software library called TensorFlow, which can help with machine learning – something they believe will help drive new initiatives in AI.
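For the curious, here is a minimal sketch of what "learning without being explicitly programmed" looks like in TensorFlow – it assumes TensorFlow 2.x and its Keras API (which postdate this post) and uses invented data, so treat it as illustrative rather than definitive.

```python
# A tiny TensorFlow model that learns the rule y = 2x - 1 from six example
# pairs, rather than having the rule programmed in. Data is made up.
import numpy as np
import tensorflow as tf

xs = np.array([-1.0, 0.0, 1.0, 2.0, 3.0, 4.0], dtype=float)
ys = np.array([-3.0, -1.0, 1.0, 3.0, 5.0, 7.0], dtype=float)

# A single dense neuron: effectively learning a slope and an intercept
model = tf.keras.Sequential([
    tf.keras.Input(shape=(1,)),
    tf.keras.layers.Dense(units=1),
])
model.compile(optimizer="sgd", loss="mean_squared_error")
model.fit(xs, ys, epochs=200, verbose=0)

# Should print a value close to 19 (i.e. 2 * 10 - 1)
print(model.predict(np.array([[10.0]]), verbose=0))
```

After a couple of hundred passes over the data, the model has inferred the underlying rule well enough to predict a value close to 19 for an input of 10 – the same line-of-best-fit idea met earlier, dressed up as a neural network.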