Teaching to the test – another point of view

A Point of View is a programme on Radio 4 that allows certain well-read, highly educated individuals, usually with large vocabularies, to express an opinion. It lasts 10 minutes and is often thought-provoking, concluding with a rhetorical question that has no answer.

This week Will Self, the novelist and journalist, gave his point of view on teaching to the test; as you might imagine, it caught my attention. Self starts by telling a story about the life of a “good student” and how it would unfold. He describes the way in which their concentration intensifies when the teacher states that what they are going to learn next is important and often examined. The story continues: as a result of their diligence and technique, the “good student” gets the necessary grades to go to university. They don’t, however, select the university on the basis of the course of study or what they passionately wish to learn; no, it’s based on the university’s credibility in league tables.

Upon successfully gaining a degree, the student, now an employee, gets a desk job that rewards a similar style of rubric mentality. As an employee, they are assessed against targets, performing well only on the ones that promise promotion and a pay rise. Eventually they retire and die.

Self concludes that this ordinary, dull, uninspiring life started back in the classroom all those years ago, when the teacher failed to educate and inspire, and simply taught to the test.

Over-egging the pudding

There is a logic to this story, and it sounds all the more inevitable as Self narrates it in his black and grey voice. But that’s all it is: a story. It avoids detail and colour, paying little regard to the individual’s ability to reflect at some point in their life and ask searching and probing questions. It is as if, because the teacher highlighted the importance of one piece of knowledge, the student’s capacity to one day think for themselves was somehow stifled. Self is, as they say, over-egging the pudding, taking an interesting question about the impact teaching to the test might have and serving up an omelette.

Teaching to the test is not bad

Brunel University asked what makes a lecture unmissable. In addition to many arguably more commendable answers, including the passion of the tutor and a desire to learn, a high probability of the subject being in the exam was key. Suggesting that a specific topic might be on the exam paper firstly ensured good attendance and secondly guaranteed that students listened intently.

Attention is important, but even for the diligent student focus is vital. Learning everything is simply not possible; faced with 20 chapters, the student needs some clue as to where to direct their energy and time. Of course, the educationist will say that everything is important, but saying so will not make it so. Knowing that something is examinable at least gives a starting point and helps guide the student through the material quickly and efficiently. It’s also worth adding that it does not exclude the need to be inquisitive; in fact, by making the student read a particular topic it may inspire them to find out more.

Exams and exam answers also provide examples of what is expected and the standard the student must reach if they are to be successful; no amount of narrative in the student handbook or curriculum guidance will do this as effectively.

The type of assessment matters

Of course, in Self’s world, teaching to the test removes the need to do anything more than learn what will be in the exam. He suggests that students need to think outside the box rather than simply tick them. I have to admit I like that sentence.

But he does have a point: if the test is so narrow that it only assesses memory or a very small part of the syllabus, then that is all the student will focus on. But that is just a bad test. This is, of course, where I am in danger of becoming idealistic and painting a picture that is not a true reflection of what is happening. Not all tests are good, and undoubtedly some students will pass with limited thought and little more than good memory skills. Yet with changes in technology it becomes ever more possible to build tests and simulations that assess a student’s ability to perform in real-world situations, and for that matter to think outside the box.

Teaching to the test has become a term used to describe bad teaching and poor assessment, and no one would argue that either of these is desirable. But it is not the process that’s problematic; it’s the application. Testing in its many forms is part of learning, but it needs to be done well and thoughtfully.

In conclusion

Having now read the blog, I would encourage you to listen to Will Self – click. It is of course not for me to say who presents the right point of view; you need to make up your own mind. For those, however, who were taught to the test, no matter how long ago, you probably won’t understand even what I am asking, because to the best of my knowledge this question has never been tested before…?

 

Are exams fit for purpose (part two) – what are the alternatives?


Last month’s blog came to the conclusion that examinations* are fit for purpose, or at least “a purpose.”

They provide the student with a clear objective towards which they can direct their efforts and focus their attention, and they are a transferable measure of competency that can be assessed at scale. The “at scale” point is important, as there are many ways of assessing competence but few that can cope with the need to test thousands of students at the same time.

The main problem with examinations is that they don’t always examine what is most valued; the method of assessment often has significant limitations as to what it actually tests, and the results are presented in league tables that give a far too simplistic view of success.

I am not sure we can resolve all of these, but it might be worth exploring other options, specifically alternative methods of assessment. For example, if you change the method of assessment from a formal, often timed written exam to, say, a portfolio of work, not only do you change the method of assessment but you also change what is being examined: two birds with one stone, perhaps.

Different methods of assessing competence

Open book exams

Open book assessment offers a way of testing application rather than memory. Students have access to a textbook that contains information relevant to what they are being asked. It’s the use of knowledge that is important, not the knowledge itself. The idea of open book could easily be adapted: why not allow students access to the internet during the exam, so they could look up anything they wanted? Is this not more representative of what happens in the real world?

Take-out exams

Similar to the above, the so-called “take-out exam” allows the student to take the exam away and work on it at home using whatever resources they prefer: books, the internet, etc. They return the next day with a completed answer. This can work better than you might at first think, so long as you have a robust mechanism to detect plagiarism. There are several very good software packages that can spot even the most sophisticated types of copying.
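
To illustrate the principle only (this is a rough sketch, not how any particular commercial package works, and the function names here are made up for the example), one of the simplest checks is to measure how much two submissions overlap in their word sequences:

```python
# Minimal sketch of one idea behind plagiarism detection: compare the
# overlap of word n-grams between two submissions. Real packages use far
# more sophisticated techniques (paraphrase detection, source databases).

def word_ngrams(text, n=5):
    """Return the set of n-word sequences in a piece of text."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission_a, submission_b, n=5):
    """Jaccard similarity of n-grams: 0.0 = no overlap, 1.0 = identical."""
    a, b = word_ngrams(submission_a, n), word_ngrams(submission_b, n)
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

# A score close to 1.0 between two students' answers would flag the pair
# for a human marker to review.
```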

Case studies/simulations

A case study provides an environment for the student to demonstrate that they can use their knowledge to solve problems and/or offer advice in a virtual world. Most case studies tend to be written, but this is one area where we could see some clever and affordable use of technology to better simulate the real world.

Performance tests

In a performance test, students are required to demonstrate a skill or process, create a product, etc., while being observed by an assessor who evaluates the performance. It is a great example of testing the ability to apply knowledge, but it suffers from the subjectivity of the assessor and has limited application at scale.

Portfolios

Portfolios are most often collections of the student’s work that demonstrate their ability to perform a specific task. These can be simulations of the real world or collections of work actually undertaken on the job. A portfolio can include written documents, emails, audio or video recordings – in fact, anything that provides the evidence required by the assessor. Portfolios are perfect for assessing application, but the process of assessment is expensive and not without bias.

Viva voce (“living voice”) – oral exam

Often used to test PhD students, an oral exam gives the assessor the chance to question the student directly. This is a very effective method where you are looking for higher-level skills and depth of understanding. As identified last month, it’s probably one of the oldest forms of assessment.

Digital badges – capturing the learning path

Being awarded a badge as recognition of achievement is something many will be familiar with, especially if you were a Boy Scout or Girl Guide. But digital badging is new and becoming increasingly popular because of the internet. A good example would be LinkedIn and the badges awarded to you by others as recognition of certain skills. Many of the assessment methods above provide a first-past-the-post type of assessment: you pass and that’s it. Digital badging, on the other hand, is a form of lifelong assessment that evolves along with your career.

Digital badging, for me, is one of the most exciting forms of assessment, and I am not alone: NASA has been using digital badging since 2011. Read more about digital badging.
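
To make the idea a little more concrete, here is a hypothetical, much-simplified sketch of what a digital badge record might contain. The field names and values are illustrative only; they are not taken from the actual Open Badges specification, which defines its own, richer schema.

```python
# A hypothetical, simplified digital badge record (illustrative fields only;
# real standards such as Open Badges define a richer, verifiable schema).
badge = {
    "name": "Cell Biology Fundamentals",
    "issuer": "Example University",                             # who awarded it
    "recipient": "student@example.org",                         # who earned it
    "criteria": "Passed the open-book assessment at 70% or above",
    "evidence": "https://example.org/portfolio/cell-biology",   # supporting work
    "issued_on": "2014-09-01",
}

# Because a badge is just a small, shareable record of achievement, new
# badges can keep being added throughout a career, rather than assessment
# ending with a single pass/fail exam result.
```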

Assessment in the future

The list above is far from comprehensive, and many other equally valid types of assessment exist, e.g. role plays, slide presentations, assignments, etc. But what might assessment look like 15 years from now? Well, how about using MRI scans to identify which parts of the brain are being used? I am not sure it will catch on, but it would provide some interesting evidence as to how the student is getting to the answer: simple memory or a genuine and deep understanding.

*Examinations are defined here as a written test administered to assess someone’s level of understanding, knowledge or skills.

Are exams fit for purpose? (part one)

I have written in the past about what passing an exam proves, but I have never questioned whether exams achieve what they were originally designed to do: are they fit for purpose?

Firstly, let me define what I mean by an exam: a written test administered to assess someone’s level of understanding, knowledge or skill that results in a qualification if successful. This is in contrast to a test, which is a method of assessing someone’s level of understanding, knowledge or skill, often as part of a course, in order to provide feedback. A test does not have to be written. Although exams don’t have to be written either, many are, and initially at least I would like to keep the definition as narrow as possible.

In order to answer the question “are exams fit for purpose?”, we must first take a step back and look at how we got to where we are now.

 

A brief history of examinations

The first standardised test is believed to have been introduced by the Chinese in 606 AD to help select candidates for specific governmental positions. However, most examinations around this time would have been oral, requiring the candidate to recite a dissertation or answer questions. Although there is evidence of written exams being used as early as 1560*, it was not until the 1820s that many universities began to adopt the practice. From 1850 onwards the written exam became the norm in most UK universities. In 1854, under the Gladstone government, selection of civil servants was based on their ability to pass an exam; this time, however, it was written.

Bureaucracy – in 1917, to help bring some order to what had been described as a chaotic system, the School Certificate and the Higher School Certificate were introduced. Then in 1951 came the General Certificate of Education (GCE) examinations, more commonly known as Ordinary (‘O’) level and Advanced (‘A’) level; these were normally taken at 16 and 18.

In the 1960s the CSE (Certificate of Secondary Education) was born, opening up qualifications for all, not just those who went to grammar school. However, this two-tier system was thought divisive, and so in 1988, under reforms initiated by the then Education Secretary Sir Keith Joseph, both sets of examinations were replaced by the GCSE. The GCSE was graded and gave credit for coursework. In 1991 the General National Vocational Qualifications (GNVQs) were established, intended to incorporate both academic and vocational elements; by 1995 these were accepted as ‘equivalent’ to the GCSE.

In 2014 we find change again: gone is the coursework, and written examinations once again become the main method of assessment, although there will be grading from 1 to 9, with 9 being the highest mark. The exams will still be called GCSEs, although officially they are known as GCSE (England), to avoid confusion with Wales and Northern Ireland, which are not changing.

Yes they are

Historically at least, it would appear the purpose of the exam was to provide a recognised and transferable measure of competency in a given subject or discipline. The lack of transparency and consistency of the oral exam resulted in it being replaced by written ones, and a more formal bureaucratic structure was developed to administer the process.

And in many ways there is very little wrong with this.

The problem is not with the exam itself, but with what is being examined. If, as a society, we value “thinking and creativity”, for example, then should we not be examining these rather than subjects that require the candidate to do little more than rote-learn facts? Perhaps we should explore different methods of assessment; the written exam has its uses, but handwritten papers are looking increasingly outdated in a world that communicates electronically, not only in short texts and tweets but with video and photos. In addition, the way exam results are used in league tables to show winners and losers is divisive. It looks like a measure but has in fact become a target that schools and teachers must hit or be considered failures.

Please watch this; it’s very funny… and thought-provoking.

Not on the test


In the second blog about exams I want to look more closely at some of these points, in particular at what other ways we can assess what people know.

*Assessment around this time was through debate between a number of learned people all at the same time and lasting for two hours or more.

Teaching to the test – Interesting research but the fat lady is still in good voice

This week researchers from the University of East Anglia released some very interesting findings from testing 594 bioscience students in their first week of term at five universities.

The students selected would be considered by many to be more than competent in their subject; almost all had a grade A*, A or B in biology at A-level. Yet when they were given 50 minutes to answer 38 multiple-choice questions on cells, genetics, biochemistry and physiology from their A-level core syllabus, they got only 40% correct. The period between the students sitting their A-levels and the test was three months.

Lead researcher for the study, Dr Harriet Jones, said: “What our research shows is that students are arriving at university with fantastic A-level grades, but having forgotten much of what they actually learned for their exams.” She went on to say that the trend to teach to the test, to ensure good results for schools’ reputations, was the problem.

The schools are to blame then

The facts of the research are clear: students who had successfully passed a test were unable to pass a similar test three months later. The conclusion reached is that the students did not understand (see my blog on understanding) their subject well enough and passed their A-levels using probably little more than memory. And who is to blame? The schools, of course, for teaching to the test. Why schools do this is worthy of further debate, but government pressure and the impact of league tables will certainly be in the mix.

But do employers not accuse universities of delivering up similarly ill-prepared students? The test is different, but from the employer’s perspective the result is the same: a university student who professes to know something but, when tested “in the real world”, doesn’t.

Does this mean that universities are also teaching to the test?

It’s about the test etc

The problem is not in teaching to the test; the problem is with the test, the pass mark and possibly the marking. If the test were more aligned to what the student needs to know and do at a fundamental level, the pass mark were sufficiently high, and the marker had some degree of autonomy to form judgements, then the results would probably be different. It could of course be that the exams are easier – Exam chief: ‘you don’t have to teach a lot’ for our tests.

The big criticism of teaching to the test is that it results in a narrowness of understanding, little in the way of depth, and does not push students to think in abstract and creative ways. But if the test, which incidentally does not have to be in the exam hall or on paper or a PC, were able to “test” for these qualities, then teaching towards it would perhaps be more acceptable.

Bottom line

Teaching to the test is unlikely to change; in fact, given the popularity of league tables in education just now, it may well increase. But with more effective testing the results might be better students, happy universities and even happier employers.