This was a slightly odd bit of research which came about from a chat with @CParkinson535. We were chatting about assessment on Twitter and he asked if I ever checked what students would predict their grade to be before a test, then compare the results against reality. I’d considered it, after reading Hattie’s work, but never tried it. Eventually curiosity won.
1) Students predicted their grades the day before our test.
2) We did a test (using a past exam paper).
3) Marked the test.
4) Compared results (this is the interesting bit).
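The comparison step can be sketched in a few lines of code. This is a minimal illustration, not the actual analysis; the grade lists are hypothetical, and it simply counts how many students were accurate, above, or below their predictions:

```python
# Sketch of step 4: compare predicted grades against actual results.
# The grade data below is hypothetical, not the real class data.

def compare_predictions(predicted, actual):
    """Count students who were accurate, above or below their prediction."""
    accurate = sum(1 for p, a in zip(predicted, actual) if a == p)
    above = sum(1 for p, a in zip(predicted, actual) if a > p)
    below = sum(1 for p, a in zip(predicted, actual) if a < p)
    return accurate, above, below

# Hypothetical grades (higher number = better grade)
predicted = [5, 6, 4, 7, 5, 6]
actual = [5, 7, 5, 7, 6, 6]

accurate, above, below = compare_predictions(predicted, actual)
print(f"Accurate: {accurate}, above prediction: {above}, below: {below}")
# → Accurate: 3, above prediction: 3, below: 0
```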
I was open about what I was doing with students the whole time (using them for a bizarre experiment fuelled by Twitter, pedagogical geekery and curiosity) and shared the results with them.
The results surprised us all: 50% were accurate, and all but one of the remaining students were above their predictions. The strange bit was that I would have predicted the same as the students for all but two, which means their expectations were roughly in line with mine.
I shared these results with @CParkinson535; he has done this with over 600 students and generally finds roughly 80% accuracy. This poses two questions: why the aberration (the results, I'm not calling anyone names)? And if we can guess with 80% accuracy, why are we testing them?
The aberration was resolved by chatting to the group; many of them had revised harder in the knowledge they were part of my crazy experiment, so I'm going to retry it on this class just before the final exam. All's fair in love and war ;)
Weighing the pig? – It’s a great phrase with a certain rhetoric attached, but I do want to know how students are doing. I see value in measuring them, and if it can be done in a way which focuses them on relevant content then so be it. There is the too-much-testing-and-not-enough-teaching argument, but finding the right balance is often a personal, professional judgement call.
Comparing Progress – It’s a good way of finding out which students are progressing at which rates. This allows me to target my efforts, direct peer support, and provide additional support or challenge in the right places.
Gaps in Knowledge – Find out the bits they don’t know, then make sure they learn those bits. Use the results to identify the gaps to fill.
Identify Support – As mentioned above under Comparing Progress, it’s essential to know where your efforts for intervention and support are needed. Assessment helps to identify where the help is needed and to measure the impact of it later.
Concept Cracking (Patterns) – If only one or two didn’t understand a key concept, they need my support. If most of them didn’t get a concept, I must have explained it shockingly, and that’s useful information. No matter how experienced we are, sometimes we pitch a concept at the wrong level or use the wrong example to make it relevant; it’s important to strip back the ego sometimes and look at which bits of our game we need to improve.
Technique Practice – If we never get students to practise exam-style questions under pressure, when it comes to the real exam they’re going to get a shock. Exam-style questions may not be glamorous, but they’re important for developing exam strategies and techniques. Education isn’t just about examinations, but I’d feel somewhat remiss if I didn’t prepare my students for them.
Feedback and Action – Once we’ve marked it, we need to give them things to do to make it better; some of my thoughts on making feedback useful can be found here.
Assessment is all well and good, but make sure you use that information for something, whether that’s informing your own practice, providing timely interventions or giving students feedback to act upon. Otherwise we may as well not mark it and just ask them what they think they’d achieve.
Barry Dunn – @SeahamRE