4 things I've learnt about assessing Computing and ICT

I've been thinking about, doing, and running courses in the art and science of assessing what kids know, understand and can do when it comes to Computing and ICT for a long time. Here are 4 things I've learnt. (This is part of a longer article called "12 things I've learnt about assessing Computing and ICT", which will be featured in the next issue of the Digital Education newsletter.)

Update: The longer version of this article has now been published in Digital Education. It’s called 12 things I’ve learnt about assessment. To read it, subscribe to the newsletter, and then click on the link to the July 2016 issue given in your introductory email.

The more I learn, the more I realise I don't know

I think this must be true of any sphere of knowledge. The more you know, the more nuances you see, the more you become aware of the alternatives.

I experienced the same thing when I studied Economics at school. When I was 18 I thought of myself as a budding economist. By the time I'd finished my degree in Economics I thought of myself as someone who didn't know all that much in the total scheme of things.

Getting back to assessing Computing and ICT, when I was at the Qualifications and Curriculum Authority we would spend whole days looking at examples of assessment questions and examples of pupils' work, arguing back and forth about whether this particular question was "valid", or whether that particular sample of work really proved that the student knew her stuff.

My "take-away" from this is that if someone tells you they know the definitive way of assessing pupils' understanding of Computing, they don't have a tremendously good grasp of the subject. That's why on my courses I aim to get teachers thinking about assessment in different ways, rather than profess to know, or have the arrogance to assume that I know, the best way for them to go about it in their school with their kids.

Don't believe simplistic solutions

I like simple solutions, but not simplistic ones. A simple (partial) solution might be to say you're going to give pupils a short baseline test at the start of each new topic. That's an eminently sensible thing to do, and with the right tool and the right approach, it shouldn't take long.

A simplistic solution is usually introduced with the phrase "All you have to do is...". In my opinion, any sentence starting like that presages a simplistic "solution" that is not a solution at all.

Consolidation is not progress

I'm certainly in favour of checking that students have really understood a concept, such as conditionality. But I've sometimes seen instructions stating that if a student answers the same sort of problem correctly three times, they have achieved more than if they get it right only once or twice.

No they haven't.

All that proves is that they have either consolidated their understanding, knowledge or skills, or learnt how to answer that particular sort of question. To evaluate progress, you have to set different types of problem and come at the concept from several angles.

As I used to say when we had Levels, achieving a Level 3 ten times doesn't make you a Level 30, it makes you a Level 3.

Groupthink can lead you astray

It's easy to assume that when everyone is saying or doing the same thing, then they must be right -- the "wisdom of crowds" argument. But as Crispin Weston points out in his article, It's the technology, stupid!, the book of that name is somewhat disparaging about the so-called "wisdom" of crowds.

Let's put it this way. Since many of the approaches to assessing Computing, while useful, have effectively reinvented Levels, you have nothing to lose and everything to gain by thinking through all the issues for yourself. That is far better than assuming everyone else is right, or buying an assessment product without knowing how it works (which I referred to as outsourcing assessment to an algorithm).



Assessment makes a fairly regular appearance in the newsletter. Recent articles have included:

  • How to convert your assessment system to levels or grades
  • Assessment as a process of scientific discovery
  • Perverse incentives in assessment
  • A question of assessment

To subscribe to this esteemed publication, just click the button below and follow the instructions. Thanks!