Published 12 January 2022

This is a question that comes up a lot – understandably: after all, if we are in the business of education then we are all about children making progress. But it is apparent that there still remains a variety of interpretations of that word ‘progress’. For many, many years, school leaders have been used to a systemic view of progress as something that could be measured and expressed using numbers, and of those numbers as evidence of school effectiveness at every stage in a pupil’s journey. The DfE and Ofsted changed their minds about this notion over two years ago now, but some school leaders still feel wary about letting go of their detailed data spreadsheets. Again, this is entirely understandable. It is a huge culture shift to move from the old world of constantly trying to convert learning into a numerical measure, to this new world where we talk about progress in a different way. But it’s a step worth taking, as I will explore in this blog.

Firstly, let’s try to define progress. As far back as 2018 (before the publication of the current Education Inspection Framework) Sean Harford (in this blog post) defined progress as “pupils knowing more and remembering more”. This is still the working definition used by Ofsted, and is supplemented by the view that the school’s curriculum is the progression model – i.e. the progress of learners is evident through exploring the journey that they are taking through a well-sequenced curriculum.

So we have shifted from a position of trying to measure progress (quantitatively) to one of trying to demonstrate progress (qualitatively). Or rather, to a position where school leaders ultimately need to be able to reassure themselves that learners in their care are making progress through the curriculum, mainly for their own internal accountability, but also (occasionally) for external audiences.

There are just three points in time where we still attempt to express progress using numbers: once in primary school (KS2 progress scores) and twice in secondary school (Progress 8 at Key Stage 4, and value added measures at Key Stage 5). Expressing progress numerically can only be justified at these points because the measures are based on very large (national) datasets, in which statistically valid correlations can be analysed; this means we can indicate, with a statistically significant degree of confidence, in which schools the progress made by learners was above average and in which it was below. At no other point do we need to try to express progress using numbers, and attempts to do so are not worthwhile.
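
To illustrate the underlying idea (and only the idea – the figures below are invented, and this is not the DfE’s actual methodology), a value-added measure essentially compares each pupil’s outcome with the average outcome, nationally, of pupils who had similar starting points, and then averages those differences across the school. The rough sketch below shows the shape of that calculation, and why the resulting figure only becomes trustworthy when the numbers involved are large:

```python
# A minimal, purely illustrative sketch of the idea behind a value-added
# measure such as Progress 8. The 'estimate_for_prior' lookup and the cohort
# figures are invented for illustration; the real calculation uses national
# data and a more detailed methodology.
import statistics

# Hypothetical national average outcome for pupils in each prior-attainment band.
estimate_for_prior = {"low": 35.0, "middle": 48.0, "high": 62.0}

# Hypothetical cohort: (prior-attainment band, actual outcome score).
cohort = [
    ("low", 38.0), ("middle", 45.0), ("middle", 52.0),
    ("high", 60.0), ("high", 66.0),
]

# Each pupil's 'progress' is the gap between their actual outcome and the
# average outcome of pupils nationally with a similar starting point.
pupil_scores = [actual - estimate_for_prior[prior] for prior, actual in cohort]

school_score = statistics.mean(pupil_scores)

# With only a handful of pupils the standard error is large, so the
# school-level figure carries a wide confidence interval - which is why
# this kind of number is only defensible for large (national) datasets.
std_error = statistics.stdev(pupil_scores) / len(pupil_scores) ** 0.5
print(f"School value-added: {school_score:+.2f} (±{1.96 * std_error:.2f})")
```

Run with only a handful of pupils, the margin of uncertainty dwarfs the score itself – which is precisely why this kind of calculation belongs at the level of national datasets, not internal tracking.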

The question that we really want to be able to answer is ‘how well are children learning (the intended curriculum content) in this school?’ We have to accept that it is never possible to know the answer to this question with complete accuracy. We can make good attempts at answering it, but there will always be margins of error, because we can never know with absolute certainty what is going on inside the minds of learners. We use assessment to try to find out what children have learnt – the knowledge they have gained, the skills they have developed and their ability to apply that knowledge and those skills in various contexts. Some assessment techniques will be more reliable (and valid) than others when it comes to ascertaining learners’ knowledge, but however effective the assessment is, it will always be a proxy for the actual learning. If we then try to boil all our assessment knowledge down into a grade or a number, we lose even more accuracy and useful detail. So we might end up with a nice precise numerical progress measure, to several decimal places, giving an illusion of accuracy, but in reality it is not as meaningful as it appears when it comes to answering our question: ‘how well are children learning?’

It’s like taking a sentence written in English, translating it into German using a basic English-German dictionary, then putting that into an online engine to translate it into Swedish, then finally translating back from Swedish to English. It would not be surprising if the final result had deviated significantly from the original input. Some meaning might have been lost, or perhaps altered, so that it now implies something different.

Of course, I accept that, for my part, I have contributed to the ‘data machine’ over the years, helping to develop tools that schools could use for internal tracking and spreadsheets to analyse. At the time, schools seemed to need such systems and I was happy to help meet that demand. But now that the external need for data has gone, the opportunity really is there to re-evaluate what data we actually need – what is useful and serves a genuine purpose – and to strip away the things we don’t need or which are not helpful. In terms of the ongoing, day-to-day assessment which lies at the heart of good pedagogy, I discuss a range of important ideas in this blog: The place of assessment in the new Ofsted framework. As for data tracking systems, we now firmly advocate a ‘keep it simple’ approach (hence our Easy Tracking approach, launched in 2020), acknowledging that there is a huge wealth of assessment information that teachers hold in their heads, derived from all their classroom interactions with learners, but that it is neither necessary nor desirable for electronic data systems to attempt to hold all of it. The most important purpose of this professional knowledge is that it should shape what happens next: teachers use the results of their assessments to adapt the current lesson, a future lesson or the curriculum plan itself, or to provide targeted support to particular learners. The ‘evidence’ that this is happening will be in the pupils’ learning outcomes.

As for tracking systems, the thinking behind Easy Tracking is that it offers a simple, light-touch way of capturing internal summative checkpoint assessments: just enough to provide school leaders with the overview that they need, but without implying a level of accuracy that would not be justified. It should always be remembered that all this data does – all any data does – is provide a starting point for discussion. The numbers might indicate, for example, that attainment in maths in Year 4 is not quite where we would like it to be. But what does this mean? Can we deduce from this that the teaching is not as good as it should be? Of course not. We need to follow up and explore further before any firm conclusions can be drawn. We need to find out what assessment methods the Year 4 teachers have used, and how they have reached their judgements. Are their approaches consistent with other teachers across the school, or do they vary significantly? We could explore, for example (across all the classes in the school):

  • what sorts of questions or activities have the teachers used to inform their assessment?
  • how valid and reliable are these as assessment approaches, i.e. do they actually assess the domain of skills and knowledge that they were intended to assess (validity) and do they do so in such a way as to reliably determine the extent to which learners have secured that knowledge (reliability)?
  • what was the timing of these assessment activities, i.e. how close to, or distant from, the point at which key concepts were directly taught? Does this show whether the relevant knowledge is in the child’s short-term or long-term memory?
  • are the teachers looking for depth and application of knowledge, or a more surface-level regurgitation of facts?
  • are the assessment activities closely matched to what the children should have learnt? (This would not be the case if, for example, a commercially produced test were used which included questions on topics not yet taught.)
  • to what extent are gaps in pupils’ learning playing a part, where learners are at the ‘lower end’ of attainment? Are these gaps recent or from previous terms/years? How effectively are pupils supported to catch up?

The above list of questions is not exhaustive – it serves merely to illustrate that a great many variable factors underpin any summative assessment. It would therefore be wrong to conclude, from data alone, in which classes (for example) the most effective teaching is taking place.

There is an important message for school governors here too. Because governors are often not educational or curriculum experts, there can be a tendency to over-rely on numerical data as a means of monitoring the effectiveness of the school. Again, data can play its part, but it is best seen as a conversation-opener, not as the be-all and end-all. “I noticed the data seems to show that…” is fine, as long as it is seen as part of a line of enquiry that includes gathering further information, e.g. talking to curriculum leaders, talking to children about their learning, and asking children to show their work and discuss what they learnt from an activity.

Children with Special Educational Needs

A genuinely frequently asked question is how best to evidence progress for children with SEND, especially within the context of our Easy Tracking system, where a child might be categorised as working at ‘Pre-Curriculum Expectations’ year on year. It is not possible to infer from that data whether the progress the child is making in their learning is acceptable, good, excellent – or not good enough. But the truth is, it was never really possible to make those judgements from data, even when we were using P-scales or other numerical tools. The reality has always been that, particularly amongst this group of learners (although you could argue this is true for everybody), each individual is unique. Their needs and their barriers to learning are so particular to them that no ‘one-size-fits-all’ approach to defining ‘good progress’ numerically has ever been appropriate.

Just as is the case for all children, to be able to comment on such a child’s progress we first need to have established what the curriculum expectations are, based on where they are now and where we are trying to get them to in the future. This might mean discussing appropriately ambitious curricular targets relevant to the child, perhaps thinking about the small steps and goals that we might want them to achieve across the next six weeks, say, and then reviewing the learning to see whether those goals have been achieved. That is what progress is all about. What could be considered slow progress for one child might be considered phenomenal for another, taking into account that child’s particular context. Attempts to produce a system that would quantify such progress seem both unworkable and unnecessary. Once we accept that fact and embrace the reality that progress can only be described qualitatively, in curricular terms, it frees us up to focus on what really matters: using really good formative assessment to establish exactly where each child is in their learning now, and then planning really good teaching that is matched to the children’s needs, to get them to where they need to be.

Progress versus attainment

Putting all this talk of progress to one side, what we really need to be striving for is (at least) the expected standard of attainment for all learners. One of the main arguments against the previous system of levels, sub-levels and point scores was that it allowed a culture to develop in which we focused simply on all pupils making a certain amount of progress (whether defined as ‘3 points’ or something else) and thought that was OK. The result, of course, was that attainment gaps remained (and still remain) wide. A pupil working behind the expected level of attainment at the end of Year 3, say, could make the ‘expected amount of progress’ (whatever that is) across Year 4, but would still end the year behind where we ideally want them to be – forever chasing their tail, never quite securely achieving what is expected of them. Our aim ought to be that the curriculum in each year of learning equips children with the knowledge and skills they need to successfully access the curriculum for the next year. We must therefore shift the focus of ‘Pupil Progress Meetings’ to become ‘Pupil Progress & Attainment Conversations’ and ask ourselves important questions, such as:

  • in which aspects of the curriculum are each of these children secure?
  • in which aspects are they less secure?
  • what specific measures could be put in place to try to enable these children to achieve the Expected Standard? (e.g. more practice in…, focused teaching on…, addressing gaps in…)

Ultimately, for the future life chances of our learners, attainment is the key.