The tale of the tail that wags the dog

    Published: 25 January 2018

    I know, we shouldn’t be looking at the tail that wags the dog but… (deep breath) I actually really like the way the questions are constructed in the SATs papers at the end of both key stages. Tests are supposed to be testing, right?

    I have been working intensively with Year 6 cohorts, as well as delivering training sessions specifically around supporting Year 6 teachers. In part of the course we look at last year’s reasoning papers themselves, alongside the national analysis of the content and the year group in which pupils were likely to have learned it according to the Curriculum ’14 Programme of Study. On the face of it, across both reasoning papers, 2.5 of the 70 available marks were attributed to Year 3 content, 16.5 to Year 4 and 21.5 to Year 5. More than half of the content marks come from previous year groups. But I believe this is superficial data, and I think it is misleading. Test developers do consider content domains, but they also consider cognitive domains, where intellectual processes are also tested.

    The cognitive domain seeks to make the thinking skills and intellectual processes required for the key stage 2 mathematics test explicit. Each question will be rated against the four strands of the cognitive domain listed in sections 5.1 to 5.4 below to provide an indication of the cognitive demand.

    (STA, 2016)

    Because of this I dug out my slightly crumpled copy of the Test Developers’ Framework (available here), not because schools should game the system but because it’s pretty interesting and it tells us how robust our provision is. This led me to re-read Norman Webb’s Depth of Knowledge work (1997). Webb developed a process and criteria for systematically analysing the alignment between standards and standardised assessments. To be honest, his work has been much misused and, as usual, crammed into a graphic meant to support teachers, à la Bloom and the Newman Error Analysis (aka RUCSAC), which I am not advocating using. However, Webb’s analysis and the DfE framework resonate in places, notably when he states the following:

    The depth of knowledge required by an expectation or in an assessment is related to the number of connections of concepts and ideas a student needs to make in order to produce a response, the level of reasoning, and the use of other self-monitoring processes. In addition, other factors influence the cognitive demands of performance including the social or contextual requirements, the variety of representations students are expected to use (written, verbal, pictorial, and variations within each), and requirements for transfer and generalization to new situations (Webb 1997)

    The Test Developers’ Framework tests the following three strands of the cognitive domain, along with spatial and geometric thinking, which I have not included here:

    Depth of understanding

    This strand is used to assess the demand associated with recalling facts and using procedures to solve problems.

    Tail_1.png

    Computational complexity

    This strand is used to assess the computational demand of problems.

    Tail_2.png

    Response strategy

    This strand describes the demand associated with constructing a response to a question.

    Tail_3.png

    (STA, 2016)

    Let’s consider the cognitive demands of a couple of questions.

    Tail_4.png

    Firstly, a famous one in our office, but nationally only 52.6% of pupils answered this correctly. Yes, some pupils were wondering what a koala was, and that could have been a distraction, but the numbers involved are small and the question is rooted in Year 4 (time) and Year 5 (percentages and common factors) content. There are some built-in complexities, however. Recall of 24 hours in a day is key, but notice the shift in language from ‘12 hours each day’ to ‘50% of its life’. Nothing that would worry an adult, perhaps, but quite a leap to see that both of these sentences provide the same information if you are eleven and a teensy bit stressed – in fact, it hadn’t occurred to me until a teacher pointed out the potential difficulty either. Subtle shifts in language are evident throughout the tests, as I aim to demonstrate in a further blog currently dominating my thoughts. Pupils are also being asked to make connections about the relationship between 12 and 18, with a non-explicit 6 and 24 being key contributors to the solution. In terms of depth of understanding, I would say this sits between a rating of 3 and 4: the component parts of the problem are simple, but the links between the parts and processes are not clearly identifiable, and it’s not particularly routine either, so transfer is necessary. The computational complexity is not particularly high in my view, but I think the response strategy is more testing, as some reasoned construction may be required to organise more complex working. My point here is that although this question contains content many pupils will have learned before Year 6, the question construction takes it up a gear.
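    The arithmetic the paragraph leans on can be sketched quickly. This assumes only what is stated above – a 24-hour day, 12 hours of sleep re-expressed as 50%, and 6 as the unstated common factor linking 12 and 18; the question’s exact wording isn’t reproduced here:

    ```python
    from fractions import Fraction
    import math

    HOURS_PER_DAY = 24  # the recall fact pupils must supply themselves

    # "12 hours each day" and "50% of its life" are the same quantity
    # expressed two different ways:
    sleep = Fraction(12, HOURS_PER_DAY)
    print(sleep)             # 1/2, i.e. 50% of the day

    # The non-explicit link between 12 and 18 runs through their common
    # factor 6 (itself a factor of 24):
    print(math.gcd(12, 18))  # 6
    ```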

    Tail_5.png

    (STA, 2017)

    Another deceptively simple question is this one. Again, it uses fractions and decimals pupils will have met before Year 6, yet only 44% of pupils succeeded in gaining a mark for it nationally. Pupils are not asked whether Adam is correct; they are told he is correct. The bubble shows us that it is the response strategy that has been ramped up this time: this question requires the answer to be constructed, organised and reasoned.

    Tail_6.png

    (STA, 2017)

    Several pupils, whose tests I have seen and who answered this incorrectly, found precision of response difficult here – I don’t doubt they knew that 2/5 was greater, but that wasn’t exactly what was tested. And whilst the uplift in the rehearsal of written reasoning genuinely lifts my heart when I see it in books, it is often this precision that is missing. A complex response does not have to be long-winded, and it must not miss the point.
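    The precision point can be made concrete with exact fractions. The second value below is hypothetical – the post doesn’t reproduce the question, only that 2/5 was the greater number – but the shape of a precise justification is the same either way: rewrite both values in a common form before comparing, rather than simply asserting which is bigger.

    ```python
    from fractions import Fraction

    a = Fraction(2, 5)
    b = Fraction(1, 4)  # hypothetical stand-in for the other value

    # Imprecise response: "2/5 is bigger."
    # Precise response: rewrite both over a common denominator, then compare.
    print(a == Fraction(40, 100))  # True: 2/5 = 40/100
    print(b == Fraction(25, 100))  # True: 1/4 = 25/100
    print(a > b)                   # True, because 40/100 > 25/100
    ```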

    Now, I think all of this has implications for our school maths curriculum. If our provision for pupils is based only on recall of facts and procedures, with the expectation that pupils’ responses are answers placed after the equals sign, and where the strategy for solving is always evident, then we can’t expect pupils to cope with the cognitive demands of an assessment, nor are they truly getting the mathematical diet necessary for life.

    Considering Webb’s Depth of Knowledge work made me think about several ways I can construct the same question, beginning with basic recall/procedure/response and developing in complexity in terms of computation, response and depth of understanding.

    Tail_8.png

    So yes, a lot of the content is taken from previous year groups, and if that isn’t in place then Year 6 is a difficult place to be. But equally, if the cognitive demands, which are not statutory in the curriculum, haven’t been met, then we are neglecting the rigour necessary for success.

    Hopefully I’m forgiven now for mentioning testing!
