Taking the faff out of TAF

    Published: 07 November 2018

    We are now fully into the swing of a shiny new school year, so it seems a good time to reflect on this summer’s moderations and see whether there are lessons to take forward with our Year 2 classes this year.

    This June was one of the busiest moderation windows I’ve experienced since joining the Assessment Team. Our large team of moderators visited 122 schools and, through the ever-valuable professional dialogue, saw a huge amount of good practice and secure, accurate application of the Teacher Assessment Framework (TAF).

    As you can imagine, all these visits generated a lot of feedback from moderators, teachers and heads. Now that the dust has settled and we are well underway with our current cohorts, we’ve collated some key points that could be useful to consider as we think about how our teaching can generate evidence for all aspects of the TAF.

    Writing

    The key areas that came up concerned the quantity and range of writing evidence, the independence of pieces, and some misconceptions regarding the use of success criteria and marking/feedback approaches.

    The main point about the quantity and range of writing is simply that we need to make sure there are enough pieces to demonstrate the pupil’s security with the ‘pupil can’ statements. One piece can evidence a number of the statements, but broader evidence is needed to assess confidently against the TAF. There is no specific number of pieces of evidence that must be consulted in order to make a secure assessment judgement, but it really is helpful to have a selection of recent (spring/summer) pieces, showing writing for different purposes, that give the pupil the opportunity to demonstrate the full range of their skills – and these can be cross-curricular. This is especially true, as you’d expect, for consideration of greater depth, but it also applies to the expected standard.

    There is no statutory requirement for schools to consult the STA exemplification materials. However, the collections do show the variety and range of writing that can demonstrate a pupil’s security with the ‘pupil can’ statements, and they therefore provide a really useful benchmark.

    Also related to the quantity of writing is the extent of the editing process. For greater depth, the TAF requires evidence of pupils editing their work for technical and compositional elements (‘The pupil can, after discussion with the teacher…make simple additions, revisions and proof-reading corrections to their own writing’), and in any case we would surely want all pupils, irrespective of final ‘TAF standard’, to feel that they are crafting their writing, so that they develop a sense of achievement and a better understanding of their tendencies and areas for development. However, how we create the opportunities to do this may need some thought. If, in the summer term, we spend a long time crafting and redrafting particular pieces, there may be few pieces overall. This isn’t a problem where the pieces are varied and demonstrate the ‘pupil can’ statements, but it can be a problem if it means there is not much recent evidence for all of the statements in the TAF. Please do allow pupils to keep drafting and editing work as part of valuable classroom practice – allowing editing and correcting opportunities during the writing process, as part of the lesson, is especially useful. This process supports pupils in developing their skills as independent writers. But do bear in mind that it isn’t necessary to have a final ‘finished’ draft copied out for every piece: we can use the ‘working documents’, where the edits are being made, to inform assessment judgements.

    The next key area that came up concerned the independence of writing. Often the discussion was around pieces of writing that were heavily modelled, which could lead to all pupils producing very similar writing rather than showing what they had taken from the model and applied independently. This can make it quite difficult to see what pupils can do in relation to the ‘pupil can’ statements, particularly if other pieces in the collection don’t show consistent use of these elements.

    For schools that use a ‘Talk for Writing’ approach, it is quite important that pupils reach the ‘invent’ stage of the ‘imitate, innovate, invent’ sequence, so that we can see what they are applying independently, especially in the later summer-term writing.

    This moves us on to other areas that relate to independence and are often the source of misconceptions.

    With regard to success criteria, we want to be clear that we regard these as central to effective classroom practice. The STA have issued guidance over recent years that has caused some confusion, and we are still seeing evidence of that. Where the STA advise against overly detailed success criteria, they are referring specifically to writing that will inform the final teacher assessment judgement (so, perhaps, those summer-term pieces). Furthermore, they are not suggesting we stop using success criteria, but rather that we should be cautious of success criteria that function more like a writing frame, with directive words, phrases or examples that pupils may come to rely on in their writing.

    It is entirely appropriate to use detailed success criteria and writing frames when introducing new text types or technical elements; however, it may be worth stripping those back when it comes to the pupil applying their learned skills in an independent piece (e.g. by not giving examples of the conjunctions or punctuation they should use). Of course, a rich learning classroom will have lots of examples on display and resources available, which a child may independently engage with and use – that is fine, since there is then less prescription and more independent choice reflected in the writing.

    Similar considerations apply to teacher marking/feedback. Early in the year, we may need to indicate to the pupil specifically where they are making spelling or punctuation errors. Over the course of the year we can rein that in a little, e.g. by moving towards putting dots or ‘sp’ in the margin, and eventually towards less directed feedback that makes children think a bit more about where the errors are – a ‘find and fix’ approach (e.g. “there are 2 basic spelling errors on this page – find and fix them”). This is, again, to help establish whether a pupil can spot and correct an error without too much guidance from the teacher.

    Peer collaborative improvement can be really helpful here. Reading whole pieces or sections to a talk partner and identifying parts to correct further supports the pupil’s independence. Short pauses or mini-breaks during the writing process give further opportunity to do this and prevent errors being repeated throughout a piece of writing.

    For more detail on ensuring that writing is ‘independent’, please see our blog from last spring, Declaration of Independence.

    Reading

    Overall, very few key points came up regarding the assessment of reading.

    On occasion, feedback referred to the book bands that some pupils were on. Sometimes we found that pupils were reading within a lower band than their scaled score suggested they could manage. This makes it quite difficult to assemble a range of evidence, beyond the test, to support the judgement that a pupil is at a particular standard.

    For a pupil to meet the comprehension elements of the expected standard, they really need to be able to read and understand books banded as ‘white’ (or equivalent), either on their own or in guided reading sessions. For the fluency element of the expected standard, we expect pupils to be on at least the gold band and to be able to read those books fluently.

    As always, questions came up about how the scores should be interpreted. It would be difficult for a pupil who was working towards the expected standard to get over the magic ‘100’ on the test, but it isn’t beyond the realm of possibility. In those cases, it is the teacher’s knowledge of the pupil that is most helpful in deciding whether the score indicates security at the expected standard or a borderline case that, when all the evidence is taken into account, may still be more accurately placed at working towards the expected standard. Similarly, it is possible for a pupil to score lower than 100 and yet still be at the expected standard – perhaps they just didn’t perform at their usual level in the test – and again, this is where the range of evidence and the teacher’s knowledge of the pupil come into play.

    Along similar lines is the question of which score indicates that a pupil is at greater depth. The STA have not issued any guidance on this: there is no particular score that marks the transition between ‘expected’ and ‘greater depth’.

    Maths

    Key feedback that arose from the moderation of maths was mostly around the evidence present in class work. Because the TAF is a secure-fit model, we do need to see evidence of all of the ‘pupil can’ statements, and for that we often have to look beyond the test. Sometimes a particular test question can provide evidence of a ‘pupil can’ statement, although bear in mind that not every statement will necessarily be represented on the test.

    Even where we can use test questions as evidence for the TAF, teachers sometimes need to consider not just whether a pupil’s answer is correct but how they arrived at it. For example, consider the statement “The pupil can add and subtract any 2 two-digit numbers using an efficient strategy, explaining their method verbally, in pictures or using apparatus (e.g. 48 + 35; 72 – 17)”. There are bound to be questions on the test that involve two-digit addition and subtraction, but the statement makes specific reference to the use of an efficient strategy. Therefore, if a pupil found the correct answers to these questions but consistently used a very inefficient strategy (e.g. drawing 48 little dots, then another 35 little dots, then counting them all to find the total), one would have to question whether they really meet that statement. Of course, this isn’t a problem if the teacher has other evidence showing that the pupil can calculate efficiently; it is only an issue if there is over-reliance on the test questions to evidence the statements.
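
    To make ‘efficient’ concrete, here is the sort of working we might hope to see (our own illustration, not an example taken from the STA materials). For 48 + 35, a pupil might partition: 48 + 30 = 78, then 78 + 5 = 83. For 72 – 17, they might count back through ten: 72 – 10 = 62, 62 – 2 = 60, 60 – 5 = 55, explaining the steps verbally, in pictures or with apparatus. Counting 83 individual dots reaches the same answer, but only the former approach evidences the ‘efficient strategy’ part of the statement.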

    Another key tip relates to the teaching of telling the time. At the expected standard, pupils need to be able to read an analogue clock on the hour and at quarter past, half past and quarter to the hour. At greater depth, reading the time to 5-minute intervals is required. For pupils to gain fluency in this skill, it is highly recommended that, throughout the year, the teacher and the pupils make regular reference to the classroom clock, so that telling the time becomes an embedded life skill. Confining the reading of clocks to one week on the maths plan is far less likely to be effective. If the skill has been embedded throughout the year, through talk, it can be very quickly and easily evidenced with a worksheet of questions, perhaps administered as an informal test, to supplement the teacher’s professional knowledge of which pupils can tell the time.

    The final key area mentioned in feedback was reasoning, and the range of different evidence we saw of it being incorporated into activities year-round. It is important to note that some of the ‘pupil can’ statements combine more than one element – sometimes knowledge of a set of facts coupled with evidence of reasoning with and applying that knowledge, e.g. ‘recall all number bonds to and within 10 and use these to reason with and calculate bonds to and within 20, recognising other associated additive relationships’ (expected standard) and ‘recall and use multiplication and division facts for 2, 5 and 10 and make deductions outside known multiplication facts’ (greater depth).

    The ‘recall’ elements are something teachers and teaching assistants can assess through oral maths starters, focusing on different pupils on different occasions. (On the whole, written work does not provide evidence of recall: even if a page of written number bonds or tables facts is correct, one cannot tell whether the pupil knew these facts or had to work them out somehow. On the other hand, written work showing lots of mistakes in these facts can be evidence that the statement has not been met.) The development of reasoning with these facts may also be largely oral – “If you know this, what else do you know?” – but would hopefully lead to some good written evidence in books, for example: “Starting from the number bond 7 + 3 = 10, write down as many related facts as you can” – 17 + 3 = 20, 7 + 13 = 20, 20 – 17 = 3, etc.
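
    To illustrate the kind of response that task might produce (a hypothetical pupil’s working, not a moderated example): from 7 + 3 = 10, a pupil might first record the basic fact family – 3 + 7 = 10, 10 – 7 = 3, 10 – 3 = 7 – then extend to bonds within 20 – 17 + 3 = 20, 13 + 7 = 20, 20 – 13 = 7 – and perhaps scale by ten to 70 + 30 = 100, with each fact derived from the known bond rather than calculated afresh.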

    For really good evidence of reasoning and application, you need genuinely interesting tasks that challenge children to think beyond their comfort zone. For users of the Herts for Learning ‘Essential Maths’ resource, the destination questions provide excellent rich evidence, but many other maths schemes and planning resources include similar material. For users of PA+, there is a useful folder of activities under ‘Assessment Tasks’ in the maths resource area.

    The Maths Team also have a range of helpful blogs on this topic. In particular, the recently published ‘Where’s the reasoning?’ and ‘Reasoning: where to start?’ provide a basis for discussing how to develop reasoning skills. The latter is a guest-writer account of implementing a reasoning-rich approach to teaching maths in a junior school setting, but it has helpful pointers for both Key Stage 1 and 2.

    To support Year 2 teachers with their assessments, mid-year cluster meetings will be happening just before the February half-term – details will be sent out shortly after the Christmas break. There will also be the usual summer term clusters for supporting Year 2 summative teacher assessment in April/May.