Impact, Impact, Impact

Having initially focused on intent and then implementation, Ofsted is increasingly moving toward impact as a key area of focus during inspections. I saw this in recent inspections at three schools I work with regularly: inspectors looked at the data the school showed them and asked how leaders used it, a shift away from earlier inspection practice under this framework. Their train of thought is moving very much towards, “So what is the impact of…?” when discussing aspects of the school’s work with leaders. The inspection handbook has also recently been amended slightly, adding sub-headings to the Quality of Education grade descriptors that include the word ‘Impact’.

This highlights a potential future issue in just about every school I work with: the accuracy of summative assessment information. As we approach the end of another school year, teachers and subject leaders will collect and collate summative data for each class. This normally involves gathering the percentage of pupils working below, working at and working above age expectations in each subject. In reading and maths, this is often supplemented by data collection points at the end of each term and usually includes some form of test results alongside teacher assessment. All other subjects, and writing, are typically based on teacher assessment alone, although some schools use assessment tasks and tests built into their unit plans.
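To make the collation arithmetic concrete, here is a minimal sketch in Python of how those percentages might be computed; the judgement labels and class data are illustrative assumptions, not any school’s actual records:

```python
from collections import Counter

# Hypothetical teacher judgements for one class in one subject,
# using the three categories described above.
judgements = ["below", "at", "at", "above", "at", "below", "at", "above"]

counts = Counter(judgements)
total = len(judgements)

for band in ("below", "at", "above"):
    pct = 100 * counts[band] / total
    print(f"Working {band} age expectations: {pct:.0f}%")
```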

However, having discussed this data with numerous subject leaders and senior leaders in a wide variety of schools, I have yet to find anyone who would put their mortgage on its absolute accuracy, nor can many leaders clearly state what they do with that information.

Some subject leaders have moved further on with this. Any gaps the data suggests for groups of pupils not achieving expected standards (based on the school’s curriculum model) form the basis for three lines of enquiry. Is it due to:

  • the teaching
  • the curriculum, or
  • the pupils?

The last one sounds harsh, but there could have been high mobility in that class, an increase in SEND need, long-term absences or various other reasons why some pupils did not achieve as well as expected, given their starting points that year.

If it was because of the curriculum, what is being done to address this? Obviously, subject leaders need to bring any issues and possible solutions to the attention of senior leaders.

If teaching was the issue, then what precisely was the problem? Was it subject knowledge, lack of time, or a misunderstanding about the level of challenge…?

I recently worked with a small multi-academy trust (MAT), looking at the data that core subject leaders were collating. Their system presents each class as a coloured bar, coded for pupils working well below, just below or at age expectations, or at greater depth (GD).

The CEO and I met with leaders from the schools over a couple of days, with each meeting including a discussion about pupils’ progress and attainment.

We soon realised there needed to be greater consistency in the information different schools and leaders put into the system. Some were too cautious, so pupils appeared to have gone backwards since the previous year. Some entered all pupils as working well below at the beginning of the year, not realising that termly data was supposed to be based on what had been taught so far, i.e. whether pupils were on track for the end of the year. Some had no pupils at GD, or removed pupils who had previously been there, because they thought that judgement had to wait until the end of the key stage. And so the inconsistencies continued.

Below, I have included the example we put together to explain what the ‘bar’ in the tracking should represent. It is not definitive or exemplary, but it might prompt you to consider any system you have and how rigorously and consistently it is being used.

All assessments are based on curriculum content taught so far. They are not based on end-of-year content until, obviously, the end of the final term in the summer.

The expectation would be that pupils who were Green or Blue at previous data points stay Green or Blue respectively, although some pupils could move from Green to Blue.

Likewise, there should be pupils who move from Orange to Green as the support/catch-up/adaptation/interventions have an impact over time.

Any significant changes in the bands from one data drop to another, or from one year to another, should be cause for discussion and investigation by subject/phase leaders in the first instance.
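As a rough illustration of how a tracker could surface those changes automatically, here is a hedged Python sketch. It assumes the band ordering Red (well below) < Orange (just below) < Green (at) < Blue (GD); the post names Orange, Green and Blue, while ‘Red’ for well below, the pupil names and the data are my assumptions:

```python
# A minimal sketch, not the MAT's actual tracker. Assumed band order:
# Red (well below) < Orange (just below) < Green (at) < Blue (GD).
BAND_ORDER = {"Red": 0, "Orange": 1, "Green": 2, "Blue": 3}

def flag_band_changes(previous: dict[str, str], current: dict[str, str]) -> list[str]:
    """Return pupils whose band moved down between two data drops.

    Green and Blue pupils are expected to stay put (or move Green -> Blue),
    so any downward move is flagged for subject/phase leaders to investigate.
    """
    flagged = []
    for pupil, band in current.items():
        prev = previous.get(pupil)
        if prev is not None and BAND_ORDER[band] < BAND_ORDER[prev]:
            flagged.append(f"{pupil}: {prev} -> {band}")
    return flagged

# Hypothetical data drops for one class.
autumn = {"Pupil A": "Green", "Pupil B": "Orange", "Pupil C": "Blue"}
spring = {"Pupil A": "Orange", "Pupil B": "Green", "Pupil C": "Blue"}

for line in flag_band_changes(autumn, spring):
    print(line)  # Pupil A: Green -> Orange
```

A flag here is a prompt for discussion, not an automatic error: as noted above, there can be legitimate reasons for a change.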

As a result of these discussions, going forward the MAT subject leaders will carry out light-touch sampling at the end of the year to moderate teacher assessments.

At the end of term, senior leaders will choose three or four foundation subjects. The respective subject leaders will then carry out a book scrutiny and gather pupil voice from three or four children in each year group to see whether the evidence supports teacher assessments. This will give the school assurance about the validity and accuracy of its assessment information. The next step will be deciding what to do with that information, but that’s for another blog.
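By way of example, this small sketch shows one way a subject leader might draw that sample reproducibly in Python; the year groups, pupil names, sample size and seed are all illustrative assumptions rather than part of the MAT’s agreed process:

```python
import random

# Hypothetical class lists keyed by year group.
year_groups = {
    "Year 3": ["Ava", "Ben", "Chloe", "Dan", "Esme", "Finn"],
    "Year 4": ["Grace", "Harry", "Isla", "Jack", "Kay", "Leo"],
}

rng = random.Random(2024)  # fixed seed so the sample can be re-drawn if queried

for year, pupils in year_groups.items():
    sample = rng.sample(pupils, k=3)  # three or four books per year group
    print(year, "->", sample)
```

Drawing the sample at random, rather than letting teachers choose the books, is what gives the moderation its assurance value.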

I’m sure other schools will have alternative and probably more effective methods for validating the information they collect. If so, please share!

Continue the Conversation

For more information about Tim’s courses, click here.

To book Tim or one of our consultants to work with your school, please complete the enquiry form or email us at consultancy@focus-education.co.uk

You can find us on X (Twitter) @focuseducation1, or contact us by email at customerservice@focus-education.co.uk.
