So we’ve assessed – what now?

Just about every school I have worked with this year has been developing its summative assessment systems and practice, especially in foundation subjects. Now, as we finally approach the summer break, that information is being finalised, ready for subject leaders to review. Yet very few subject leaders have had much (or in some cases any) guidance and support in using the information from the school's tracking system, or wherever the summative information is kept.

As a profession we don't always arrive with the skillset for data analysis. In his book "How to Lead", Jo Owen summarises research comparing the average skills of three professions: teachers, civil servants and city traders.

On a scale of 10 down to 1, teachers scored 9/10 on people skills. Civil servants managed 4/10 (so perhaps they are not so civil after all?). And city traders scraped a 2/10, nicely reinforcing any stereotypes you might have about their personalities…

We also scored well on organisation skills with 8/10, the same as civil servants and smashing those city traders with their paltry 4/10. To be fair, try teaching a class of primary children without any organisational skills and it isn't going to end well. Or start well, for that matter.

However, when it came to analytical skills and action focus, teachers scored 6/10 on average. Not bad at all, but indicative of the need for support and clear guidance for subject leaders in how to interpret assessment information and formulate a clear idea of what to do next in action planning. After all, analysing data is unlikely to be something most teachers have much experience of as they come into the profession.

The other potential issue is that, if some direction and support is provided, colleagues can get bogged down in the detail. Spending hours working out the percentage of girls who forgot their PE kit on a Wednesday, support Derby County and are working at age expectations in DT by the third week in September, compared with boys who are left-handed and wear size 5 trainers, is not really that useful. That kind of extreme 'drilling down', albeit perhaps slightly exaggerated in my example, is the lingering hangover of the inspection models of yesteryear, still causing something of a headache in some schools to this day.

So what could subject leaders be looking for in summative information?

In the schools I have visited where it works well, the focus is on the bigger picture. After all, we can’t be expected to provide 1:1 teaching for pupils in every subject.

Typically, class teachers are asked to assess in a way that summarises pupils into three groups in each subject: those working below, those working at, and those working above the school's own end-of-year curriculum expectations.
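For anyone who likes to see the idea concretely, here is a minimal sketch in Python of what that three-band summary amounts to as data. The pupil records, class names, subjects and band labels are all hypothetical, and the language is just for illustration; most schools would hold this in a tracking system or spreadsheet rather than code.

```python
from collections import Counter

# Hypothetical summative records: one band per pupil per subject.
# The three bands mirror the model above: working "below", "at" or
# "above" the school's own end-of-year curriculum expectations.
pupils = [
    {"name": "Pupil A", "class": "3B", "history": "at",    "dt": "below"},
    {"name": "Pupil B", "class": "3B", "history": "above", "dt": "at"},
    {"name": "Pupil C", "class": "3C", "history": "at",    "dt": "at"},
]

def band_summary(pupils, subject):
    """Count how many pupils sit in each band for one subject."""
    return Counter(p[subject] for p in pupils)

print(band_summary(pupils, "history"))
# Counter({'at': 2, 'above': 1})
```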

In some schools there is no need even to list or name those working at age-related expectations; they simply go through on the nod, deemed ready for the following year by default. In other schools it is more nuanced, depending on the tracking and/or summary system used.

Useful questions for subject leaders, looking at this information for each class, include:

  • Do any classes or year groups stand out in the data? Where are outcomes strongest and why is this?
  • Do any groups stand out as working below across the school or key stage? e.g. disadvantaged pupils, SEND pupils, boys, summer-born pupils?
  • Are there noticeably more pupils working below expectations in one class than in the other class/es in the same year group? This needs to be explored to find out why: it might be that the starting points of pupils in one class were lower, that there are higher numbers of pupils with EHC plans, mobility, gaps in staff subject knowledge, or that one teacher is more rigorous in their assessments. The same questions can be asked if differences between year groups are evident (see the sketch after this list for one way to flag such gaps).
  • How do school leaders and subject leaders know the information has validity? Some schools moderate that judgement by sampling a few random books, and gathering pupil voice, from pupils deemed 'working at' by class teachers. This can be really useful in providing validation, and it helps the subject leader to gain a picture of their subject across the age range of the setting.
  • Many schools include a Greater Depth summative judgement, and this needs to be used with caution. After all, there is no national curriculum descriptor for this in foundation subjects, so what is it based on? Do all teachers really have a clear picture of what a really solid 'expected' looks like compared to whatever the school deems GD to be in all subjects? And can a child be GD in a subject before finishing a key stage? After all, a child might be particularly adept at gymnastics and cricket, and on that basis go into the tracking as GD at the end of that year. The following year, when they don't excel in swimming and netball, they 'go backwards' as far as the data is concerned. It is worth deciding as a school how this is addressed uniformly across the setting.
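To illustrate the class-comparison question above, here is a rough sketch, again in Python with entirely made-up figures, of how whoever manages the tracking data might flag a class with noticeably more pupils working below expectations than its parallel classes. The 10-percentage-point gap is an arbitrary assumption rather than a recommended threshold, and the flag is only a prompt to investigate, not a verdict.

```python
from statistics import mean

# Hypothetical percentages of pupils working below expectations,
# per class, in one subject. Class names assume the year number
# comes first, e.g. "3B" is a Year 3 class.
below_pct = {"3B": 12.0, "3C": 31.0, "4A": 15.0, "4B": 18.0}

GAP = 10.0  # percentage points; an arbitrary cut-off for "noticeable"

# Group the classes by year.
groups = {}
for cls, pct in below_pct.items():
    groups.setdefault(cls[0], []).append((cls, pct))

# Flag any class sitting well above its year-group average
# (more than GAP/2 points, i.e. a GAP-point gap between two classes).
for year, classes in groups.items():
    avg = mean(pct for _, pct in classes)
    for cls, pct in classes:
        if pct - avg > GAP / 2:
            print(f"Year {year}: class {cls} has {pct:.0f}% of pupils "
                  f"below expectations vs a year average of {avg:.0f}% "
                  f"- worth exploring why")
```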

Ultimately, we still spend a lot of time collecting data in schools, although thankfully far less than we used to. Even so, the important thing is what we do with it, and subject leaders will probably be grateful for some guidance and support in this area. And a summer break.
