I am becoming more and more convinced that the role of graded lesson observations in the FE sector should be reconsidered. In many colleges, they are the main measure of teacher performance and a key catalyst for staff development. Yet the process of graded observation itself is riddled with weaknesses, and the business of assessing and developing teachers is a more complex one than graded cycles suggest.
My niggling suspicion is that colleges are attached to it because it can create reams of data and trackable steps that feed the auditing monsters of the sector. It allows quality teams to show that they are “taking action towards improvement.” Yet how confident are we that the graded observation model is the appropriate tool for assessing the quality of T&L? What evidence do we have that the use of this model leads to sound and lasting improvements? Here are just a few of the many concerns I have about this model.
The snapshot: Teaching is a complex, rich craft, more art than science. The one-off snapshot nature of the graded observation means that there is very limited evidence on which to form a valid judgment about a teacher’s skills and approach. Too often teachers are labelled as “grade 3 practitioners” off the back of a one-hour observation and action planned within an inch of their lives as a result. This may be entirely inappropriate for them, and a waste of valuable development resources, as the issues arising in that single lesson may not be the most significant ones for developing that teacher’s practice. And that is to say nothing of the problems created by observers with poor skills in the role.
We need a much broader review of a teacher’s practice before forming any judgment of their strengths and areas for development. A combination of short, regular learning walks, 121s with teachers, feedback from learners, a look at achievement data over time, videoed lessons with reflections from the teacher, and reviews of lesson plans, SoWs and course files would tell us much more than a graded observation alone.
If we are to make this commitment to a more thorough and thoughtful approach, it might mean taking a more differentiated approach to our resources:
1. A cycle of graded observation for each team followed by a year of other development work on experimental, innovative work and sharing good practice, so that there is space for growth and enough observers to go round
2. Targeting graded observation on the basis of need, e.g. prioritizing graded observations early in the year for teams or individuals whose performance is a concern (because of student achievement data, for instance); waiting until later in the year to observe areas with a strong profile, or even choosing to omit them for a year while other dissemination activities are focused on those areas. People can still be developed without being measured.
Hoop jumping: it is widely known that many teachers find graded lesson observations stressful, time-consuming and a distraction from the main business of working with learners. As graded observations have become a key part of performance management, there is the feeling of being judged, the fear of failing the test, the awareness of a great power imbalance in this interaction. In the new Gove universe, this pressure is likely to increase.
There can be a range of negative effects of the graded process, such as teachers trying to second-guess how to get the magic grade 2, or taking a checklist approach to the “hot” methods of the moment in order to get through the assessment, and therefore not delivering the kind of lesson they generally give. It is all about hoop jumping and not about reflection and genuine professional review.
In preparing for the lesson, there is often too much focus on paperwork and not enough thought about delivery to those learners for that specific lesson because of the sense of surveillance involved. The graded process can throw the teacher off their stroke and lead to behaviour that is not representative of their practice. How many of us have had an observed lesson where we honestly felt we didn’t react to learners in natural ways, due to the presence in the corner?
So what exactly are we observing here? I think we are rarely seeing an authentic example of classroom practice for the practitioner. Yet we form a serious professional judgment with ramifications in this way. We need to look at ways of getting a more authentic and sophisticated view of how teachers interact with learners and plan their learning. This is a delicate operation if we are to get enough information to form a more accurate picture of the teacher without them feeling harassed and policed. If we are truly committed to getting a more rounded and accurate view of T&L in our colleges, we need to take this challenge seriously. This would involve a dialogue in colleges about how best to do this, with resources redirected to service the enhanced model and staff opinions canvassed at all levels. In my view, we should actually be asking whether graded cycles are what we need, or whether dispensing with them could offer some very positive benefits and a better deployment of resources. More of this heretical view in a later blog….!
The follow up: I am involved in training observers in developmental feedback skills, and it is apparent that many lack the skills needed to write useful, clear, insightful reports and to carry out a professional dialogue in which there is reflection and teacher-led development. This is not their fault: in general, they have been trained by Ofsted trainers to focus their attention on assessing the standard of the lesson rather than on communicating how to develop and improve it.
I often see a “grade and run” approach, where assessment and measuring is the primary focus, as if the written report, with nice, comfortable numbers all over it, will “improve” the teacher almost by magic. From my friends in Quality teams, I am hearing that this process often doesn’t deliver.
I see many colleges providing scant follow-up after an observed lesson, which I am afraid shows a lack of professional respect both for the notion of quality improvement and for the individual involved.
There is nothing sound about this. At best it is dubious; at worst I think we are wasting valuable time and effort on a process that is not fit for purpose. Some colleagues from the sector have told me they feel the action plan is a bureaucratic form of lip service to the process, rarely followed up with time and effort spent with the teacher.
On a much more encouraging note, I see positive development happening in colleges where coaching conversations are enabled and resourced through time and roles. They can happen at peer level with trained T&L coaches, and through management interactions when curriculum managers are trained to use coaching approaches in meetings and 121s. OCVC have shared their experiences of the benefits of coaching within the graded process in a video clip.
Encouraging teachers to reflect, think differently and embrace new approaches should be part of an appropriate follow-up process, and coaching approaches can create depth and momentum in these conversations. After any observation, especially one where different sources of info have been triangulated, the appropriate actions may involve peer observation, lesson plan reviews, visits to another college, action research or reflection after videoing your own lessons. All of these activities can be enhanced by a skilled coach.
In conclusion, I think we should have some robust conversations to interrogate our graded observation processes and stop swallowing the model whole. We need to seek out reflections from a range of stakeholders about how well they work and reflect on ways to enhance them. We need to integrate them with a richer and more sophisticated approach to action planning and weave in powerful coaching approaches. We could also do well to question whether we need them at all.