…is that they are either too tight, leaving no room to observe and reflect deeply, or too baggy, so that too many tiny details get noted down. All in all, in twenty years working in the FE sector, I have not found an observation feedback form template that really hits the spot for the teacher, the observer and the quality team. Yet somehow these forms exert a tyrannous control over people in colleges. Time and time again, in my work as a trainer, consultant and coach, I see forms with these common pitfalls:
1. A grid of intimidating boxes with headings which observers feel compelled to fill with something, even if they have a nagging feeling that many other relevant points about learning just don’t seem to FIT anywhere. If the purpose of the observer is to act as witness to the lesson and stimulate professional, reflective dialogue, such forms do us a disservice. A poor form like this can also slant the feedback discussion in ways that don’t reflect what was really seen. It is the tail wagging the dog! Part of the problem here is the dual purpose of observation as both a developmental tool and an auditing, evidencing mechanism. I am beginning to wonder whether we shouldn’t separate those two functions much more explicitly and create two processes, with two documents, to suit them.
2. An alarming checklist of micro criteria, sometimes requiring the observer to grade each tiny element of the lesson, e.g. behaviour management 2; use of group work 3, etc. These are common in colleges anxious to track everything and produce summary reports full of numbers that look meaningful. We need to think much more deeply about whether those numbers tell us anything significant or valid. Anyone who has sat on a thoughtful moderation panel looking at observation reports will be well aware of the huge challenge of getting an observation team to interpret what was seen in a consistent way. Teaching and learning are complex, rich practices that evade simple categorisation. When I have shown observers a video clip of the same lesson, their gradings have varied widely; imagine the variation when every micro criterion is graded separately.
3. A big blank page of white space for the observer to free-style their feedback. Observers who struggle to see the wood for the trees, lack experience or simply love detail can produce feedback reports in this format that leave teachers feeling muddled, de-motivated or overwhelmed, because there is just too much information to process. One memorable feedback form I saw in my own college some years ago listed 19 areas for development. Fortunately, a moderation panel caught it before it reached the teacher.
4. Monstrous forms generated by software packages, in which tiny aspects of the lesson are allocated to more general headings and scored somehow. This produces lovely Excel sheets and a false sense of having it all under control in neat boxes. My worry is that these skew the observation feedback discussion and can create problems of omission as well. They also de-personalise the process to a great extent, creating the aura of a scientific report out of what was really one person acting as witness to a lesson – a snapshot of learning and teaching in one class on one day, and so in fact a very personal, one-off experience.
So, what to do?
My initial thought is that observation teams need to be honest and open about the glitches, challenges and imperfections of their paperwork. I have spoken with many who see all of these negatives but feel somehow constrained by the bureaucracy and unable to challenge it. Together, they need to have the professional authenticity to raise this with the authorities prescribing the documentation, as it is the observers, coaches and teaching staff who are most engaged with and affected by it. Process and paperwork need review, as organisations evolve and contexts and requirements shift. In the current FE context, thanks to the insightful work of people such as Matt O’Leary, the graded lesson observation process is under the microscope and up for debate. It is time we questioned our processes and paperwork and took some principled, creative steps to enhance them.
Teachers seem to have little input into the observation processes and paperwork used “on” them in graded quality cycles. An open evaluation of how the process and paperwork affect teachers can be a very helpful place to start your review. I like to see observation feedback forms with spaces for the teacher to note evaluation comments about the process of being observed. If this evaluation data needs to be gathered anonymously, a survey link can be sent to people after their observation. Many teachers and managers have good ideas about improving these processes, but in most colleges I visit their views don’t seem to be collated.
It can be healthy to devise new paperwork, trial it and come back together to work through the glitches before rolling it out more widely. I also wonder whether, in future, some feedback will be held as sound files or video clips as well as, or instead of, Word documents. This might provide opportunities for richer, more personal and sophisticated reflections on the lesson seen and discussed.
Colleges vary widely in their staging of the observation process and the format of their paperwork, so why not contact partner colleges to share practice and reflect together on ways to improve it? Lesson observations are a really contentious and problematic aspect of college life in many settings, so the more we tackle them together, harnessing a range of thoughtful, professional perspectives and approaches, the better.