Accuracy will Never be Enough in Educator Evaluation

What answer do you think you will get when you ask the average teacher which they would prefer:

Have an evaluator observe and tell them what they did during the lesson?

OR

Have an evaluator help them understand how what they did during the lesson represented strengths in helping students learn, and to point out areas of future development that will allow students to grow even more?

There is no N/A, other, or leaving the question blank. What I will not accept as an answer to my question is that the average teacher wants neither of those things. In fact, studies (Tschannen-Moran, 2002 & 2014) find that teachers, especially early-career professionals, want feedback. Furthermore, research into the principles of performance improvement (Stone and Heen, 2014; Killion, 2015; MET, 2015) demonstrates the impact quality feedback can have on performance.

The answer is pretty obvious. Teachers want quality feedback about their practice. In order for them to trust this system, however, educators want to know the person providing that feedback is capable.

In their study, "Can Principals Promote Teacher Development as Evaluators?", Matthew Kraft of Brown University and Allison Gilmour of Vanderbilt University revealed that the quality of feedback teachers receive through the evaluation process depends critically on the time and training evaluators receive.

It is time for schools, districts, and states to seriously consider how they are training their evaluators.

Increasing Evaluator Capacity

For the past four years, ReVision Learning has worked with districts in four different states. Without hesitation, I can say that the most important element in considering the effectiveness of an evaluation model is not whether or not a teacher does or does not willingly accept feedback. Instead, it is the capacity of the evaluator to collect, analyze, and deliver that feedback that makes the difference.

So, how do we answer the call for professional learning with evaluators?

As ReVision Learning works with districts, we use our Cycle of Planning and Performance Improvement to help guide the development of educators in their delivery of service. That cycle supports the meta-level thinking and micro-level implementation required to shift the mindset of educators about evaluation while simultaneously shifting the programming that maintains that mindset. This applies directly to the development of evaluators.

Strategic Planning

When districts undertake strategic planning (Step One of the Cycle) that focuses on goals, measures, and practice to address learning throughout their educational community, whether for students, teachers, administrators, or the organization as a whole, evaluators need to be at the forefront of thinking and planning. First, and most simply stated, the role of evaluator is typically held by school leaders, and the blending of instructional supervision and evaluation needs to guide our progress toward the strategic goals we establish.

Crosswalk the Performance Standards

This also provides an opportunity to ensure that the performance standards that guide the work of evaluators are aligned with, and address, the highest-leverage leadership strategies and skills (Step Two of the Cycle). In doing so, we ensure that evaluation and instructional leadership practice are viewed as the core leadership responsibility and are inextricably connected. Districts connect their standards of performance to what we know is the most important aspect of leadership within schools: supporting improved classroom practice for student learning. As a shared understanding of leadership practice is established within a district, we can set the stage for evaluators to do their job more effectively through targeted support. ReVision Learning’s Supervisory Continuum has been aligned to national and state standards and used by over 800 evaluators to support this type of professional learning.


Aligned and Supportive Professional Learning

This support needs to come in the form of professional learning designs (Step Three of the Cycle) for evaluators that focus on key skill development aligned with research and literature on instructional leadership. Current professional learning designs that measure or claim to improve accuracy in observation are not enough. For evaluators to effectively provide meaningful feedback that supports improvements in teacher practice, feedback must become the focus. Evaluators need professional learning that targets their ability to observe teacher practice in relationship to student learning, review various applications of teacher practice, and develop written and verbal feedback skills that support teacher growth, not simply identify teacher action on a four-point scale. The MET Project Program Guide, released in August 2015, comes closest to capturing the type of skills that need to be developed through quality professional learning for evaluators. Learning Forward has designed Professional Learning Standards that can be used to support an understanding of what constitutes quality professional learning programming. ReVision Learning’s Collegial Calibration model has a proven record in CT of moving evaluator practice beyond accuracy in observations toward feedback for growth. District and state decision makers need to take a careful look at these three approaches as they plan professional learning programs in support of evaluator development.


Routine Feedback on Performance

Finally, feedback is not "one-way." Evaluators need ongoing feedback (Step Four in the Cycle) to improve their practice as much as the teachers they serve; they are learners as much as those teachers and their students. Ensuring that evaluators have the targeted, ongoing feedback they deserve to become the learning leaders they desire to be is essential. Greg McVerry, Co-Founder of ReVIEW Talent, notes that the roles of school administrators must evolve. In his post he highlights how the ReVIEW Talent Feedback System is strategically designed to allow feedback to become a consistent, integral part of a school’s culture, ensuring that evaluators and teachers alike are provided the feedback necessary to learn from their current evaluation system.

Concentrating now on the creation of evaluation systems that value feedback for growth over inspection of practice is paramount. It is the next evolution in educator evaluation.

Making Sense of Educator Evaluation

As an educational community, it is time to reconsider our current approach to educator evaluation. Changes made over the past five years have turned performance review into a series of events strung together to monitor teacher or leader performance within a school. This narrow definition has been promoted by well-meaning policy makers and reinforced by professional learning providers who have capitalized on opportunities to create compliance-driven tools and resources that merely reinforce poor supervisory practice.

It is now increasingly important that leaders of schools, districts, and even state departments of education seek resources and professional learning programs that ensure coherence, alignment, and meaning. Too often, districts and schools react to guidelines provided by their state department of education, resulting in a more complicated version of what they were already doing.

Instead of compliance, we need to focus on the Cycle of Planning and Performance Improvement. This focus allows educator evaluation to become a more complete pathway toward performance improvement in schools.


Districts set themselves on a path towards authentic school improvement when they invest time in rethinking their vision for students.  The planning associated with this rethinking should include the articulation of goals that include quality measurements of the highest leverage student outcomes.   Districts and schools that then align adult and organizational practices to those goals establish an opportunity to ensure coherence in their work.

Those who do not take this path remain reactive to what seems like an endless sea of needs, pulling human and fiscal resources in a thousand different directions. Focusing on the practices we know lead to the most important student learning allows each of our professional and organizational actions (including educator evaluation) to work toward the same end, learning, once again at the student, adult, and organizational level. Because aren’t we all supposed to be learning?

Educational organizations must make authenticity a priority when evaluating, creating, or selecting resources and professional learning designs for all personnel. What is most important is what these tools and services are designed to collectively generate: the allocation of resources, both human and financial, toward performance improvement in alignment with a district’s or school’s strategic initiatives.

Bottom line: regardless of what tools and resources school leaders may select and implement, cogent plans for improvement rooted in quality feedback and responsive professional learning must be in place in order to make sense of educator evaluation in our schools.

Below are some resources that align well to an approach of educator evaluation that is rooted in feedback for improvement.

High Quality Resources for Planning and Performance Improvement

Strategic Planning

Coherence Planning – Jonathan Costa, EDUCATION CONNECTION

Jonathan Costa, Director of School and Program Services, has clearly taken the lead in providing a practical solution for strategic planning. Unlike traditional strategic planning processes that encourage a diffusion of improvement energy and resources, Strategic Coherence Planning uses data-based planning assumptions to focus the process on an obtainable vision of a successful graduate and on the highest-leverage improvement processes that have been demonstrated over time to make the largest impact on student learning and preparation for information-age success.

Crosswalk to Teacher & Leader Performance Standards

Alignment Crosswalks & Rubric Implementation – ReVision Learning Partnership, LLC

ReVision Learning has worked hand in hand with hundreds of educators over the past five years to crosswalk teacher and administrator performance standards to the goals and measures outlined in their district strategic planning initiatives. Targeted training ensures a deep understanding of teacher and leader performance standards, cultivates the skills to deliver on those standards, and, ultimately, supports the development of supervision and feedback loops that provide targeted feedback for learning.

Professional Learning for Teachers and Leaders

Professional Learning Maps – Amplify Education, Inc.

Integrating findings into a map that includes targeted, supportive, fully vetted, and aligned resources creates an environment of continuous learning for educators. Through diagnostic surveys, Amplify Education’s Professional Learning Maps personalize learning by enabling individuals to understand and articulate the support they need, and by enabling leadership to determine how to invest time and resources in the ways most relevant to school and district improvement needs.

Routine Educator Feedback

Talent Feedback & Support – ReVIEW Talent Feedback System, LLC

Feedback is the single most important element of making evaluation practice meaningful. Once focus areas have been clearly established, educators need a tool to support ongoing interaction between supervisors and the educators they serve. The ReVIEW Talent Feedback System goes beyond the tagging of evidence from event-based evaluation models to provide meaningful educator feedback aligned to the highest-leverage performance standards, allowing feedback to serve as the first tier of ongoing professional learning.

Feedback on Teacher Performance is Not an Event

It took twenty years of dialogue and professional learning to begin to break down the walls in educational leadership practice that distinguished a principal’s role as an instructional leader from that of a manager of budgets and buses. The new question in educational leadership is: "What is the distinction between the role of a principal as a supervisor and his or her role as an evaluator?"

In many ways, we are rehashing the same conversation once again.

If I ask 100 leaders to describe what they do as supervisors, the majority will talk about coaching strategies and the ways in which they build relationships with teachers in an effort to ensure their on-going professional growth.
“I inspire and talk about our vision for the school and for instruction.”
“I coach best practice and support staff in their work.”
“I help staff make connections with stakeholders and the community.”

When you ask the same leaders to describe what they do as evaluators, they will typically answer in terms of events.
“I have a goal setting conference.”
“I complete three informal/formal observations.”
“I do a ‘review of practice’.”

If I ask those same leaders to identify what happens during those events, most will identify the form they complete first. With a few probing questions we may eventually get to how the dialogue is helping the teacher to reflect on his/her practice.

This is the fundamental issue clouding evaluation practice. Policy decisions made about evaluation practice have seemingly ignored educational and literature-based understandings of leadership supervisory practice. As a result, evaluation in schools has become a string of events rather than a system through which we lead our staff to new levels of practice.

Why the separation? Historically…

  • supervision is a long-term process, while evaluation is described as the monitoring of isolated practice;
  • supervision requires knowledge, skills, attitudes, and values that contribute to the effectiveness of the organization and its ability to teach and prepare students;
  • supervision is done to coordinate efforts that contribute to student achievement, while evaluation is done to monitor teacher practice.

Those evaluators we have worked with for the past five years who have not allowed these definitions to constrain their evaluation practice have been the ones most successful at moving teacher practice. Leaders who choose to define their role as an evaluator in terms of being "teachers of teachers" do not separate their role of supervisor from their role of evaluator. They have moved beyond the silos of traditional definitions and, instead, ensure that they…

  • unwrap teacher performance standards with their teachers and ensure the highest levels of clarity about the expectations of practice. Then they use those standards as the basis for a learning plan that over time supports teachers in continuous cycles of improvement.
  • do not rely on single events or even single forms of evidence and data about teacher practice to make assessments in support of teacher performance. They engage, instead, in authentic assessment of practice over the course of the school year and provide timely feedback that targets teacher learning.
  • don’t let the model get in the way. They do not let ineffective and ill-designed policies drive their practice. They don’t just do things right; they do the right thing. These evaluators take on their responsibilities with a transformative orientation rather than a compliance orientation and treat events as a means of ensuring dialogue toward improved practice.

The simple message here is that evaluators need to learn strategies that safely blur the lines between supervision and evaluation. I know everyone is talking about student outcomes being used to evaluate teacher performance, and politicians are beginning to placate potential voters during this election year by reducing the weight of standardized assessments or, as was done here in CT, eliminating whole tests from state programming. What I am talking about in my state is the one thing related to evaluation that does make a difference: quality feedback to teachers about their practice and how it impacts student outcomes. Feedback is the one element of evaluation that has quality foundational research tied to it and can, when done well, truly impact classroom practice toward student achievement.

Student Self-Assessment: Building Growth Minded Learners

Guest Author: Amy Tepper
Amy is a Senior Contributing Consultant with ReVision Learning Partnership

I recently reread the Voices of ReVision post Patrick wrote in January for the new year about goals. Did everyone catch that one? He’s a runner, and though I was captain of the cross country team in high school and can relate to some of his challenges and triumphs, I choose a less punishing form of workout these days: yoga.

Because we love what we do, Patrick and I see lessons for our teachers and administrators in everything that we do. And yes, there I was in yoga yesterday having an epiphany about not just goals, but how we measure our own progress toward them. My ongoing goal: to not think of work in yoga. In the past six weeks, I’ve been immersed in Collegial Calibration sessions with administrators across the state, specifically deconstructing and debriefing lessons related to the assessment indicators of the districts’ instructional frameworks. So here I am working on contortionist arm balances, considering how students learn, thinking about how long it took me to get up into the elusive headstand a few years ago…and how I continue to learn and master certain skills that seem insurmountable upon first introduction. How is it possible that I can accomplish these unbelievable feats?

I always have someone to model or emulate who talks through the process and steps, outlining and demonstrating the various odd uses of limbs and abs. I watch and try. I notice that kicking my feet up wildly does not seem to be working. My teacher notices too and points out that kicking your feet wildly will not work. I adjust. She adjusts her suggestions. I persevere. I triumph. What was at play to allow this to happen?

Of course, a combination of factors contributed to my success: a growth mindset (no disdain for those who can already do this and a definite willingness to look ridiculous in front of a group…), a positive environment, an understanding of what it looked like to be successful, self- and teacher monitoring, adjustment, and feedback. This sounds vaguely familiar. Hold on. Here I am in tree pose thinking these are all attributes of 3c on the CT Common Core of Teaching: how we have defined effective assessment cycles and support in our classrooms (3c.1).

I have asked my Collegial Calibration participants two key questions every day: "Why do we need to present our students with (or work with our students to create) criteria?" and "What exactly can be considered criteria?" We all arrive at the determination that if we are truly developing lifelong learners, we must share and ultimately turn over the learning to our students. Students can be guided in selecting or designing the best ways to demonstrate their own learning and determine related criteria. If we can show them what mastery looks like, they can learn to self-monitor and self-assess progress toward that goal. Ideally, we will do that for the rest of our lives as we work to accomplish tasks and challenges. This is all encompassed in one concise attribute or indicator on a framework, but it has so much depth and infinite benefit for our students.

How, as teachers, can we show students what mastery looks like? The immediate answer I receive in my groups is "rubrics." And when I read feedback provided to teachers, we are consistently telling them they need to have "rubrics." We have to expand our thinking as instructional leaders and teachers: there are many ways to convey criteria to students. It would be downright weird, and clearly not very Zen, to stop and consider a rubric in yoga. Providing an exemplar, a model through a think-aloud, or even a non-example all fit this bill, especially for learning tasks that result in a performance or the creation of a product. And what about checklists? In math, we encourage students to use strategies such as the inverse operation or plugging in a number for a variable. We are teaching them to ask themselves, "Does that make sense?" "How can I check myself?" Then, as we float and monitor, we can remind students in our feedback to use the tool, strategy, or resource, building independence and consistently messaging high expectations and a belief in our students’ ability to move forward.

As instructional leaders, we must realize as we coach and support our teachers in the area of assessment that there are many layers and interconnected factors that usually reside in planning. Assessment does not exist in isolation:

  • Lessons need to be scaffolded and logically organized to advance students toward specific daily learning targets.
  • Assessments need to be directly aligned to provide evidence of progress toward these targets.
  • Teachers need to be able to define what that progress and mastery should look like.
  • They need to create environments where students will take risks and persevere toward that mastery.
  • We must teach our students HOW to use the criteria to monitor and move forward.

If our end goal is for students to self-assess, simply sharing the criteria with them is not enough. Last week, I watched a teacher model how to break down text with different colored markers into main idea (stated) in red, supporting details in blue and transitions in green. Students then followed along, developing an example that they could take into independent work. However, when observing them on their own, I heard a student ask her partner: “Does everything need to be in blue?” Her colored example was not effectively serving as a tool for a self-check. The tool was suitable, but the coaching point resided in the instructional strategies and scaffolding in the mini-lesson that led up to this moment.

As instructional leaders, it is important to engage with students to objectively and accurately determine the effectiveness of any practices. During classroom visits, we can ask students questions such as, “How do you know when you have it?” or “What should this look like or contain?” (ex. when working collaboratively or writing an essay comparing two texts) “How will you know you have the right answer?” (ex. when solving math problems or examining laws in science) or more specifically, “How are you using this checklist?” The answers provide the key for our feedback that will further increase the teacher’s impact and ability to build independence.

As students are given the opportunity to build independence through scaffolded supports and an understanding of how to effectively utilize resources and tools, they develop the ability to problem solve on their own. They become willing to take risks, grapple with complex ideas or tasks, think outside the box, and persevere. By encouraging these steps and building the capacity around them we are promoting a lifelong growth mindset and messaging everyday, “The journey is the reward.”

I have had to learn to embrace that mantra in yoga. I am now off to practice "side crow." I am feeling good about this one. It is one of my favorites because, when I have it, I am in a pose resembling a 1980s breakdancing stance.

Please feel free to respond to this post with various methods or tools you use or have observed in use in establishing criteria for students.

 

ReVision Learning’s Collegial Calibrations Overview

ReVision Learning’s Collegial Calibrations professional learning model is designed to support the development of the knowledge and skills required for deeper instructional analysis and quality feedback to teachers about their practice.