A Sense of Hope for CT Education
As I listened to today’s news on the car ride to school with my son, a tremendous sense of optimism for CT education came over me. In a decision that could fundamentally reshape public education in Connecticut, the state was ordered on Wednesday to make changes in everything from how schools are financed, to which students are eligible to graduate from high school, to how teachers are paid and evaluated. My son became my initial researcher during our car ride, looking up articles and organizing an outline for this post (real-world instruction for sure). While all elements of the court’s decision are indeed “fundamental” to reshaping CT education, my investment in educator evaluation and my organization’s work in over 48 CT districts and in four different states mean the last element of the court’s decision generated the greatest sense of hope.
According to a NY Times article, “The judge…criticized how teachers are evaluated and paid. Teachers in Connecticut, as elsewhere, are almost universally rated as effective on evaluations, even when their students fail. Teachers’ unions have argued that teachers should not be held responsible for all of the difficulties poor students have. And while the judge called those concerns legitimate, he was unconvinced that no reasonable way existed to measure how much teachers managed to teach.”
What needs to happen now is to take this opportunity to address the design of educator evaluation originally presented to districts and to provide better training and support so that CT evaluators can improve their implementation of educator evaluation.
What’s Gotten in the Way
The question we all need to be asking is what has gotten in the way, previously and over the past four years, of the creation and implementation of educator evaluation. To be clear, this is not one of those simple attempts, so common in blog posts, to assign blame. Instead, I sit to write today to highlight three primary reasons we need this change, in the hope of providing guidance on the new path toward the “reasonable ways to measure how teachers manage to teach” and “how educational leaders manage to lead.”
Reason One: We started with an ill-conceived definition of evaluator capacity.
As the State Department and districts began to implement guidelines from the Performance Evaluation Advisory Council (PEAC), they worked from the premise that if an evaluator could “accurately assess teaching practice,” they would be able to support teacher effectiveness and improvement. This is no fault of theirs, since they were simply working from the research and literature available at the time, mostly the Measures of Effective Teaching (MET) studies. The fundamental flaw is that in no way did these studies examine how the accurate assessment of practice and the corresponding training models would turn “accurate evidence” into feedback that ensures growth for a teacher. These studies have since expanded the definitions of evaluator capacity, and PEAC needs to consider this new information to restructure how we define evaluation guidelines.
I begin with this reason because much to their credit, the CT State Department of Education has already taken steps to change what it means to be an effective evaluator. The Talent Office has introduced a new training model for evaluator capacity that focuses on feedback for learning rather than inspection of practice.
The greatest impact will come from the expansion of this definition of evaluator capacity. Evaluators need to be measured not only on how accurate they are but also on their ability to…
- observe for and collect specific evidence,
- align that evidence to the teacher performance expectations outlined in the CT Common Core of Teaching,
- focus evidence collection on the impact of the teacher on student learning both in the moment and over time, and,
- organize that evidence into objective, actionable feedback that can ensure teacher growth.
This is the intent of the CT State Department of Education in making the change and I applaud them for that effort. The concern of course is that the recent funding issues for the state of CT may in turn have an impact on the reach of these services to the districts. The change is underway, however, and with the support of policy-makers, we can continue to ensure that every teacher has access to a high quality evaluator who can provide feedback for learning.
It is important to note that this capacity discussion applies to those who are evaluating our building based leaders as well. Remember, we provide supervision and evaluation to our leaders (more often than not these are the evaluators of teachers) through our educator evaluation model as well. Aligning our training models for evaluators is an absolute must if we wish to experience a better evaluation program overall.
In addition to changes in the training and development of our evaluators, we need to give careful consideration to the number of teachers we ask a building-based leader to evaluate. At times this number can reach 30 teachers, which, given the complexities of the work, is not, to put it in the court’s words, part of a “reasonable way to measure how teachers manage to teach.” Legislation needs to support the State Department of Education in a careful examination of the structures and policies that ensure evaluators can provide deep, impactful feedback.
Reason Two: We are applying inaccurate and sometimes altogether invalid data when we connect teacher practice to student outcomes through Student Learning Objectives.
The discrepancy between teacher ratings and student performance cited by the CT judge is the direct result of two flawed approaches to the analysis of student achievement in the existing educator evaluation model. First, as stated, evaluators need better training to ensure that their measurement of classroom practice includes a quality analysis of practice with a focus on student learning. Overinflation still occurs in our rating of a teacher’s classroom performance (which constitutes 40% of a teacher’s overall score) because the evaluator rating the teacher is not equipped, whether by time or by skill, to complete the task effectively. What we have also seen, however, and I am certain the data the State is looking at can verify this, is that even when an evaluator assesses practice rigorously and rates classroom performance below a proficient level, a less than rigorously designed and, once again, more than likely invalid set of Student Learning Objectives (SLOs), which constitute 45% of the teacher’s overall evaluation, inflates the scores.
Simply put, it is either a poorly assessed and invalid set of data provided by the evaluator about classroom practice (40%), or an invalid SLO (45%), or some combination of the two that is creating the discrepancy. Take, for example, the following SLO one might see in a teacher’s plan:
Other details about what elements of the reading assessment constitute “reading skills” are provided in the plan; the real issue, however, comes in how the teacher’s rating is calculated based on student performance against this goal. Let’s say this elementary-level teacher has 30 students in their class. Once results are in, student performance is reviewed based on a locally driven formula. Based on the number of students achieving one year’s growth, the teacher receives a rating from Below Standard through Exemplary (1-4). Typically, a percentage is applied to the percentage of students identified in the SLO. In other words, if 100% of the 80% of students targeted in the SLO make one year’s growth, the teacher receives an Exemplary rating (that would be 24 of the 30 students in the class meeting goal). A “Proficient” teacher would fall into the range of 75%–80% of those students meeting goal (meaning at least 19 students met goal). So, in this situation, 11 students can go without one year’s growth and the teacher will still receive a “Proficient” rating for 45% of their overall evaluation. Even if the evaluator has rated the teacher’s performance and practice (40%) in the “Developing” range, the teacher with 11 of 30 students not meeting one year’s growth on key reading skills will be deemed “Proficient” overall. This is one of the key reasons we see these discrepancies in the data across states.
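The arithmetic above can be sketched in a few lines of code. To be clear, this is a minimal illustration, not any district’s actual formula; the function name and the rating thresholds are assumptions drawn from the example (an SLO targeting 80% of the class, with a Proficient band beginning at roughly 75% of that target).

```python
# Illustrative sketch of the SLO rating arithmetic described above.
# The thresholds are hypothetical, inferred from the example: the SLO
# targets 80% of students making one year's growth, and the rating band
# depends on what fraction of that target is actually met.

def slo_rating(students_met_goal: int, class_size: int,
               slo_target_pct: float = 0.80) -> str:
    """Return an illustrative SLO rating band (assumed thresholds)."""
    target = slo_target_pct * class_size        # e.g. 24 of 30 students
    fraction_of_target = students_met_goal / target
    if fraction_of_target >= 1.0:
        return "Exemplary"
    if fraction_of_target >= 0.75:              # the ~75%-of-target floor cited above
        return "Proficient"
    if fraction_of_target >= 0.50:
        return "Developing"
    return "Below Standard"

# From the example: 19 of 30 students meeting goal (11 falling short of
# one year's growth) still yields a "Proficient" rating for 45% of the
# teacher's overall evaluation.
print(slo_rating(19, 30))   # Proficient
print(slo_rating(24, 30))   # Exemplary
```

Under these assumed cut points, more than a third of a class can miss a year’s growth while the SLO component still reports proficiency, which is exactly the inflation described above.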
In any situation, it is the design and implementation of the evaluation model that comes into greatest question, not necessarily the idea of using student achievement in the evaluation of an educator.
Potential solutions lie in designing whole-school or grade-level/subject goals in which all teachers and educators in the school are tied to overall student achievement levels, in alignment with the strategic needs of the district. Additionally, developing district capacity to align, design, and analyze assessments against specific learning outcomes for a teacher’s students needs to be a focus. Reliance on a single standardized assessment is flawed not only because such an assessment cannot adequately represent teaching quality, but also because the current structure and implementation of SLOs still leave too many students behind. Moving to grade-level or whole-school goals has its own flaws that still need to be considered; at the very least, however, we will ensure that we are promoting the village’s responsibility, and not just the individual’s, in ensuring the success of our students.
Reason Three: We have not made learning the true objective of educator evaluation.
One of the reasons this court case includes a decision about educator evaluation is that we (adults) have not viewed evaluation as an opportunity for learning. The idea of being evaluated by someone else is often met with skepticism or downright mistrust of its purpose. Old paradigms of “us versus them,” or the belief that somehow coaching cannot happen through the evaluator role, are at the center of this thinking and need to be confronted.
The CT State Department of Education, through PEAC, needs to make changes to the policies and structures of the educator evaluation model – the way in which we define evaluator capacity and the way in which student outcomes are designed and measured. Fundamentally, they also need to engage in a dialogue about learning, clearly outlining the values and beliefs we hold as educators about our own growth mindset and willingness to learn, and the notion that each student’s growth is not only needed but expected.
Gratitude can transform common days into thanksgivings, turn routine jobs into joy, and change ordinary opportunities into blessings.
~ William Arthur Ward
This past month, ReVision Learning Partnership celebrated its Five Year Anniversary.
I wanted to take a moment to stop and reflect upon the importance of those around me who have helped keep its vision and purpose alive.
I started with the simple idea of creating professional learning that could impact the way educators enVisioned what they do and, more importantly, why they do it. I sought to provide professional learning for educators built on the understanding that best practice is best realized in collaborative, supportive relationships.
What I understood, and continue to know for certain, is that reaching that very goal required cooperation and commitment, hence the “Partnership” in our very name.
Upon reflection, there are three primary reasons why ReVision Learning has been able to reach this milestone and is continuing to plan for its next five years.
Organizations as Partners
We have forged partnerships with several organizations that share a common mission and purpose. The organizations that have been open to, and continued to maintain, collaborative, supportive, learning-focused relationships built on a foundation of mutual benefit and mutual trust are the ones that have helped us make a difference. Organizations such as the CT Association of Schools, ReVIEW Talent Feedback Systems, the BERC Group, Professional Learning Maps, and Amplify Education (a new and growing relationship) support our purpose because they, too, demonstrate a commitment to ensuring that learning is at the center of all they do with and for their clients.
Clients as Partners
We view our relationship with our clients as one of partnership. There is no ivory-tower approach in our work with clients. Instead, we approach each new commitment with a school, district, or State Department of Education as a partnership in learning for all members of the educational community. While we base each of those relationships on the quality products and services we have developed over the past five years, we also collect and collaboratively analyze personalized data about existing practice and engage in a learning cycle that keeps ongoing feedback at the center. It’s not just customization, it’s personalization.
Contributing Consultants as Partners
Lastly, it has been the Contributing Consultants who have provided a pivotal partnership in maintaining our vision. Each day that these amazing educators engage with their clients, they do so with the intent of impacting that person’s life as a professional. Because of their commitment and the partnerships they forge with the educators they serve, they are never satisfied with just delivering the content or doing the job, and, instead, are motivated to support each individual towards change in themselves and those they serve.
Ultimately, these partnerships all help to deliver on the dream of ReVision Learning Partnership.
I want to thank our partners, our clients and our Contributing Consultants for keeping that vision alive. I am deeply grateful to all of you for the impact you have had on the thousands of educators served by ReVision Learning Partnership in the past five years.
Please know that I am honored to be working with you as colleague and blessed to know you in this life as a friend.
At ReVision Learning Partnership we want to ensure that evaluator capacity centers on high-quality feedback. Teachers deserve to be able to trust their observations. School administrators need to know evaluators can deliver feedback focused on student achievement. Taxpayers should demand an effective teacher for their children.
For the last half decade, schools have “ensured” evaluators calibrate against some video. This approach has severe limitations. At ReVision Learning Partnership we want to focus not just on calibration but on evaluator capacity. The difference may seem subtle, but school districts find the two approaches worlds apart.
In fact, the 2015 MET Program Guide, Seeing It Clearly: Improving Observer Training for Better Feedback and Better Teaching, noted how a simple test of agreement will never do:
developing these competencies is largely a matter of repeated modeling and practice. To master a skill, you need to see how it’s done, try it yourself, and learn how you did.
ReVision Learning Partnership has joined forces with ReVIEW Talent Feedback System to rethink video-based professional learning. We still provide video-based calibration, but we focus more on capacity building than on score agreement. First and foremost, pre-scored videos provide insight into an evaluator’s understanding of the teaching framework being used within a state or by an individual district. Yet just getting a report of agreement does not help an evaluator grow.
The innovative Video-Based Calibration modules in ReVIEW Talent Feedback System take agreement scoring to the next level. Evaluators using our system receive high quality feedback from our talented coaches using the ReVision Learning Supervisory Continuum. We coach your administrators by giving them the same narrative report teachers deserve.
Feedback and modeling. It drives everything we do at ReVision Learning.
How Does It Work?
ReVIEW Talent Feedback System’s Video-Based Calibration modules can work with any framework or rubric. If you are new to ReVIEW, we conduct a norming activity against your chosen framework. Then we can add your rubric directly to our system or simply have evaluators upload their completed feedback reports. Next, your evaluators are assigned a normed video to score. They write the report directly in ReVIEW or upload it. A ReVision Learning coach then scores the report against the ReVision Learning Supervisory Continuum and sends the evaluator a detailed report. As a district, you get a snapshot of agreement and data that we can use together in behavioralization activities.
How Does This Benefit My District?
You learn early in the process how well an evaluator can recognize the practices inherent in the teaching rubrics. You will also see whether or not they can record evidence from a piece of classroom instruction, ensuring that ongoing training is targeted, personalized, and impactful.
Your district also gets access to data that can help you shape professional learning moving forward. While our Collegial Calibration approach embeds our coaches in the classroom, we truly believe we have built the next best thing with ReVIEW Talent Feedback System. As a district leader you can be assured not only that you are meeting the state guidelines for calibration activities, but also that your evaluators receive coaching that increases their capacity to deliver feedback designed for growth.
Want to Learn More?
As teachers, we have all been there: stuffed into a dimly lit room with a speaker droning on while bullet point after bullet point of a useless PowerPoint flies by. Educators often have no escape from mandated professional development. They are rewarded not with learning but with a certificate for showing up, a badge for being able to sit through irrelevant details that do little to support our learning.
These memories, or more likely nightmares, came rushing back to me as I prepared a recent session I will facilitate at the 12th Annual Summer Leadership Institute for the CT Association of Public School Superintendents (CAPSS). CAPSS, using a framework developed by the Nellie Mae Education Foundation, has rightfully been stressing student-centered learning in its work.
If we know to hold these truths for children, why do we forget this lesson when we talk about teachers? Simply put, learning is learning, and the need for personalized learning does not end once a person receives a diploma.
Personalized Learning for Students
The CAPSS diagram suggests that in order to ensure students have the knowledge, skills, and dispositions to succeed in college, career, and civic life, they must be exposed to “deep learning,” typically centered on 21st-century skills, and that learning should be:
- “Personalized” i.e. recognizing that students learn and engage in different ways and that students benefit from individually paced, targeted tasks that start from where the student is, formatively assess existing skills and knowledge, and address student needs and interests;
- “Student-Owned” i.e. incorporating student interests and skills and allowing students to support their own progress in learning as well as others’;
- “Competency Based” i.e. driven through clear demonstrations of proficiency in content and/or skill, and,
That learning can/should happen:
- “Anytime, Anywhere” i.e. occurring beyond the school day and even school year and well beyond the four walls of a classroom.
Personalized Learning for Adults
Let me begin with the simple tweak I made and then reinforce its importance. When I found the graphic, all I did was ask the question, “If I replaced ‘student’ with ‘educator,’ would all of the explanations about the need for personalized learning fit in the context of the adult learner engaged in professional learning?” I think you know my answer is yes, but let me outline why.
What professional learning providers and, more importantly, the district decision makers who hire those professional learning providers have to begin to understand is that learning is learning is learning. In fact, research in adult learning supports the idea that it must be personalized, competency based, educator owned, and on-demand.
- “Personalized” i.e. recognizing that educators learn and engage in different ways and that educators benefit from individually paced, targeted tasks that start from where they are in their current practice, formatively assess existing skills and knowledge, and address their professional learning needs;
- “Educator-Owned” i.e. incorporating teacher or administrator interest and skills and allowing for the educator to support their own as well as others’ progress in learning;
- “Competency Based” i.e. driven through clear demonstrations of proficiency in content and/or skill, and,
That learning can/should happen:
- “Anytime, Anywhere” i.e. occurring beyond the school day and even school year and beyond the four walls of a classroom.
See what I did there…
The argument that educators should arrive fully prepared to meet the expectations of their professional responsibilities rings false. No one is perfect at every aspect of their work, regardless of industry or profession. When it comes to teaching students of the 21st century, the complexities of that responsibility require constant attention and professional learning, and the best teachers are those who never stop engaging in that learning. The same can be said of the administrators charged with creating and sustaining environments for that learning.
Four main points that professional learning providers and district/state decision makers need to focus on:
- Educators will forever require on-going, targeted learning that begins where they are and allows for a formative assessment of how they are progressing towards or improving their knowledge, skills, and dispositions. Professional learning designs that are not first built upon confirmed needs of the educator cannot and will not have an impact.
- Educators need to self-assess and self-address their own application of adult standards of performance and generate professional learning goals that take ownership of their growth. This, in combination with routine and meaningful formative feedback, can support adult learning.
- Educators need to understand how their own learning needs translate into professional learning plans that map toward new levels of learning and that can be accessed routinely through open lines of communication and feedback from others. Tools and resources such as Professional Learning Maps become the best way to understand needs and, applied within a learning cycle, ensure the types of policies and structures an organization needs to sustain ongoing growth.
- And, finally, educators need to be able to access feedback in multiple environments and venues, not just in a hard chair in the school cafeteria surrounded by 90 other members of their educational community.
More information can be found about the CT Association of Schools conference at the following link:
Doing it right means so much more than getting it right – a lesson we have learned working with evaluators across the country, especially on teacher evaluations. At ReVision Learning we know the challenges of observing teachers, collecting evidence, and analyzing our notes against attributes and frameworks. We also realize that taking all of this information and translating it into evidence-based feedback statements is a highly challenging endeavor.
For the last two years ReVision Learning has toiled away on a system to meet this challenge. We wanted to create a model for feedback, or, better put, a thinking frame evaluators could use to organize and deliver essential information to teachers.
From this work, and with a nod to Jon Saphier for his inspiration, came ReVision Learning’s Claim, Connect, Action – a simple and direct model of structuring feedback to teachers.
Evaluators employing Claim, Connect, Action learn to make claims against specific attributes, connect those claims to observed evidence, and provide actionable feedback using the key levers in a chosen framework.
Jeff Wallowitz, Principal of Webster Elementary School in West Hartford, CT, worked with ReVision Learning to employ Claim, Connect, Action over the past year. Jeff notes:
Through all my work and professional development with ReVision Learning, one area stands out as having the greatest impact on my ability to provide meaningful and effective feedback to teachers. As “Claim, Connect, Action” became more clear to me, I began to feel its profound impact on all teachers and not just the professionals who had an area that fell in the “developing” range.
At its heart, observation-based feedback begins with a claim: a statement that we then justify with evidence collected during an observation, the design of an artifact, or the demonstration of learning captured in student work. The Claim, often rooted in language taken directly from an instructional framework indicator, represents high-leverage areas for continued growth.
In our implementation of this model we have found that focusing the claim directly on the attributes we seek in teachers improves instructional capacity. In fact, Jeff Wallowitz notes:
First, I must mention that Claim, Connect, Action acted as a magnet which drew me into the language of our instructional framework. In order to make claims and connections, I found myself truly analyzing the language within the framework’s indicators in order to better articulate to my teachers what I had observed during my observations. I felt myself becoming more clear and precise, I knew my messaging was more consistent, and I was taking advantage of opportunities to educate my teachers about the framework and the powerful connections to their teaching practices.
Too often evaluators fail to make clear connections between what they saw in the classroom and the claims they want to make about a teacher’s practice. Simply put, they were not trained to validate their claims about practice within their feedback; they were trained to provide a rating based on the rubric. An incomplete endeavor at best.
In our model we connect observed evidence to the narrative claims rather than to tagged lists of scripted evidence. ReVision Learning trains evaluators to connect to evidence they might observe in a variety of sources: we observe teachers using question maps and floor diagrams, as well as the classroom dialogue. This diverse evidence is then rooted in the claims we make against a designated rubric.
When your only goal is scoring against a rubric you do not connect to the classroom practices of teachers. This is especially true of those rated effective or highly effective who were not often provided feedback rooted in observed practice.
The Connect and the Action portion of our framework addressed what appeared to be the most challenging aspects of teacher feedback for administrators. The challenge for many principals was making clear connections between the claim (what they wanted to share with a teacher about their practice related to a teacher performance standard) and the related evidence from the observation or an artifact.
Additionally, and in many ways as a result, administrators were not providing clear, actionable feedback for improvement based on their observations. This was especially true with those teachers they were citing as “Effective” or “Highly Effective.”
As Jeff Wallowitz notes, Claim, Connect, Action focuses on the instructional capacity of all teachers:
Prior to these trainings, if a teacher fell into the “effective” range in all the indicators, I used a post-observation conference as an opportunity to acknowledge the elements of the solid teaching I had observed. It was a pleasant meeting, where I was able to compliment my strongest teachers on a job well done. Since most of my teachers are mostly effective, post-observation conferences provided affirmation instead of an opportunity for growth. If any area was deemed “developing” or “below standard”, I used that as an opportunity to teach, coach, or strategize to support an area of challenge.
Every district faces a choice. You can make your evaluation system one of compliance or capacity. Using our model teachers receive actionable feedback focused on teacher growth. Through our key lever coaching, evaluators learn to drive their conversations with teachers using the designated framework.
Every teacher, no matter how their performance, portfolio, or even lesson was scored, deserves actionable feedback. All educators want to grow, and evaluators using our system learn to build instructional capacity through actionable feedback. In fact, our work in CT has supported hundreds of evaluators through the use of Claim, Connect, Action. Jeff noted that it helped shift his school to a growth mindset:
Through Claim, Connect, Action, I now see every post-observation meeting as an opportunity to support teachers, more readily identifying a key coaching point to improve their practice. In meetings with effective (and even highly effective) teachers, I now make a claim, make the connection, formulate an action, and share how they can elevate their practice to the highest level of teaching and learning—promoting a growth mindset.
ReVision Learning provides districts training in the Claim, Connect, Action system using our flexible tools and platforms. In fact the evaluators we work with develop their skills with Claim, Connect, Action by receiving the same high quality feedback we expect them to deliver to teachers.
1. The ReFLECT System provides an opportunity for administrators in our professional learning programs to receive routine feedback on their written reports, supporting their own feedback to teachers.
2. ReVision Learning also partners with the ReVIEW Talent Feedback System, LLC and incorporates their video-based observation module to allow administrators to routinely and independently practice providing feedback to teachers through videos provided by the BERC Group.
Feedback is at the core of learning, whether it is feedback from a teacher to a student, from an administrator to a teacher, or from a professional learning provider to the adult learner. Claim, Connect, Action provides a simple, consistent, and meaningful approach to feedback that supports clear, concise, and impactful lines of communication in support of that learning.