A Message to PEAC:  TEVAL Should Promote Reflection*

*A follow-up to my previous post – Connecting Teacher Action with Student Outcomes.

This morning I attended a meeting of the Performance Evaluation Advisory Council (PEAC). PEAC, charged with leading educator evaluation in the state of CT, began conversations today on the weighting of the current components of educator evaluation. It has long been understood that these structures require discussion and change, so I applaud the focus on this important topic.

In conjunction with some creative PDEC committees (Bethany Public Schools comes to mind), ReVISION Learning has examined the importance of educator reflection in ensuring student performance and its corresponding role in educator evaluation. In other words, we began our work together with the premise that individual and/or collective teacher reflection on student outcomes is at the heart of greater levels of student growth (Hattie, 2009). As a result of focusing discussions in this way, development of district-level policy is driven by student growth and, equally important, by a deep and meaningful understanding of how that student growth has been directly impacted by practice.

What has been clear to this point in the evolution of CT's educator evaluation programming is that while progress has been made in conversations about teaching practice, there is still much to be done in creating reflective practice that connects teacher actions to student outcomes. Even as some will cite reflection among teachers and administrators as having improved through the current process, the question now becomes…

How does the system ensure that this reflection is not only about how teachers do their job,

but also…

how and why do their practices impact student learning?

————————————————————————————–

Below is an outline of our suggestions to PEAC as they consider their decisions in the upcoming month and year.   

To understand our proposal, I’ll start with the current CT structure:

Practice Rating – 50%
  • Performance and Practice – 40%
  • Stakeholder Feedback – 10%

Outcomes Rating – 50%
  • Student Learning Objectives (SLO) – 45%
  • Whole School Learning Goals and Student Feedback – 5%
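To make the arithmetic of these weights concrete, here is a minimal sketch of how component ratings roll up into an overall score. The component names, the 1–4 scale, and the simple weighted average are my own illustrative assumptions, not the official state formula:

```python
# A minimal sketch (not an official formula) of how the current CT
# weights combine component ratings, each on the familiar 1-4 scale,
# into an overall score. Component names are my own shorthand.

CURRENT_WEIGHTS = {
    "performance_and_practice": 0.40,
    "stakeholder_feedback": 0.10,
    "student_learning_objectives": 0.45,
    "whole_school_goals_and_student_feedback": 0.05,
}

def overall_score(ratings):
    """Weighted average of component ratings (each rated 1-4)."""
    assert abs(sum(CURRENT_WEIGHTS.values()) - 1.0) < 1e-9
    return sum(CURRENT_WEIGHTS[name] * value for name, value in ratings.items())

# Example: practice rated Developing (2) but a generous SLO score (4)
# still pulls the overall rating above 3, masking the weaker practice
# rating.
print(overall_score({
    "performance_and_practice": 2,
    "stakeholder_feedback": 3,
    "student_learning_objectives": 4,
    "whole_school_goals_and_student_feedback": 3,
}))  # 0.40*2 + 0.10*3 + 0.45*4 + 0.05*3 = 3.05
```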

Our Proposal (rooted in what we know about good practice for learning)

Practice Ratings

Change from 50% to 60% or higher.

We have long proposed that the emphasis in educator evaluation be placed on HOW teachers are impacting students. Revealing this understanding among all educators is what the performance and practice component should codify.

The 60% Practice component would include:

  • Performance and Practice – 50%, based on three core modalities of evidence collection by a qualified supervisor, each tied directly to a four-point rubric:
  1.  Observation
  2.  Artifact Review
  3.  Collegial Dialogue

The 2015 version of the Common Core of Teaching (CCT) from the State Department of Education provides a quality tool for analysis of learning in the classroom. What is most important is that evaluators stop focusing their evidence collection on teacher practice alone and begin to collect evidence of learning, setting up teacher reflection on the thing that matters most – how students are learning.

It is important to note our common disclaimer at this very moment – evaluators NEED training AND training needs to be measurable through a quality performance assessment completed by evaluators.

Also, statewide non-negotiables need to be established about the quality of evaluators:

  1.  Evaluators need to clearly demonstrate capacity to observe and collect evidence, analyze evidence, and provide feedback.
  2.  Districts need to document inter-rater agreement among all evaluators.

If we do not invest in evaluator training, then we will simply be attempting to improve a system that will be poorly executed (through no fault of the evaluators throughout the state currently working very hard to make the work meaningful).

Maintain the current 10% based on Stakeholder Feedback with one important change.

Our recommendation is that targets be set at the school level and that, through collective practice, teachers and administrators then be charged with addressing the fundamental issues revealed in annual climate surveys. Additionally, PDEC committees should consider alignment between the rubrics used to measure performance and practice (such as the CCT) and the action steps established to meet targets (such as communication with stakeholders – CCT Domain 4). For example, if annual surveys reveal that ongoing communication with parents is an issue, then all teachers would work collectively to remedy this, and success would be measured by changes revealed in the corresponding climate survey. Collective efficacy is an essential part of successful educational practice and should be measured accordingly.

Outcomes Ratings

Change from 50% to 40% or lower. Before anyone claims that we are trying to water down student outcomes, take time to re-read the Practice Ratings information just shared and then pay close attention to the design we are presenting for Student Learning Objectives (SLOs). We do more with less than what is currently in place – I assure you.

We recommend up to two SLOs, each worth 20% (a structural sketch follows the list below). This would include:

  • Up to five Indicators of Academic Growth and Development (IAGDs) for each SLO, depending on the assessment being used to monitor progress.
    • One IAGD would be based on completion of a Student Outcome Portfolio (designed to support targeted reflection linking practice and outcome). The remaining IAGDs would be designed as Banded Student Growth Goals, which target the different levels of student performance commonly found in our classrooms while closing the gaps that currently exist.

If you need more information on either of those ideas, contact us or keep an eye out for future posts.

  • At least one SLO/IAGD would be based on a standardized assessment. Where possible, both SLOs/IAGDs should be based on standardized assessments.
  • All SLOs should be directly connected to the strategic initiatives of the district and school improvement plans.
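Here is that structure as a minimal sketch in code, using hypothetical names and fields of my own choosing – nothing here is a prescribed format; it simply makes the two-SLO, five-IAGD shape concrete:

```python
# A minimal sketch of the proposed SLO structure. Class names, fields,
# and validation rules are illustrative assumptions, not a required
# format: up to two SLOs worth 20% apiece, each with up to five IAGDs,
# one of which is a Student Outcome Portfolio and the rest Banded
# Student Growth Goals.

from dataclasses import dataclass, field

@dataclass
class IAGD:
    description: str
    kind: str                   # "portfolio" or "banded_growth_goal"
    standardized: bool = False  # monitored with a standardized assessment?

@dataclass
class SLO:
    title: str
    weight: float = 0.20        # each SLO counts for 20% of the evaluation
    iagds: list = field(default_factory=list)

    def validate(self):
        assert 1 <= len(self.iagds) <= 5, "up to five IAGDs per SLO"
        assert sum(i.kind == "portfolio" for i in self.iagds) == 1, \
            "exactly one Student Outcome Portfolio per SLO"

slo = SLO("Grade 4 growth in reading informational text", iagds=[
    IAGD("Student Outcome Portfolio linking practice to outcomes", "portfolio"),
    IAGD("Band 1 (below benchmark): more than one year's growth",
         "banded_growth_goal", standardized=True),
    IAGD("Band 2 (at/above benchmark): at least one year's growth",
         "banded_growth_goal", standardized=True),
])
slo.validate()
```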

Eliminate the 5% associated with Whole School Indicators or Student Surveys.  Student Surveys can and should be part of the Stakeholder Feedback component already described and Whole School Indicators are examined more effectively through the design of SLOs in alignment with strategic school improvement plans.

The suggestions we are outlining are designed in alignment with our work in educator evaluation over the past five years. We have worked with over 800 evaluators and 52 districts since our inception, and one thing that has been overwhelmingly clear is that policy needs to catch up with practice. Practitioners are ready for meaningful, reflective practice; now PEAC needs to provide the policy.

Connecting Teacher Action with Student Outcomes

“Growth” has been one of the more elusive words in educational jargon over the past few years.  Its use triggers reactions from so many due to ill-fated or, better put, poorly constructed attempts to connect it to educator evaluation.

Anyone who has read or spoken with me in depth knows that I consider student outcomes to be an important aspect of every educator's evaluation and do not believe they should be removed as a criterion for determining levels of performance. However, the current approaches being used to design and implement SLOs typically provide little connection between what the teacher does and the outcomes for students, serving no purpose other than to inflate and invalidate data on teacher or leader effectiveness.

What I have recently been working on is a redefinition that I believe can address some of the shortcomings of previous approaches and that could – and would – actually be embraced by teachers as meaningful and realistic.

Yup, that is a big job, I know – but how else do we expect to change and improve?

A ReVision of Student Learning Objectives

First, it is important to note that the idea is not original; it is more of a mix of other ideas that have been successful. I am, however, as far as I can tell, the first to propose this approach to satisfy the typical requirements of educator evaluation practice. Additionally, this is written in alignment with the Connecticut guidelines, under which student outcomes account for 45% of overall teacher and administrator performance.

My proposal is simple – don't measure only the outcome in terms of student achievement; also measure the influence specific teacher practices have on those outcomes.

If you need to know what targeted practices can be measured that will influence student achievement, then look no further than John Hattie's list in Visible Learning for Teachers. Below is a glossary of those influences in order of their effect; all have an effect size greater than .70.

If we know that these practices have the most positive effect size on student achievement, then why not measure them in alignment with student outcomes? Student outcomes, in my opinion (and I am sure others will disagree), must remain a part of performance measurement in our schools. Paying closer attention to how we achieve those outcomes, however, is the most important information for ensuring sustainable achievement. In other words, it's not knowing that kids achieved that makes me effective; it is knowing what I did to support that learning that allows me to apply it again and again.

Proposal (with more details to be worked out with PDEC committees who are ready to step up):

“Visible teaching and learning occurs when there is deliberate practice aimed at attaining mastery of the goal, when there is feedback given and sought, and when there are active, passionate, and engaging people (teacher, students, peers) participating in the act of learning.” ~John Hattie

While it will be a challenge for many – once again, nothing great happens without a significant challenge.

My proposal is to continue to create SLOs/IAGDs that examine student improvement based on standardized and/or non-standardized measures (I still believe curriculum-based measurements, or CBMs, are the best assessments for this purpose – once again, more on that later).

Teachers would design the required two SLOs and corresponding IAGDs to address growth over the course of the year for EVERY student in their classrooms or, for our secondary teachers, a significant number of the students in their classrooms or on whom they have an impact. Goals should stretch significantly for those students who require greater growth to bridge any existing gaps – that is not only a collective responsibility but EACH individual's responsibility as an educator of that student.

Teachers and their supervisors would analyze beginning-, middle-, and end-of-year performance of students based on goals that are well aligned to the assessment being used. Student performance would collectively reflect 22.5% of the teacher's overall evaluation. Criteria have already been established in most educator evaluation models and can continue to be used. This is based on a simple model – if all, or a quantifiable number, of our students are demonstrating learning and meeting the goals we are setting, then that 22.5% of the overall evaluation is scored accordingly.

Sample provided below…

  • Exemplary – 100% of students met the SLO and IAGD targets.
  • Proficient – At least 90% of students met the SLO and IAGD targets.
  • Developing – At least 80% of students met the SLO and IAGD targets.
  • Beginning – Less than 80% of students met the SLO and IAGD targets.
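Expressed as a simple rule, here is a minimal sketch of that sample banding, with cutoffs taken straight from the list above (the function name and signature are my own):

```python
# A minimal sketch of the sample banding above: the share of students
# meeting their SLO and IAGD targets maps to one of four rating levels.

def outcome_rating(students_meeting: int, total_students: int) -> str:
    pct = students_meeting / total_students
    if pct >= 1.00:
        return "Exemplary"
    if pct >= 0.90:
        return "Proficient"
    if pct >= 0.80:
        return "Developing"
    return "Beginning"

print(outcome_rating(27, 30))  # 90% of students met targets -> "Proficient"
```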

For the remaining 22.5%, teachers would engage in investigation, reflection, and on-going re-examination of what Hattie has demonstrated is the most significant element of that success – educator action/application of key strategies.

Here is where “system support” becomes essential so that there is an educational environment for analysis. Supervisors create a system (preferred system would rely on grade level Data Teams with potential portfolios) for teachers to collect evidence of their practice in one or more of the above listed educator actions from Hattie’s meta-analysis.

Then, collaboratively, teachers and supervisors connect those practices to the successes in student achievement. If data teams are in place, for example, the analysis and documentation of student performance and related teacher actions are readily available for review. In other words, the development of a portfolio of a teacher's investigation, reflection, and on-going re-examination is not an add-on to the work – it's just what we do. By the way, if a school or district has clearly defined the vision for its instruction and has aligned its goals to that vision, the layers of support for teachers follow naturally.

Some will tell me that I am just increasing the performance and practice measurement by 22.5%, and in many ways they would be right. I have always thought that adult action is the most valid measure for educator evaluation. The more cynical will tell me that, even with this shift, a teacher should never be individually measured by student outcomes. To either party I simply ask: if you are not willing to try something new, what change do you ever expect to see?

A Message to Disconnect in Order to Connect

I left for the Learning Forward Conference in Vancouver yesterday and, since I left, I have received two messages – loud and clear – that I need to “disconnect” a little. Now that word means many different things to different people and in different situations. My meaning for it is pretty simple in this context and, as I thought about it this morning, I believed it was a great message to share with my fellow participants here at the Conference.

Message #1

It began when I boarded my first flight from Hartford, CT yesterday. I have traveled quite a lot in my career but was taken by surprise when I ascended the stairs to board the plane (yes – I walked outside to get on the plane), was greeted by the pilot (who, admittedly, I thought was a flight attendant in the moment), and found a plane with 18 seats – 9 single seats on each side. Definitely the smallest commercial plane I have ever flown.

Now, when I get on planes, the first thing I do is open my computer, figure out how the WiFi works and whether I need to pay, and get to work. Needless to say, there was no WiFi on this plane. This became my first opportunity to disconnect. I put on a Jim Oliver recording, meditated for ¾ of the flight, and contemplated what and where I was going this week and how I had gotten here in the first place. It was the most productive 60 minutes I have had in quite a long time.

What was even better was that I was present (the word "present" carries a whole bunch of meaning right here) to witness, as we approached Montreal, a 4th-grade student sitting just across and in front of me who was on his first trip outside of the US. As the clouds cleared and we were making our approach, I noticed he had yet to look outside the window. Having heard him earlier express his excitement to his Mom and Dad, who were in the rows in front of him, I leaned over and said, "Hey buddy, that's Montreal right down there." His eyes lit up as he excitedly called to his Mom and Dad, and they all shared in a wonderful family moment. I would never have had the opportunity to watch that unfold if my face had been stuck in a computer screen.

Message #2

My second message came during this morning’s run.


As I started the run, I did the usual… started up my playlist – headphones on, head down, and got running. You know, being a productive runner.

About ½ mile into the 7-mile loop, I stopped to catch the pic featured above.


That was the first moment on the run when I realized: wow, I should probably be paying attention to something other than the running.

Then, only 1 mile into the 7-mile loop, my iPhone died. It was too cold and the battery gave out.

From that point on, I noticed and experienced so much more than simple exercise.

I heard a half dozen different birds calling in Stanley Park; I saw the morning mist rise and give way to Grouse Mountain and Mt. Fromme; I listened to water fall from the high ledges on the western side of Stanley Park, looking out at English Bay.

The run meant so much more without all the interference and noise that typically coincides with my routine exercise.

So, what’s the message?

Well, to my fellow participants and presenters alike here at the Learning Forward Conference this week….

While here – in the midst of all the new learning and meeting of new people, the networking and attending of events and functions, the great sessions you will attend, and all the materials and resources at vendor tables you will discover – stop and take some time to… not be productive. Instead, take time to see and listen to all that is happening around you and disconnect from being “productive”.

Disconnect a little as often as you can and see if it might lead to something even more productive than what you had planned out of this trip in the first place.

Let wonder and excitement guide you this week and, if you do, let me know how it worked for you.

Two CT Districts Heading to Vancouver

West Hartford and Canton Public Schools will be presenting along with ReVision Learning Partnership next week at the Learning Forward Conference in Vancouver, British Columbia.

The Learning Forward Conference, entitled Connecting Landscapes for Learning, has been designed this year to examine the impact of professional learning and quality feedback on teacher practice and student outcomes, and to increase the coherence and relevance of professional development.

The attention to these ideas in West Hartford and Canton Public Schools cannot be questioned, and it is only fitting that their efforts will be recognized in this international setting.


Next Tuesday, Natalie Simpson, Assistant Director of Human Resources of West Hartford Public Schools, and Jordan Grossman, Assistant Superintendent of Canton Public Schools, will lead a session for "advanced leaders" that focuses on their districts' successful journey to shift the focus of teacher evaluation from an inspection model to a growth model. Participants will learn how these two districts designed, implemented, and analyzed professional learning for leaders to more directly support student learning in the classroom. The importance of student outcomes in West Hartford and Canton classrooms has become the core focus of observation and feedback for on-going learning and support.

Professional Learning in West Hartford and Canton:  It Begins with Great Leaders

What has been most impressive about the professional learning designs for leadership in these two districts has been the coherence they have sought to establish. Each district has designed learning opportunities that go beyond the simple workshop approach, targeting, instead, the most important aspect of educational leadership practice – serving as an instructional leader.

Focusing on the knowledge, skills, and dispositions of an instructional leader for all members of their administration has led to a more coherent and meaningful implementation of their supervision and evaluation practice.

Professional learning has been designed and implemented to ensure that each building and district-level administrator, department head, and curriculum leader has the capacity to observe for and provide feedback about the way students are learning in classrooms.  In this manner, the districts have chosen to ensure supervision and evaluation is focused on supporting teachers directly in the most important aspect of their jobs – knowing what kids are learning and how a teacher’s practice is impacting that learning.

Round Table Discussions

Recently, ReVision Learning sponsored a Round Table discussion with six leaders from these two CT districts, in which they described how their districts' professional learning programs directly impacted their practice and their support for the teachers they serve.

Supporting teachers by shifting their observations from a simple assessment of teacher practice to a careful examination of student learning has helped these two CT districts to create more substantial and meaningful outcomes from their on-going supervision and evaluation practice.

Two videos were created to help capture and share the work that is happening in these two districts. It is our hope that these can prove helpful to other district leaders as they consider the design, implementation and impact of their leaders’ professional learning.

District Profiles:  Leadership Professional Learning  (9:12 min)

An abbreviated version providing just a few highlights


Access more extensive video footage here (21:20)

More extensive footage of the Round Table discussions.


We have been honored to have had the opportunity to work with these two districts for the past few years and we encourage other district leaders to reach out to them to hear more about the outstanding work and outcomes within these two districts.

CT Court Decision Can Help Reshape Educator Practice

A Sense of Hope for CT Education

As I listened to the news on the car ride to school with my son this morning, a sense of tremendous optimism for CT education came over me. In a decision that could fundamentally reshape public education in Connecticut, the state was ordered on Wednesday to make changes in everything from how schools are financed, to which students are eligible to graduate from high school, to how teachers are paid and evaluated. My son became my initial researcher during our car ride, looking up articles and organizing an outline for this post (real-world instruction for sure). While all elements of the court's decision are indeed "fundamental" to reshaping CT education, due to my investment in educator evaluation and my organization's work in over 48 CT districts and in four different states, the last element of the court decision generated the greatest sense of hope.

According to a NY Times article, “The judge…criticized how teachers are evaluated and paid. Teachers in Connecticut, as elsewhere, are almost universally rated as effective on evaluations, even when their students fail. Teachers’ unions have argued that teachers should not be held responsible for all of the difficulties poor students have. And while the judge called those concerns legitimate, he was unconvinced that no reasonable way existed to measure how much teachers managed to teach.”

What needs to happen now is to take this opportunity to address the design of educator evaluation originally presented to districts and to provide better training and support to improve its implementation by CT evaluators.

What’s Gotten in the Way


The question that we all need to be asking is what has gotten in the way, over the past four years, of the creation and implementation of educator evaluation. To be clear, this is not one of those simple attempts, often seen in blog postings, to assign blame. Instead, I sit to write today to highlight three primary reasons we are in need of this change, reasons that hopefully can provide guidance on the new path towards the "reasonable ways to measure how teachers manage to teach" and "how educational leaders manage to lead".

Reason One:  We started with an ill-conceived definition of evaluator capacity.

We know for certain that the teacher is the #1 influence on student success in our classrooms. What we also know is that the #2 influence is the school-based leader, who is often charged with the role of evaluator. In fact, I would advocate that the #1 reason an effective school-based leader is the #2 influence on student success is the means they have to influence the effectiveness of teachers. At the center of this is their capacity to become an instructional leader. A fundamental element of this leadership is their capacity to review the performance and practice of teachers in conjunction with student outcomes to support professional learning – this being the real definition and design of educator evaluation.

As the State Department and districts began to implement guidelines from the Performance Evaluation Advisory Council (PEAC), they worked from the premise that if an evaluator could "accurately assess teaching practice," they would be able to support teacher effectiveness and improvement. This is no fault of theirs, since they were simply going on the research and literature they had at the time, mostly the Measures of Effective Teaching (MET) studies. The fundamental flaw is that in no way did these studies examine how the accurate assessment of practice and the corresponding training models would turn "accurate evidence" into feedback that ensures growth for a teacher. These studies have since expanded the definitions of evaluator capacity, and PEAC needs to consider this new information to restructure how we are defining evaluation guidelines.

I begin with this reason because, much to their credit, the CT State Department of Education has already taken steps to change what it means to be an effective evaluator. The Talent Office has introduced a new training model for evaluator capacity that focuses on feedback for learning rather than inspection of practice.

The greatest impact will come from the expansion of this definition of evaluator capacity.  Evaluators need to be measured not only on how accurate they are but also on their ability to…

  • observe for and collect specific evidence,
  • align that evidence to the teacher performance expectations outlined in the CT Common Core of Teaching,
  • focus evidence collection on the impact of the teacher on student learning both in the moment and over time, and,
  • organize that evidence into objective, actionable feedback that can ensure teacher growth.

This is the intent of the CT State Department of Education in making the change, and I applaud them for that effort. The concern, of course, is that the state's recent funding issues may in turn limit the reach of these services to districts. The change is underway, however, and with the support of policy-makers, we can continue to ensure that every teacher has access to a high-quality evaluator who can provide feedback for learning.

It is important to note that this capacity discussion applies to those who are evaluating our building based leaders as well.  Remember, we provide supervision and evaluation to our leaders (more often than not these are the evaluators of teachers) through our educator evaluation model as well.  Aligning our training models for evaluators is an absolute must if we wish to experience a better evaluation program overall.

In addition to changes in the training and development of our evaluators, we need to give careful consideration to the number of teachers we are asking a building-based leader to evaluate. At times, this number can reach 30 teachers, which, given the complexities of the work, is not, to put it in the court's words, part of a "reasonable way to measure how teachers manage to teach." Legislation needs to support the State Department of Education in carefully examining the structures and policies that ensure evaluators can provide deep, impactful feedback.

Reason Two:  We are applying inaccurate and sometimes altogether invalid data when we connect teacher practice to student outcomes through Student Learning Objectives.

I do not mean to imply or suggest that we should not be considering student outcomes as part of a teacher's or leader's evaluation. What I will suggest is that we currently do not have the structures in place for connecting student success discretely to a teacher's work with the student in such a way as to ensure the validity of that evidence. The growing shift to collaborative school environments and the sheer number of people working to support student learning each day make this connection difficult. Also, as I have often said, if it takes a village, why are we seeking to measure individual influence?

The discrepancy between teacher ratings and student performance cited by the CT judge is the direct result of two flawed approaches to the analysis of student achievement in the existing educator evaluation model. First, as stated, evaluators need better training to ensure that their measurement of classroom practice includes a quality analysis of practice while focusing on student learning. Overinflation still occurs in our ratings of a teacher's performance in the classroom (which constitute 40% of a teacher's overall score) because the evaluator rating the teacher is not equipped (in time or skill) to complete the task effectively. What we have also seen, however – and I am certain the data the State is looking at can verify this – is that even when an evaluator is assessing practice rigorously and classroom performance is rated below a proficient level, a less than rigorously designed and, once again, more than likely invalid set of Student Learning Objectives (SLOs), which constitute 45% of the teacher's overall evaluation, inflates the scores.

Simply put, it is either a poorly assessed and invalid set of data provided by the evaluator about classroom practice (40%), or an invalid SLO (45%), or some combination of the two that is creating the discrepancy. Take, for example, the following SLO one might see in a teacher's plan:

80% of my students will make at least one year’s growth in their reading skills as measured on the district reading assessments for informational text.  

Other details about which elements of the reading assessment constitute "reading skills" are provided in the plan; the real issue, however, comes in how the teacher rating is calculated based on student performance against this goal. Suppose this elementary-level teacher has 30 students in their class. Once results are in, student performance is reviewed based on a locally driven formula: based on the number of students achieving one year's growth, the teacher receives a rating of Below Standard through Exemplary (1–4). Typically, the rating reflects the share of the SLO's target that is met. In other words, if 100% of the targeted 80% of students make one year's growth, the teacher receives an Exemplary rating (that is, 24 of the 30 students in the class meet the goal). A "Proficient" rating would fall into the range of 75%–80% of the target being met (meaning roughly 19 students met the goal). So, in this situation, 11 students can go without one year's growth and the teacher will still receive a "Proficient" rating for 45% of their overall evaluation. Even if the evaluator has rated the teacher's performance and practice (40%) in the "Developing" range, the teacher with 11 of 30 students not meeting one year's growth on key reading skills will be deemed "Proficient" overall. This is one of the key reasons we see discrepancies in the data across states.
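For readers who want the arithmetic spelled out, here is a minimal sketch of the example above; the band cutoffs and the share-of-target formula are my illustrative reading of a "locally driven formula," not any district's actual rule:

```python
# A minimal sketch of the inflation example above. Band cutoffs and the
# share-of-target formula are illustrative assumptions -- the actual
# formula is locally determined.

CLASS_SIZE = 30
SLO_TARGET = 0.80                          # "80% of my students will..."
targeted = round(CLASS_SIZE * SLO_TARGET)  # 24 students

def slo_rating(students_meeting_goal):
    """Rate the SLO by the share of its 24-student target actually met."""
    share = students_meeting_goal / targeted
    if share >= 1.00:
        return "Exemplary"      # all 24 targeted students grew
    if share >= 0.75:
        return "Proficient"     # as few as 18-19 students
    if share >= 0.50:
        return "Developing"
    return "Below Standard"

# 19 students meet the goal and 11 do not, yet the SLO component -- 45%
# of the overall evaluation -- still comes back "Proficient":
print(slo_rating(19))  # 19 / 24 ~= 0.79 -> "Proficient"
```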

In any situation, it is the design and implementation of the evaluation model that comes into greatest question, not necessarily the idea of using student achievement in the evaluation of an educator.

Potential solutions lie in designing whole-school or grade-level/subject goals in which all teachers and educators in the school are tied to overall student achievement levels, in alignment with the strategic needs of the district. Additionally, developing district capacity to align, design, and analyze assessments against specific learning outcomes for a teacher's students needs to be a focus. Reliance on a single standardized assessment is flawed not only because it cannot adequately represent teaching quality, but also because the current structure and implementation of SLOs still leave too many students behind. Moving to grade-level or whole-school goals has its own flaws that still need to be considered; at the very least, however, we will ensure that we are promoting the village's responsibility, and not just the individual's, in ensuring the success of our students.

Reason Three:  We have not made learning the true objective of educator evaluation.


One of the most important elements of our profession and our lives as educators is that we are purveyors of learning, whether that be student learning or adult learning (i.e., teachers and leaders). On-going learning – the learning process, seeing people (little or big) generate and discover new ideas and establish new knowledge, beliefs, and values – is what we love and why most of us began teaching and leading in the first place. Being a life-long learner requires on-going self- and external assessment of practice, along with the will to respond to that assessment with actions that improve our performance.

One of the reasons this court case includes a decision about educator evaluation is that we (the adults) have not viewed evaluation as an opportunity for learning. The idea of being evaluated by someone else is often met with skepticism or downright mistrust of its purpose. Old paradigms of "us versus them," and the belief that somehow coaching cannot happen through the evaluator role, are at the center of this thinking and need to be confronted.

The CT State Department of Education, through PEAC, needs to make changes to the policy and structures of the educator evaluation model – the way in which we define evaluator capacity and the way in which student outcomes are designed and measured. Fundamentally, they also need to engage in a dialogue about learning, clearly outlining the values and beliefs we hold as educators about our own growth mindset and willingness to learn, and the notion that each student's growth is not only needed but expected.