This is the second in an occasional series on new teacher evaluations in Colorado. Previously, we looked at broad changes coming as the state’s new teacher evaluation law takes effect in the fall. Tomorrow, we’ll take a look at other changes being made to Denver’s LEAP program as a result of feedback from its pilot years.
As Denver teachers and administrators prepare to implement the district’s new teacher evaluation system district-wide in the fall, a debate is emerging over exactly where to draw the line separating “effective” teachers from those approaching that level.
Newly released data from this year’s principal and peer observations – one piece of the evaluation process – show that under the original scoring scheme for the district’s evaluation program, known as LEAP, just over one in three Denver teachers scored high enough to be labeled effective when their scores on individual indicators are averaged.
But an additional 28.2 percent of district teachers could receive that ranking if DPS rounds up scores that fall just below its original bar for effectiveness on the LEAP framework.
The LEAP rubric for classroom observations ranks teachers on a scale from 1 to 7 on 12 indicators. A score of 1 or 2 on an indicator means the teacher is not meeting expectations; a 3 or 4 means a teacher is approaching expectations; a 5 or 6 signals effective; and 7 represents distinguished.
The new data is based on learning environment and instructional practices as assessed by principals and a cadre of peer observers who score teachers against a detailed rubric after making spontaneous classroom visits a few times during the school year. The ratings do not include other key components of LEAP, such as student outcomes (mostly standardized test results and other assessments), student perception data or professionalism scores.
But the scores are still key early indicators of how the system will work, since routine rubric-based observations by trained observers and school leaders, accompanied by conversations and tangible tips for improvement, are cornerstones of LEAP, which stands for Leading Effective Academic Practice.
But as the raw data from the classroom observation pilot comes in, it is clear that the majority of teachers are straddling the line between approaching expectations and effective. In fact, 52.1 percent of the teachers included in the pilot landed between 4.5 and 5.5 when their scores on individual indicators were averaged. More teachers landed in the 4.5 to 5.0 range than in any other range of the rubric.
Denver Classroom Teachers Association representative to LEAP Pam Shamburg said that the district’s original cut points for separating effective teachers from those approaching effectiveness were always meant to be “rough lines in the sand.” From the beginning, she said, both district officials and the DCTA knew adjustments would be needed once the scores came in. District staff combined the principal and peer observations and averaged them across indicators to get the ratings.
The problem is determining exactly what a rating of 4.8 means.
“Is that ‘approaching’ or ‘effective’?” said Jennifer Stern, executive director of Teacher Performance Management for Denver Public Schools. “Where do you draw those lines? We’re still working through it.”
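For illustration only – the district has not published a final rounding rule, so the 5.0 cut point (drawn from the rubric scale described above) and the round-to-nearest behavior in this sketch are assumptions – the 4.8 question works out like this:

```python
# Illustrative sketch of the cut-point question described above.
# LEAP's actual aggregation and rounding rules are still being decided;
# the 5.0 "effective" threshold and the rounding step here are assumptions.

def average_score(indicator_scores):
    """Average a teacher's scores across the rubric's 12 indicators."""
    return sum(indicator_scores) / len(indicator_scores)

def label(avg, effective_cut=5.0, round_up=False):
    """Label an averaged score, optionally rounding near-misses up."""
    if round_up:
        avg = round(avg)  # a 4.8 becomes a 5, crossing the line
    if avg >= effective_cut:
        return "effective"
    elif avg >= 3.0:
        return "approaching"
    return "not meeting expectations"

# A teacher scoring mostly 5s with a few 4s averages just under the cut:
scores = [5, 5, 4, 5, 5, 4, 5, 5, 5, 4, 5, 5]
avg = average_score(scores)  # 4.75 - "approaching" unless rounded up
```

Under these assumptions, the same 4.8 average is “approaching” if the district holds the 5.0 line strictly, and “effective” if it rounds – which is exactly the decision at stake.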
District staff and representatives of DCTA are meeting this month to begin discussing options, taking into account the pros and cons, benefits and detriments of rounding up or down.
Starting next school year, some personnel decisions will be made according to whether a teacher is ranked effective, so the district’s decision to round up or down could have far-reaching repercussions. Shamburg said that because so many teachers scored in the middle, the ratings aren’t precise enough to attach high stakes to them.
“You could have 4.2 and look really different than teachers that have a 4.3,” Shamburg said.
That lack of precision could be especially problematic because the goal of LEAP and other teacher evaluation systems is to move beyond the days when an overwhelming majority of teachers were rated “satisfactory” even when some students weren’t making progress in their classrooms.
“I’m worried when you take that big middle, and divide [teachers] between partially effective and effective, that’s where the problem is going to come in,” Shamburg said.
New data represents only part of complete LEAP evaluation
LEAP started as a 16-school pilot in 2011, but the first set of scores was only recently released, and even those are only partial scores.
Shamburg said preliminary research on student outcomes, including standardized test scores and results of other assessments, shows a similar distribution to the classroom observation data, with most teachers clustered in the mid-level scores.
Today, LEAP covers 150 public schools in Denver. Charter schools are exempt and are allowed to create their own teacher evaluation systems that comply with the requirements of Senate Bill 10-191, the so-called teacher effectiveness law.
Innovation schools can opt out of LEAP as well, although most have chosen to participate. Six of the 26, however, are participating in a modified way. For instance, Denver Green School, Valdez Elementary, Cole Arts and Sciences and Whittier Elementary are using internal peer observers rather than tapping into the 45 peer observers who work across DPS.