Kristin Klopfenstein is the executive director of the Education Innovation Institute at the University of Northern Colorado.
When I saw last Friday that the feds want to push for accountability in teacher prep programs I about jumped out of my chair. This is great news.
What’s exciting is the way they want to do it – by requiring states to use hard data to evaluate prep programs based on the success of the teachers they graduate. Success measures would include achievement results for students taught by graduates of each prep program, data on job placement and retention of graduates, and feedback from new teachers about the adequacy of their preparation.
Done right, this data-based approach can provide guidance to teacher prep programs working to improve their curriculum, policymakers deciding how to distribute scarce resources, future teachers seeking the best preparation and school districts seeking the best new teachers.
“Done right” is an important phrase. The US DOE’s new proposal is short on detail, saying only that the department would consult in coming months with “the teacher preparation community” to devise the new reporting requirements that “focus on the best measures of program impact” before setting timelines for implementation.
It is critically important that they consider measures that include an array of results we care about, including behavioral and attitudinal outcomes as well as test scores. I don’t think I’m alone among parents when I say I care even more about the behaviors and social skills my kids learn in school than I do about their test scores. Any accountability system for teacher prep programs should take even the difficult-to-measure outcomes of schooling into account and align the incentives that drive how teachers are prepared accordingly.
Focusing on results is the key
What excites me about the DOE initiative is that it has the potential to help fill gaps in our understanding of which aspects of teacher training produce the best results in student achievement, and which don’t work. Very little rigorous research has focused on such outcomes at even the local level, much less on a national scale.
Instead, prep program evaluations tend to focus on “inputs” — the many requirements and criteria that shape how a program works. An example is a report issued last summer by the National Council on Teacher Quality (NCTQ) that set out to evaluate student teaching programs across the country by measuring technical factors like whether mentor teachers were selected by prep programs or local school districts and whether student teachers were placed locally or far away.
If the proof of good preparation is in the pudding, most people would rather know how well prep graduates fare with different kinds of students than about prep program logistics.
A few states and large urban districts are already using student performance data to assess teacher prep programs. The DOE’s proposal cites examples from Louisiana, Tennessee and North Carolina in which the evaluations have led to revisions of prep programs, and in extreme cases, closure.
The first step in enabling such evaluations is the creation of a statewide database that links records for teachers to those of their students, an enormous and painstaking undertaking. Colorado has begun this process, which will take a few years to complete. But some universities, including my institution, the University of Northern Colorado, are already working on smaller-scale models to evaluate the effectiveness of their teacher prep graduates once they are in the classroom.
Using school records over several years from districts that already link students with their teachers, we will compare the test scores and behaviors of students in classrooms led by UNC graduates to those of educators from other prep programs working in similar settings. We also expect to learn whether our program at UNC is strong or weak in specific subject areas or with particular demographic groups. After all, to drive program improvement, we need to know how the students taught by UNC graduates perform in both math and reading.
We also need to know whether our graduates are more or less effective with low-income children or whether, on average, they work equally well with students of all backgrounds. Our ultimate goal is to inform a cycle of continuous improvement in our training program.
Emulating the best teacher prep programs
Good teacher prep programs, including UNC’s, already collect myriad data related to performance, including student work samples, surveys, and focus groups with teachers-in-training, program graduates and school administrators. That’s useful information and we should continue to gather it to make sure we get a complete picture. But we also need reliable and valid statewide data.
And given the proliferation of certification programs in recent years, any new federal requirements should apply to alternative training programs as rigorously as to those based in universities. It is a good sign that the DOE proposal included an endorsement from Teach For America founder Wendy Kopp.
Recent high-quality studies have found that good teaching makes more difference in a student’s academic progress than many other factors, including class size and graduate degrees held by a teacher. It naturally follows that researchers, policymakers, educators, and parents want to know what makes a teacher effective.
Linking student performance data back to teacher prep programs won’t answer all the questions, obviously, but it will give us more information about what works and what doesn’t in training teachers. That’s a necessary step if we are serious about improving the teaching profession.
You can read the DOE’s proposal, Our Future, Our Teachers: The Obama Administration’s Plan for Teacher Education Reform and Improvement online.