A right way and a wrong way to link teachers and student test scores?

The Los Angeles Times plans to publish names of teachers and how well they do in raising their students' standardized test scores. Some say it will result in a backlash to emerging 'value added' analysis of teacher performance.

[Image: The Robert F. Kennedy Community Schools in Los Angeles will open in September. A controversy is brewing over whether the Los Angeles Times newspaper should publish teachers’ names along with an analysis of their students’ standardized test scores. Newscom/File]


A controversy is brewing in Los Angeles over whether a newspaper should publish teachers’ names along with an analysis of how well they do in raising their students’ standardized test scores.

The debate has generated heated assertions on both sides: that transparency should prevail at all costs, or that it is unfair to label individual teachers using possibly flawed statistics. But the bigger questions, such as how to use these kinds of data responsibly, are being lost, some analysts say. They worry that anger over the forthcoming Los Angeles Times article will cause a backlash against so-called "value added" analysis of teacher performance, the method the Times uses.

“This [episode with the L.A. Times] is where the advocates for value-added are getting a bit ahead of themselves,” says Douglas Harris, an education professor at the University of Wisconsin in Madison. “Teachers are already feeling under the gun on this kind of thing. They’re willing to go in this direction if it’s done right, but this is an example where [the paper is] not being careful, and it could easily backfire and undermine the more productive uses.”

“Value-added data” is the latest trend in teacher accountability: the idea that a student’s gains since the previous year’s test, as opposed to his or her overall performance, can be measured and attributed to the latest teacher. The hope is that this approach filters out many of the external factors that can influence student achievement on the tests. It is not a brand-new idea: it was initially developed in Tennessee and Dallas, which have data going back many years. But it is only beginning to be used widely in districts and states, and many are just now collecting the kind of data that makes such analyses possible.
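To make the idea concrete, here is a minimal sketch of the gain-score intuition behind value-added analysis. All names and scores are hypothetical, and real value-added models are far more sophisticated (they use regression with statistical controls rather than a simple average of gains); this only illustrates the core notion of crediting a teacher with students' year-over-year score changes instead of their absolute scores.

```python
# Toy illustration of the "value added" intuition: average each
# teacher's students' score gains, rather than their raw scores.
# Teachers, scores, and the mean-of-gains formula are hypothetical;
# real models use regression with controls for student background.

from collections import defaultdict

def value_added(records):
    """Average per-student score gain for each teacher.

    records: iterable of (teacher, prior_year_score, current_score).
    Returns a dict mapping teacher name to mean score gain.
    """
    gains = defaultdict(list)
    for teacher, prior, current in records:
        gains[teacher].append(current - prior)
    return {t: sum(g) / len(g) for t, g in gains.items()}

sample = [
    ("Smith", 50, 58),   # gained 8 points
    ("Smith", 60, 66),   # gained 6 points
    ("Jones", 55, 54),   # lost 1 point
    ("Jones", 45, 50),   # gained 5 points
]
print(value_added(sample))  # {'Smith': 7.0, 'Jones': 2.0}
```

Note that Jones's students have lower absolute scores than Smith's in this toy data, but the method rates each teacher on gains, not levels; that is the property advocates point to, and the statistical noise in those gains is what critics of publishing individual teachers' numbers worry about.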
