By Lisa Graham Keegan, A for Arizona
Sunday's headline story on differences in AIMS scores from year to year wants us to conclude that these differences cannot be related to teaching effects. I think that Dr. Haladyna's response - which you chose to highlight - captures this with disappointing clarity: "If you're a slow runner, you're a slow runner, and you don't break the record for running all of a sudden."
In other words, students are either smart or dumb, and nothing that happens in a classroom can alter that fact. Welcome to why we are failing our students.
The single biggest impact on a student is the quality of the teacher in the classroom. The teacher effect on student progress outweighs every other variable, including wealth, race, or type of school. The data that have emerged from schools across the country over the past decade have enabled thousands of studies on this issue, and are driving efforts to eliminate the "Last In, First Out" policies in school district teaching contracts.
Teachers don't just make A difference, they make THE difference. As the New Teacher Project puts it in their seminal paper of a few years ago, teachers are not interchangeable widgets. They are specialists, and their effects can be measured in a myriad of ways. Test scores are one of those ways.
Nothing about this reality means that we should not inspect test results for signs of cheating. We absolutely should.
But to suggest that gains in excess of what is "expected" must be assumed to be an aberration is to deny the effects of teaching, good or bad. I was struck that the Republic showed seemingly no interest in checking for scores that underperform the expected gain, because we have plenty of that going on as well. If students score very poorly in one year compared to another, do we assume the tests were altered?
I hope your readers compare your story with the long running series in the Los Angeles Times, where individual teacher effectiveness has been published by teacher name and grade in a massive database, created jointly by the Los Angeles Times and RAND Corporation. The entire point of publishing this database was to give parents much needed information about the quality of teaching in their schools, and to point out the vast differences in gains or losses in an individual teacher's classroom. I hope Republic readers will follow this story in the LA Times and elsewhere.
Running a front-page story encouraging the public to suspect that only cheating can lead to "spikes" in testing (while ignoring "valleys" in testing) is an interesting decision. You have basically led this discussion by dismissing the possibility that teachers can matter that much. Instead we are treated to the numbing assurance that slow students are slow students and there is nothing we can do to radically alter that reality. Relax, drink your coffee.
Your conclusion is in direct opposition to my experience, and to national evidence. Should you continue this research (and I hope that you do), you will no doubt uncover instances of cheating that should, in my view, be dealt with forcefully. But you will also find that variations in student achievement by grade and by classroom can be enormous, and that they are predictable over time by teacher.
Saying "teachers matter most" is not some soothing and patronizing bromide. It is a fact, and effective teachers change lives radically. Our quest ought to be to find out who they are, where they are, and to try to get them into Arizona classrooms.