Friday, May 9, 2014

Alberta Education’s Task Force on Teacher Excellence Report: There’s Some Serious Cherry-Picking Going On Here

This was written by Laura Servage who is a doctoral student at the University of Alberta. She blogs here. This post was originally found here.

by Laura Servage

It is hard to anticipate the outcomes of the brewing political war in Alberta over the Education ministry's release of the report of the Task Force for Teacher Excellence. Nor can I begin to address the array and complexity of the issues it raises in one short blog post. For the moment, I want to use a small segment of the report to highlight the partisan nature of the "teacher excellence" debate. The segment in question could easily be absorbed by a reader without much question, which is why I wish to draw attention to it: namely, the report's citation of two US studies correlating student achievement with teacher effectiveness.[1] What is so disturbing about this aspect of the report (and it is by no means the only disturbing aspect) is that the inclusion of a couple of sexy graphs gives the whole thing a roundly undeserved air of scientific rigour.

Few would argue that there is a link between teacher effectiveness and student learning, and few would disagree that this link is of central importance. Measuring the link is another matter entirely – a matter so complex as to warrant reams – and I mean reams – of academic research focusing on the challenges of such measurement.

“Value Added” Measures of Teachers: A Taste of Methodology Concerns

“Methodology” describes researchers’ efforts to come up with the best ways to do their work. In addition to doing research, researchers are always debating the accuracy and reliability of the methods used to collect data, represent it, and draw conclusions from it. Such debates occur in both quantitative (statistical) research and qualitative research. Even a cursory search of research databases turns up methodological concerns about measuring the impact of teacher effectiveness on student learning.[2] Here, I’ll implore you to read just until your eyes glaze over (it won’t take long), then stay with me by jumping down to the point of this very boring sample of material I’ve pulled from studies concerning the measurement of a teacher’s “value added”:
Value-added models have been widely used to assess the contributions of individual teachers and schools to students’ academic growth based on longitudinal student achievement outcomes. There is concern, however, that ignoring the presence of missing values, which are common in longitudinal studies, can bias teachers’ value-added scores.
Value-added approaches to teacher evaluation have many problems. Chief among them is the commonly found class-to-class and year-to-year unreliability in the scores obtained.
In this article, the authors provide a methodological critique of the current standard of value-added modeling forwarded in educational policy contexts as a means of measuring teacher effectiveness. An alternative statistical methodology, propensity score matching, allows estimation of how well a teacher performs relative to teachers assigned comparable classes of students.
Despite questions about validity and reliability, the use of value-added estimation methods has moved beyond academic research into state accountability systems for teachers, schools, and teacher preparation programs (TPPs). Prior studies of value-added measurement for TPPs test the validity of researcher-designed models and find that measuring differences across programs is difficult.
Empirically, we reject nearly all assumptions underlying value-added models.

I want to make it clear that I did not spend hours picking out snippets to support my position. I am not a statistician, and I cannot even comment on the validity of the studies just sampled. My point in including the excerpts above is to illustrate just how complex this measurement problem is – which should lead anyone, statistically inclined or not, to challenge the authority of a report that cites exactly two quantitative studies on the matter and declares the problem solved.
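To make the “year-to-year unreliability” critique concrete, here is a toy simulation of my own devising – it comes from neither of the cited studies, and every parameter in it is invented purely for illustration. It assumes each teacher has a stable true effect on test scores, adds classroom-level noise to each year’s measurement, and then checks how well two years of estimated “value added” for the very same teachers agree with each other:

```python
import random
import statistics

random.seed(1)

# Toy model: each teacher has a small, stable true effect on scores,
# but any single year's estimate mixes that effect with classroom noise.
n_teachers = 200
true_effect = [random.gauss(0, 1) for _ in range(n_teachers)]

def observed_scores(noise_sd):
    # One year's value-added estimate = true effect + sampling noise
    return [t + random.gauss(0, noise_sd) for t in true_effect]

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Two independent years of estimates for the same 200 teachers:
year1 = observed_scores(noise_sd=2.0)
year2 = observed_scores(noise_sd=2.0)
print(round(correlation(year1, year2), 2))
```

With these made-up numbers – noise twice as large as the spread of true teacher effects – the expected correlation between one year’s rankings and the next is only about 0.2, which is exactly the kind of instability the excerpts above complain about: a teacher who looks “excellent” one year can easily look mediocre the next without changing anything at all.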

It’s Not “About the Kids”

In media interviews accompanying the release of this report, Task Force Chairperson Glen Feltham declared that “the interest of the student was paramount – the child came first.” Alberta Education Minister Jeff Johnson echoed, “If we truly want to do what’s best for kids and students, we’ve got to have the guts to have these conversations.” The thing is, saying you want what’s “best for kids” is like saying you like kittens, puppies and ice-cream. Who doesn’t? The Alberta Teachers’ Association backs its position with the same language.

Really, then, who isn’t in it “for the kids”? The Task Force for Teacher Excellence report is “about the kids,” sort of. But it is much more about a high-stakes ideological battle for the hearts and minds of Alberta’s electorate. And here it is important to note that the two studies cited in the report are no more contextualized politically than they are situated within the broader academic research. Both come out of the United States: a country so rife with partisanship as to warrant skepticism toward almost any public policy research it produces. It is about the last place we should be looking to for education research, and it is certainly the last place on which we should be modelling public policy debates.

There is little doubt that there are a few (very few) Alberta teachers who ought to be put out to pasture. But this report isn’t any more about teacher excellence than it is “about the kids.” It’s about a political battle represented chiefly by two organizations – Alberta’s Ministry of Education and the Alberta Teachers’ Association – and reflecting two very different perspectives on the extent to which education ought to remain public, or move toward the PC government’s preferred vision of increasing privatization. And it is endlessly frustrating to see research “cherry-picked” on ideological grounds rather than assessed on its own merits. I call Data Abuse. When an entire policy platform is built around the premise that there is a solid causal link between teacher “excellence” (however that’s defined, but that’s a whole other problem) and students’ learning, we ought to have some confidence that this link is sound. I’m not seeing it.

[1] Chetty, R., Friedman, J., & Rockoff, J. (2011). The long-term impacts of teachers: Teacher value-added and student outcomes in adulthood. Working Paper 17699, National Bureau of Economic Research. The lengthy report concludes with (sensible) caution about the application of its findings to policies impacting teacher pay, assessment, and retention.

Sanders, W., & Rivers, J. (1996). Cumulative and residual effects of teachers on future student academic achievement. University of Tennessee Value-Added Research and Assessment Center.

[2] Here I jumped into the University of Alberta’s subscription to ProQuest, which aggregates peer-reviewed research from many different academic journals and disciplines. The search terms “value added” and “teachers” yielded 562 hits, and I skimmed the abstracts (the article descriptions) for the first two dozen of these hits.
