This was written by Ron Eberts, a teacher and administrator in Red Deer, Alberta. Ron blogs here and tweets here. This post was originally found here.
by Ron Eberts
Over the past several weeks I have been following a controversy about assessment in a neighbouring school jurisdiction. The controversy has to do with how Battle River School Division will report student progress at the high school level. Traditionally, percentages have been used to tell parents how their children are doing in their various courses. Battle River is moving to a system in which assessment will be based on a student’s competency with respect to specific outcomes in the Programs of Study, as opposed to a simple arithmetic compilation of marks collected from assignments, quizzes, and tests. They are proposing a process in which students’ progress, in relation to the outcomes, will be reported as falling into one of four ranges:
- Beginning
- Developing
- Achieving
- Excelling
These ranges would describe how the individual student is progressing in relation to the specific outcomes being assessed at the time. For instance, one of the outcomes in the Grade 10 Social Studies curriculum asks students to explore the impacts of globalization on their lives; specifically, students must evaluate efforts to promote languages and cultures in a globalizing world, and within this evaluation students must include an analysis of language laws, linguistic rights, cultural content legislation, cultural revitalization, and linguistic revitalization.
Proponents of percentages would say that the number “83.6%” provides more accuracy about a student’s competency in this outcome than the description “developing” or “excelling” would… and herein lies the problem… what exactly does 83.6% tell the parent (or the student, for that matter)? Does it mean that the student understands how to evaluate efforts to promote languages and cultures in a globalizing world 83.6% of the time? Does it mean that the student grasps 83.6% of the various ways that languages and cultures can be influential in a globalizing world? Or does it mean that the student included only 83.6% of the required analysis of language laws, linguistic rights, cultural content legislation, cultural revitalization, and linguistic revitalization in the demonstration of this outcome?
When viewed this way, the number 83.6% provides almost no insight whatsoever into an individual student’s understanding or application of the outcome being assessed. Yet a large number of parents and students (and apparently, if the news articles are to be believed, teachers too) believe that being told a student is “excelling” at this outcome is less descriptive than being told the student is “83.6%” of this outcome.
Now, notwithstanding the lack of accuracy a percentage provides when reporting progress against an outcome, there is the mathematical magic that goes on behind the electronic gradebook scenes that makes a single-number assessment even more suspect. Let me illustrate by borrowing from assessment guru Thomas Guskey and his article “Computerized Gradebooks and the Myth of Objectivity”: in the image below, a teacher has collected percentages from a number of students. Depending on the methodology used to calculate the “final” percentage, the results can range from every student having an identical number to a spread of more than 19 points:
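Guskey’s actual table isn’t reproduced here, but the effect is easy to recreate. The sketch below uses hypothetical scores for a single student (not Guskey’s data) and computes a “final” mark under three defensible gradebook policies:

```python
# Hypothetical percentages for one student on six tasks,
# including a zero for a missed assignment -- not Guskey's actual data.
scores = [95, 90, 85, 0, 90, 88]

def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    mid = len(s) // 2
    return s[mid] if len(s) % 2 else (s[mid - 1] + s[mid]) / 2

# Three policies a gradebook might apply to reach a single "final" mark:
finals = {
    "mean of all scores": mean(scores),
    "median of all scores": median(scores),
    "mean, lowest dropped": mean(sorted(scores)[1:]),
}

for policy, final in finals.items():
    print(f"{policy:>22}: {final:.1f}%")

spread = max(finals.values()) - min(finals.values())
print(f"spread between policies: {spread:.1f} points")
```

Same raw numbers, three reasonable calculation rules, and a spread of almost 15 points — which one is “the” accurate mark?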
Advocates arguing for the specificity and accuracy of percentages cannot dispute that, depending on the teacher and his or her method of calculating a “final” mark, an almost 20-point swing is anything but specific and accurate! It gets worse when the art of WEIGHTING is brought into the discussion. Weighting is when categories of “learning” are each assigned a percentage of the total course mark. For instance, traditional categories for weighting include tests, quizzes, worksheets, projects, and homework (and, unfortunately, sometimes labels with no academic value at all, such as participation, effort, or even attendance, are weighted into a final mark). Let me illustrate…
Below are a set of numbers collected by a teacher for “Johnny”:
- Quiz 1 = 39/45 –> 87%
- Quiz 2 = 46/50 –> 92%
- Quiz 3 = 32/40 –> 80%
- Unit Test = 58/70 –>83%
- Assignment 1 = 12/15 –> 80%
- Assignment 2 = 15/20 –> 75%
- Assignment 3 = 4/8 –> 50%
- Assignment 4 = 6/10 –> 60%
- Group Project = 59/100 –> 59%
- Method for grade calculation #2: Tests 50%; Quizzes 25%; Assignments 10%; Group Project 15% = 80%
- Method for grade calculation #3: Tests and Quizzes 10%; Assignments 50%; Group Project 40% = 67%
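The weighting arithmetic can be sketched directly from Johnny’s raw scores above. (How the original averaged within categories and rounded isn’t stated, so the figures below land near, rather than exactly on, the 80% and 67% reported — but the swing between methods comes out the same.)

```python
# Johnny's raw scores as (earned, possible), grouped by category.
scores = {
    "quizzes": [(39, 45), (46, 50), (32, 40)],
    "tests": [(58, 70)],
    "assignments": [(12, 15), (15, 20), (4, 8), (6, 10)],
    "project": [(59, 100)],
}

def category_pct(items):
    """Average of the per-item percentages within one category."""
    return sum(100 * earned / possible for earned, possible in items) / len(items)

def final_mark(weights):
    """Weighted final mark: weights map category -> fraction of course mark."""
    return sum(w * category_pct(scores[cat]) for cat, w in weights.items())

# Method #2: Tests 50%; Quizzes 25%; Assignments 10%; Group Project 15%
method2 = final_mark({"tests": 0.50, "quizzes": 0.25,
                      "assignments": 0.10, "project": 0.15})

# Method #3: Tests and Quizzes pooled at 10%; Assignments 50%; Group Project 40%
pooled = scores["tests"] + scores["quizzes"]
method3 = (0.10 * category_pct(pooled)
           + 0.50 * category_pct(scores["assignments"])
           + 0.40 * category_pct(scores["project"]))

print(f"Method #2: {method2:.0f}%")  # close to the 80% above
print(f"Method #3: {method3:.0f}%")  # close to the 67% above
print(f"swing: {method2 - method3:.0f} points")
```

The same nine numbers produce finals roughly 13 points apart purely because of how the weights are chosen.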
So, as you can see, even “hard data,” strictly based on percentages, yields a 13-point swing in results. Which leads me to ask: who, again, believes percentages are specific and accurate?
In the end, I understand that most high schools will continue to use percentages to report student progress. I just wish that proponents of percentages would at least be honest about why they prefer them — not because they are specific or accurate, but because they are easy! If parents really wanted accurate and descriptive reporting of how their children are doing with respect to the outcomes being taught, the result would be a descriptor-based system, not percentages.
Below you can find some news articles on this issue:
http://www.edmontonjournal.com/Camrose+protest+student+grading+system+draws+people+video/8230250/story.html
http://www.edmontonjournal.com/Editorial+Getting+testy+over+grades/8246281/story.html
http://www.edmontonjournal.com/grading+system+sparks+controversy+Battle+River+schools/8241655/story.html
So how do you view letter grades, which suffer from the same distortions and inaccuracies...
How would you provide information to powers that be, ministers of education, school board superintendents, parents and even students as to 'the progress' of students?
What do you make of report cards?
How to provide information is the bureaucratic problem, yes.
I never had problems with grades when I taught at a university where students were really good and academically oriented. So much didn't matter then -- you could grade in very traditional ways and still come up more or less fair and accurate.
The problem is that with a broader range of students all these issues of arbitrariness really become obvious. I am glad to see someone besides me sees this, as my colleagues do not, but I have come to see grades as highly unproductive. Descriptor-based is *so* much better.