
Monday, April 27, 2015

From Detesting to De-Testing

This post was featured in Cathy Rubin's The Global Search for Education: Our Top 12 Teacher Blogs.

How do you balance preparation for high stakes assessments with teaching and learning in your classroom?


In my classroom, I have replaced tests and grades with projects and performances collected in portfolios. It’s been 10 years since I used a multiple choice test to assess my students, so it’s safe to say that I do not agree with having to administer a standardized multiple choice test for the government at the end of an entire year of making learning visible via blogging.

Teachers are repeatedly told that the best way to prepare students for standardized tests is to teach the curriculum, but this is at best misleading. We know that multiple choice tests require a certain amount of test-taking skills, and that students with a better understanding of the nuances of multiple choice tests can score well without having learned what the tests claim to be measuring.

So how do I live with myself when I have an obligation to administer standardized tests that I don’t support?

In his article Fighting the Tests: A Practical Guide to Rescuing Our Schools, Alfie Kohn writes:
Whenever something in the schools is amiss, it makes sense for us to work on two tracks at once. We must do our best in the short term to protect students from the worst effects of a given policy, but we must also work to change or eliminate that policy. If we overlook the former – the need to minimize the harm of what is currently taking place, to devise effective coping strategies — then we do a disservice to children in the here and now. But (and this is by far the more common error) if we overlook the latter – the need to alter the current reality — then we are condemning our children’s children to having to make the best of the same unacceptable situation because it will still exist.
In the short term, I teach the curriculum the best I can, and I waste as little time as possible preparing students to fill in bubbles. However, as test day approaches we do a practice test in small groups to reduce anxiety and increase familiarity. The best teachers act less like conduits for the tests and more like a buffer that protects students from the harmful effects of testing, so I also assure students and parents that I do not use the standardized test as a part of their report card.

In the long term, I tweet, blog, write articles and talk with anyone and everyone about how and why standardized tests are broken and how and why the alternatives to the tests are far more authentic. I go out of my way to make the alternatives to standardized tests so obviously better that parents and students see the tests as an unfortunate distraction from real learning.

To advocate for authentic alternatives to standardized tests I actively work with my Alberta Teachers’ Association to create local public events with speakers such as Sir Ken Robinson, Alfie Kohn, Pasi Sahlberg, Yong Zhao and Andy Hargreaves. I’ve joined a political party in Alberta and influenced their education policies. I wrote Telling Time with a Broken Clock: the trouble with standardized testing and co-edited De-Testing and De-Grading Schools: Authentic Alternatives to Standardization and Accountability.

Parents, students and teachers can join together to opt out of testing.

Saturday, March 21, 2015

Making school worse, faster

This post appeared in a series titled The Global Search for Education: Our Top 12 Global Teacher Blogs, which answered the question: What are the biggest mistakes teachers make when integrating technology into the classroom?




“Before the computer could change school, school changed the computer.”

-Seymour Papert

I love technology, and I use it every single day. I teach with it, and I learn with it. Without technology, my teaching and learning would suffer.

However, too much of what is being sold as "Education Technology" merely shoehorns technology in to supplement traditional, less-than-optimal teaching and learning practices, which ultimately leads the classroom to revert to the way it was before.

Here are three mistakes schools and teachers make when integrating technology:
1. Technology is used to prove, instead of improve: Test-and-punish accountability regimes have convinced schools (and often mandated) that technology be used to mine students for spreadsheet-friendly data. For too long, teachers have been lured by the efficiency of multiple-choice tests, and with technology it is easier than ever to reduce learning to tests and grades.
Psychologist Carol Dweck reminds us to ask, “Why waste time proving over and over how great you are, when you could be getting better?” Proving and improving are not the same thing and if we aren’t careful, technology can be used in wonderful ways to prove the quality of our schools and teachers without improving learning.
2. Technology is used by the teacher, not the students: Interactive White Boards (Smartboards) are the definitive example of how schools can spend a lot of time, effort and money on technology that is almost exclusively used by the teacher while students sit passively, waiting to be filled with knowledge.
In his book Pedagogy of the Oppressed, Paulo Freire called this the banking concept of education, where "Education thus becomes an act of depositing, in which the students are the depositories and the teacher is the depositor". Too much of our schools' technology demands that students simply sit and do nothing.
3. Poor Pedagogy + Technology = Accelerated Malpractice: Classroom management and tracking programs like ClassDojo are used to elicit compliant behaviour and are sold as a daring departure from the status quo when really they are a tactic taken from the same behaviourist strategies that have been strangling the life out of classrooms for decades.
Schools and teachers who ignore technology risk becoming irrelevant to their students, and this is unacceptable. However, it is equally wrong to use technology as a 21st century veneer on the “sit-and-get, spew-and-forget” model of learning that has dominated our schools for too long.

Teachers and schools must be mindful of their pedagogical practices, because if they are not, technology will only make things worse, faster.

Thursday, October 4, 2012

This. Changes. Everything.

If you're like me, you know all too well how little school has changed over the last 150 years. Many people hold technology up as the long awaited disruptive force that will smarten up our glacial education systems.

The wait is over.

Change is here!

If you thought the Scantron machine and the Khan Academy were disruptive forces on education, you're going to blow your load when you hear about this.

YouTube is testing a feature that allows you to create a multiple choice quiz.

Picture by Anthony Ha

This. Changes. Everything.

What a deliciously disruptive idea! I would have never thought of taking something like multiple choice tests and making them digital. Seriously, how do people come up with such crazy creative ideas? I'm sarcastically blown away with how this will change the classroom. Consider this:
  • Teachers won't have to tell (or yell at) students to NOT write on the test. Teachers will never need to photocopy and re-photocopy those multiple choice tests that they made 15 years ago.
  • Teachers won't have to waste time plugging in the Scantron machine or risk hearing loss when the Scantrons are run through the machine and are turned into replica AK-47s.
  • Teachers will never again have to try and decode the Scantron analysis sheet. All the data-analysis can be done mindlessly by YouTube.
  • Rather than wasting time allowing children to learn by doing something in a context and for a purpose, teachers can reduce needlessly complex and ambiguous concepts such as citizenship, creativity, communication and critical thinking to a YouTube video and an online multiple choice quiz.
Let's stop here for a minute before my sarcasm breaks the Internet.

In case you missed it, I think this is the:

Worst. Idea. Ever.


Why?

1. Multiple choice tests are a primitive form of assessment that has no place in a classroom that is interested in allowing children to construct an understanding for themselves in a way that encourages them to show not only what they know but what they can do with what they know.

2. Too often technology is used as a "shoehorn" -- that is, in a way that merely supplements traditional, less-than-optimal teaching and learning practices which ultimately leads the classroom to revert to the way it was before.

3. Using multiple choice tests on YouTube videos makes me think of what Seymour Papert said: "Before the computer could change school, school changed the computer."

4. The Khan Academy and online multiple choice tests are the definitive way of improving school by changing nothing. In other words, online multiple choice tests are nothing more than more of the same and a great example of how not to use technology.


Let me be crystal clear: I use technology every day, and I know that if educators ignore technology, we risk becoming irrelevant to our students. This is unacceptable. But if we are so drunk on technology that we spend all our time making our jobs easier by "improving" what we've always done, we are just as irrelevant to our students.

For every discussion we have about technology, we need to talk twice about pedagogy.

Chris Lehmann's comment on this post speaks volumes:
Simply put... if the best use of the technology at our disposal we can imagine is the evolutionary filmstrip and the evolutionary scantron, our failure will be epic and tragic.

Friday, September 7, 2012

14 reasons why multiple choice tests suck

John Spencer wrote a post where he outlines 14 reasons why multiple choice tests suck. Consider these points:
  1. Multiple choice is shallow. I know people try to create analytical, evaluative and creative questions. However, the medium itself is one of recall and recognition.
  2. Knowledge is often connective and deeply rooted in context. Multiple choice takes away connective thinking and puts it into empty silos, where knowledge is reduced to lowest common denominator. I want students to know information. However, I also want them to know applied information. I want them to put the information together.
  3. Multiple choice is unreliable. As long as a student can guess a correct answer, every question is somewhat suspect. I get it. Teachers can look for overall trends and take out the statistical probability of guessing. However, how does that help anyone figure out if a child needs additional support with a concept?
  4. There is rarely a chance to explain why something is true. Students should be able to articulate what they know and give a defence for why it is true.
  5. Multiple choice kills the desire to learn. It might not sound like a big deal, but every time my students take a test, they are less likely to enjoy what they learn.
  6. If you try for critical thinking, multiple choice tests become subjective. Students end up with questions like, "Which of the following best describes . . ." and the test becomes meaningless.
  7. Multiple choice does not allow for nuance, paradox or mystery.
  8. Students need to see knowledge as contextual and personal. Multiple choice tests are standardized, impersonal and void of any real context. Students internalize the idea that learning is something irrelevant.
  9. It reduces self-efficacy. Multiple choice tests fail to allow students to find information themselves and make decisions about their learning. To do so would make tests "unreliable." Too many variables. Unfortunately, life has multiple variables and "learn to be a critical thinking citizen" cannot be easily measured.
  10. Multiple choice pushes students toward a narrow, cerebral definition of learning. Multiple choice does not allow for social learning. Instead, students must prove what they know in isolation. It doesn't allow for multiple modalities or differentiation, either.
  11. Google has replaced multiple choice. In an era when knowledge is instantly available, we should be seeing whether students can find information, ask deep questions, engage with sources, curate what they find and find the bias in a source. Multiple choice doesn't allow for any of these necessary skills.
  12. It's way too easy to cheat, both for teachers and for students.
  13. Learning is often a creative endeavour. However, multiple choice tests do not allow students to be creative, divergent or innovative.
  14. Multiple choice tests are often not valid. For example, if a standard is a concept standard, it often cannot be assessed in multiple options. If a standard is a process or a product, it should probably be assessed by actually doing the process and making the product. So, we're left with the assessment of skills, and only those that fit nicely into multiple options.
I also wrote a post on the folly of multiple choice tests.

What do you think?

Wednesday, August 1, 2012

Strength of multiple choice tests

A strength of multiple choice tests is that they allow us to assess large groups of students simultaneously and identically.

A fatal flaw of multiple choice tests is that they deceive us into thinking we can (and should) assess large groups of students simultaneously and identically.

Tuesday, May 1, 2012

The Folly of Multiple Choice

"Anyone can confirm how little the grading that results from examinations corresponds to the final useful work of people in real life."

-Jean Piaget


It's final exam time at my school, and my teacher colleagues are collectively herding to the multiple-choice test scoring machine. For just under $800 CAD, our scoring machine can:
  • Scan up to 35 sheets per minute 
  • Grade up to 100 questions per pass 
  • Score exams with up to 200 questions 
  • PC compatibility for advanced data collection and analysis
The front of the instruction manual proudly reads “GRADING YOUR TESTS JUST GOT EASIER!” After watching scoring sheet after scoring sheet scurry through the scoring machine, I can personally attest to how easy this really is. It's no secret why multiple choice exams are so popular among teachers – their utility is second to none. But what are the cons of multiple choice tests? Here are a few items to think about before giving your next multiple choice test:


Ambiguity

Misinterpreting a question can result in an "incorrect" response, even if the response is valid. A free response test allows the test taker to make an argument for their viewpoint and potentially receive credit. Depending on the number of possible answers that are provided, a test taker could have a chance of completely guessing the correct answer. It is conceivable for a student to select the wrong answer for the right reasons or to select the right answer for the wrong reasons. The results of such a multiple-choice exam are surrounded with uncertainty and doubt.


No partial credit

Even if a student has some knowledge of a question, they receive no credit for knowing that information if they select the wrong answer. Free response questions may allow a test taker to demonstrate their understanding of the subject and receive partial credit.
Even carefully constructed exams that reflect very detailed curricula can improperly assess students. If an exam was created to carefully reflect a certain curriculum, you might see only one question that covers a specific outcome. What if a student did in fact understand that outcome but, for any number of reasons, got the question wrong? The test would then report that the student understood nothing of that concept – which would most likely be wholly misleading and untrue. How often can a teacher honestly report that a student understands nothing?

Overemphasis on timeliness

A premium is placed on speed at the cost of creativity and thoroughness. This overemphasis on timeliness also contributes greatly to the ambiguity of the exam. Most test takers are taught to madly fill in the remaining answers before their exam is taken away by the exam supervisors, and there is no way to differentiate these random guesses from responses that were carefully and thoughtfully selected. Recognizing guessing as a problem, some test creators enact a penalty, such as deducting a mark for incorrect answers – the hope being that test takers will not guess and will instead leave the question blank. This may stop the guessing, but it still does not address the ambiguity: all those unanswered questions will simply show that the test taker got them all wrong, when in truth the test taker may have had some level of understanding. Because they couldn't finish in time, or were too scared to guess, they receive no credit.


Subjectivity

How is the length of the exam decided? How many questions are necessary to show enough understanding? In the case of reading comprehension exams, how many reading selections will there be, and what is an appropriate length? How many answers will there be to select from? Which outcomes will be tested? Which will be excluded? Which will be more heavily weighted?

Depending on the date, the question "how many planets are there in our solar system?" has a different answer. What about all those students who were penalized for excluding Pluto as a planet before 2006?

The point here is not to try and figure out the answers to these questions; rather, there is no one answer to these questions. And yet the choices made by the test maker can have an immeasurable effect on the test's results. One of my favorite quotes on the subjectivity of tests and grades comes from Paul Dressel, who said, "A mark or grade is an inadequate report of an inaccurate judgement by a biased and variable judge of the extent to which a student has attained an indefinite amount of material."


Behaviouristic in nature

These tests only care about whether the student got the right answer. They can't measure whether the student has a true understanding of the content. Even in a subject like math, which can be (mis)labelled as very black and white, right or wrong, it should very much matter how a student comes to answer 2 + 2 = 4. Did that student simply memorize his cue cards, or does he actually understand the addition process? A multiple choice test does not and cannot concern itself with such valuable information.




Poor Testing can lead to Poor Teaching

Some teachers may use multiple choice exams voluntarily while others may find their use compulsory. Either way, teachers may feel pressure to achieve high scores on these tests, and that kind of pressure can lead to poor teaching, such as lecturing on behalf of the teacher and memorization on behalf of the student. Take math, for example: many teachers teach tricks or shortcuts such as, when dividing two fractions, simply flip the second fraction and multiply. A student could mindlessly comply and perform quite well by choosing the correct multiple choice answer. In cases like this, a poor assessment tool has led to a poor teaching technique (one that relies on mindless compliance and memorization rather than true understanding); however, if we use the test scores as an indicator of learning, that teacher and student appear successful. Inferences made from multiple choice tests are thus undermined, leaving the genuinely successful and the merely superficial students indistinguishable.


Interrater Reliability

Multiple choice exams are created with one right answer in mind for each question. This straightforward scoring system is used so that any two raters will always agree upon how well a student did. This need for agreement, also known as interrater reliability by statisticians, is gained at an alarming price: authenticity is sacrificed for (perceived) reliability.

If we were compelled to identify who truly benefits from this kind of artificial measurement, I sincerely doubt anyone could honestly say that this is for the kids. Ultimately, this is an example of the needs of the system trumping the needs of the learner. Alfie Kohn puts it this way:
"You know it's a bad assessment if it's multiple choice. Multiple choice tests can be clever but they can't be authentic. You can't learn what kids know and what they can do with what they know, if they can't generate a response - or at least explain a response. Or as one expert in psychometrics told me many years ago, "Alfie don't you get it, multiple choice tests are designed so lots of students who understand the material will be tricked into picking the wrong response". That's why teachers would never dream of giving a multiple choice test of their own design because the same thing applies there."

Testing Test-taking skills

Multiple choice exams require a certain amount of test-taking skills, and some students have better test-taking skills than others. Many teachers will actually teach students strategies for writing multiple choice exams. For example, some test takers understand that an answer containing the words "always" or "never" is usually NOT the correct answer, because rarely is something ever "always" or "never". This is considered a fairly good strategy, and students who are aware of it may have a better chance of doing well.

However, some test takers have come to believe in poor strategies. For example, some students believe the pattern of responses matters, and so they say to themselves, "This can't be another 'b' answer, as we have just had three in a row." Or they believe in myths such as "when in doubt, pick C". Granted, we can all probably agree these are silly strategies, but what if students actually use them? The format of the exam has skewed the measurement of that student's learning.


Averaging Averages

Traditional practice encourages test raters to not only mark each question right or wrong, but to also tally up the number of correct responses and compare that to the total number of questions – of course, we know this to be the average or mean. However, what does this number actually tell us?

Let's pretend there are three questions on the test for every outcome we taught. You could then look at the data and see how many of those three questions a specific student got right or wrong. Say that for those three questions a student got 1 out of 3 correct, but for another three questions, testing a different outcome, the student got 2 of 3 correct. Separately, he understood 33% of the first outcome and 67% of the second outcome. However, when you average these averages, he gets 3/6, which comes to a mark of 50%.

What do these numbers mean anymore? Imagine how diluted the average has become when you have 50 to 100 questions that may be measuring the same number of different outcomes. And yet these grades' importance is elevated to grand heights. (Note that the problem of averaging averages is not exclusive to multiple choice exams)
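The dilution described above can be sketched in a few lines of Python. The scores here are hypothetical, chosen only to mirror the 1-of-3 and 2-of-3 example:

```python
# Hypothetical scores: each outcome is tested by 3 questions (1 = correct).
responses = {
    "outcome_1": [1, 0, 0],  # 1 of 3 correct -> roughly 33%
    "outcome_2": [1, 1, 0],  # 2 of 3 correct -> roughly 67%
}

# Per-outcome marks keep the diagnostic detail...
per_outcome = {k: sum(v) / len(v) for k, v in responses.items()}

# ...while the overall mark collapses both outcomes into one number.
total_correct = sum(sum(v) for v in responses.values())
total_questions = sum(len(v) for v in responses.values())
overall = total_correct / total_questions

print(per_outcome)  # very different performance on each outcome
print(overall)      # 0.5 -- the 50% mark reveals neither
```

The two per-outcome marks tell a teacher where to intervene; the single 50% tells them nothing either way.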





Collaboration = Cheating

Ask any parent for a list of characteristics they wish their children to develop as they grow into adults and there is a very good chance that collaborative skills are somewhere on that list. When you think back to your schooling, how often were you permitted to collaborate with others during examination? If you did try to collaborate, we all know what that was called – cheating! And you got in trouble for it.

There may be some progressive classrooms out there, but it would be a very safe bet that most classrooms still have students sitting and writing their exams in isolation. Regardless of your job or profession, how often are you told to figure something out in total and complete isolation – no books, no help, no talking? In the real world, there simply aren't many times you are expected to solve a problem or perform a task in complete and total isolation – and even if you were, it would be awfully archaic to be refused the opportunity to reach out for the help you needed to get the task done.

When we say to children, "I want to see what you can do, not what your neighbor can do", this turns out to be code for "I want to see what you can do artificially deprived of the skills and help of the people and resources around you, rather than seeing how much more you can accomplish in a well-functioning team that's more authentic to real life." (Again, note that the lack of collaboration during exams is not exclusive to multiple choice exams.)


Thinkingcuffs

The very nature of multiple choice tests slaps students with a pair of thinkingcuffs. Who does the majority of the thinking on a multiple choice exam? Who asks all the questions? Who proposes all the answers? Thinking is messy. Learning is messy, but multiple choice tests conveniently remove the mess. All students are required to do is circle or fill in a dot. If we were truly interested in assessing student learning, shouldn't we encourage the students to show us as much of their thinking as possible? Because no one can construct meaning in a preconceived bubble, reducing something as beautiful as learning to a bubble sheet is an exercise in needless oversimplification.



Differentiated Instruction and Undifferentiated Assessment

Many teachers today would readily admit that all learners learn differently, and that it is the teacher's responsibility to address these different learning styles with differentiated instruction; however, many teachers still use multiple choice tests in an attempt to measure their students' learning. There is a real disconnect between our understanding of differentiated instruction and our attempts to measure learning with undifferentiated, standardized assessment tools.

While it is true that all children should have the opportunity to get an education that does not mean that all children should get the same education. When it comes to instruction and assessment, we need to stop trying to meet the needs of all learners by pretending all learners have the same needs.


Value what we Measure or Measure what we Value

It is true that it makes good sense to occasionally stop and reflect upon how well we are learning – the rest of the time we should concern ourselves with actually learning whatever it is we have set out to learn.
A short anecdote may enlighten this point: A man was seen on his hands and knees searching underneath a street light. It was late at night and very dark. When a passerby inquired what the man was doing, the man said that he was looking for his lost keys. The passerby then noted that the man was fortunate that he had lost his keys under the street light. The man quickly replied that he actually lost his keys a distance to the north, but it was too dark over there, and so he wanted to search where it was easy to see.
There is a big difference between measuring what is simply easily measurable and measuring what we actually consider important. Multiple choice tests measure a very limited and narrow kind of learning. If a great amount of importance is placed on these kinds of tests, people will come to see these limited and narrow kinds of learning as most important – sacrificing their pursuit of other valuable kinds of learning that are rarely measured on multiple choice exams.


While a lot of people concern themselves with what will be on the test, I find myself thinking more about what can never be on these kinds of tests. Show me the multiple choice test that can assess things like sense of humor, morality, creativity, ingenuity, motivation or empathy.




***


Too many education systems have confused measurement with assessment and forgotten that the Latin root of assessment, assidere, translates to "to sit beside". Assessment isn't a spreadsheet -- it's a conversation.

Multiple choice tests were originally tools used by teachers, but today teachers are tools used by multiple choice tests. This shouldn't come as any surprise, especially if you are familiar with the work of Marshall McLuhan, who once said, "We shape our tools and thereafter our tools shape us."

Despite all these reasons for abandoning the use of multiple choice tests, their utility seems to trump their consequences. What's even more discouraging is that many teachers still choose to use multiple choice exams despite having a plethora of more authentic assessment alternatives such as performance assessments, portfolios, written response and personal, two way communications.

Teachers who continue to use multiple choice exams as their primary or default assessment tool are engaging in a kind of educational malpractice, because they are reporting on their students' learning in a way that may range from marginally inaccurate to wholly untruthful.

I asked Irmeli Halinen, head of curriculum in Finland, how often a teacher in Finland would use a multiple choice test as a way of assessing their students. Her answer said it all:
"Our teachers rarely if ever use multiple choice tests because they would rather have their students do something real."



Wednesday, April 11, 2012

Here's how I use Stickman Art to assess

Years ago I dedicated myself to drawing more with my students, so I searched the Internet for how to draw Stickmen. That's when I came across a college student's web site where they shared their stickman doodles. Here is a booklet that shares examples of how to draw stickmen doing all sorts of stuff.
Stickmen: a booklet of stickman pictures that my students sometimes use to inspire their projects.


Drawing is something I've never done very much of, and so it shouldn't be any surprise that I'm not very good. Because I am not very skilled at drawing, I make sure to draw in front of my students so that they see how I cope with doing something that I am not very skilled at.

I make a point of modelling patience and perseverance while drawing. I also make a point of never being ashamed or insecure about drawing. I want my students to see that I am pleasantly frustrated about learning to draw. Despite my modelling, many children are incredibly anxious about drawing in front of others. When it comes to their drawing skills, too many of them have what Carol Dweck would describe as a fixed mindset.

I provide my students with copies of the above booklet to help nudge and inspire them towards drawing something. Some kids don't need it while others are desperate for somewhere to start. For them, this little booklet of stickmen is a lifeline to doing something creative and original.

To integrate drawing into language arts, I often have my students try and draw new and challenging vocabulary that we encounter in our reading, viewing and writing.

In science, I've had students create comic strips that illustrate their understanding for concepts and vocabulary like buoyancy and density. I've often used these drawing projects as a substitute for the multiple choice tests that I used to think were necessary to assess student learning.

Here's an example of a project that two students collaborated on as a way to exhibit their understanding in science class. It was displayed on the wall as a mural in the hallway near my classroom door.

After missing a payment, the fish mafia
sent Mikey to "swim with the humans."

(Styrofoam, which is less dense than water, will float because its particles are less dense, or crowded. It has a low mass with a high volume, making it buoyant.)
Having children draw their understanding for what they are learning in school can provide them with the opportunity to create something from scratch that is their very own... or we can have them fill in 1 of 4 bubbles...

Unfortunately, I have to admit that some children would rather opt out of doing something creative in favor of filling in a bubble sheet -- instead of seeing this as a learning style to be accommodated, I see it as a problem to be solved.

And stickmen are a great place to start.

Friday, February 17, 2012

Scantrons for Science

Dave Martin's witty post on Why we really do need scantrons finished by inviting others to share creative ways scantrons could be used to support real learning. Craig left this brilliant comment:

Drop a scantron off the school and time how long it takes to drift down to the ground. Next, poke all the holes out of the scantron sheet and re-time it. Do the holes make it fall faster? Slower?

Tuesday, January 31, 2012

Why we really do need scantrons!

David Martin teaches high school math in Red Deer, Alberta. You can find David's blog here and follow him on Twitter here. This post was originally found here.

by David Martin

Most schools buy Scantron sheets by the thousands, and at approximately $100 per pack of 500, the cost per year must be over $1,000. For that kind of annual cost, I figured there must be uses for scantrons beyond the traditional one. After some thought, I am now a believer that Scantron sheets do have a place in schools. Below is a list of uses, broken up by subject (my apologies that it isn't organized by grade):
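As a quick sanity check on those figures, here is a back-of-the-envelope sketch of my own (the dollar amounts are the assumed figures from the paragraph above, not anything more precise):

```python
# Rough cost arithmetic for the scantron estimate above.
# Assumed figures: $100 per pack of 500 sheets, $1,000 spent per year.
pack_price = 100.0
sheets_per_pack = 500
annual_spend = 1000.0

cost_per_sheet = pack_price / sheets_per_pack                  # $0.20 per sheet
sheets_per_year = annual_spend / pack_price * sheets_per_pack  # 5,000 sheets

print(f"${cost_per_sheet:.2f} per sheet, {sheets_per_year:.0f} sheets per year")
```

In other words, a $1,000 annual spend buys roughly 5,000 sheets -- which is itself a number students could work out as a math exercise.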

Math

· Provide each student with a scantron sheet and ask them to guess the correct answer to the first question without being able to see the question. Talk about the percentage of students in the class who would have guessed correctly. Extend this to two questions and so on. You can then talk about the math behind probability.

· Turn the scantron sideways and draw graphs. A constant graph would be where all the answers are A; an oscillating graph would go ABCDCBA... I bet there are many different graphs which could be drawn. You could then talk about slope, absence of concavity and so on.

· Using the scantron, you could create a ratio of surface area to volume, or perimeter to surface area, and so on.

· You could talk about the cost of buying scantrons, create a cost graph, calculate the slope and y-intercept, and explain what these values mean.
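The probability idea in the first bullet is easy to make concrete. A minimal sketch of my own (the function name is illustrative, and it assumes every question has four answer choices):

```python
# Probability that a pure guesser answers the first n questions correctly,
# assuming each question has the same number of answer choices.
def p_all_correct(n_questions, n_choices=4):
    return (1 / n_choices) ** n_questions

# With 4 choices, the odds shrink geometrically: 1/4, 1/16, 1/64, ...
for n in range(1, 4):
    print(f"{n} question(s): {p_all_correct(n):.6f}")
```

Students could compare these theoretical values to the fraction of classmates whose blind guesses actually turned out to be correct.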

English

· Have younger students read the instructions on the scantron sheet and have them explain to a classmate what the instructions mean.

· For students learning the alphabet, have them create the different letters by connecting the dots on a scantron in different formations. (“U” might be a tricky letter!)

Other languages (French, Spanish, etc):

· Have students translate the meaning of the instructions on the scantron sheet into the desired language.

Physics

· Push a marble along the floor and ask the class to estimate the number of scantron sheets, stood vertically, it would take to stop the marble. Increase the speed of the marble and estimate again. Extend this to the problem "How many scantrons would stop a car going 30 km/h?" (I would like to know this answer!)

· Talk about air resistance as you drop a scantron from a desk to the floor, and then ask, "Would the same thing happen in a vacuum?"

Chemistry

· Determine how much water one scantron can hold by weighing 20 of them dry and then soaking them in water and comparing the weight difference.

· Take one scantron and light it on fire. Talk about the combustion of paper. You can extend this by dipping scantrons in different liquids and talking about the difference in speed of ignition.

Art

· Instead of using construction paper, have students cut up Scantron sheets and make art with them. They can be colored on, and easily cut!

· Have them draw a picture only by connecting the dots on the Scantron.


The best part of these activities is that you don't need a scantron machine ($5,000) to do them... oh wait... you don't even need the scantrons!

However, if you have other activities, I invite you to share below!

Happy Marking!

Friday, February 18, 2011

Limits of Twentieth Century Assessments

I have a heightened gag reflex when it comes to selected-response assessments. Because multiple choice, true/false, matching and fill-in-the-blank assessments place a kind of thinking handcuffs on learners in the name of quantifiability, I don't use them. Ever.

I find it disturbingly odd that such assessments continue to coexist with our understanding of how magnificently messy real learning really is.

I think that is why I love this quote from Cathy Davidson:

One reason item-response testing (the twentieth century's dominant method of testing) is so deficient is that it tends to reduce what we teach to content (especially in the human, social, and natural sciences) or calculation (in the computational sciences). Think of the myriad ways of knowing, making, playing, imagining, and thinking that are not encompassed by content or calculation.

Thursday, October 28, 2010

Multiple choice tests: cui bono?

Multiple choice exams are the definitive way to make invalid and unreliable inferences about learning.

But if multiple choice exams are not for the kids or even for the teachers, then why are they so predominant in schools? The Latin adage cui bono (literally, "as a benefit to whom?" or "to whose benefit?") can provide us with some meaningful reflection upon practices that we have come to take for granted as given truths.

Cui bono? is a tough question to ask because we might have to question our own practices and premises, which can be more than a little uncomfortable - especially when blaming the kids seems to be far easier and infinitely more comforting.

But what if the kids are not to blame?

What if it's not the kids who need fixing?

What if we shouldn't be giving multiple choice exams in the first place?

If this is the case, then who are multiple choice exams for? Let's first tackle this question by establishing whom they are not for. Firstly, they aren't for the kids. Want proof? Think back upon your own schooling - to what extent can you associate a thought-provoking, rich learning experience with multiple choice tests? How many multiple choice tests did you complete and walk away saying, "boy, that was a great way for me to exhibit my learning!"?

Secondly, they aren't for good teachers. Any results extracted from a multiple choice exam are at best data-rich and knowledge-poor. Anyone using data must understand that what we see largely depends on what we look for - and because multiple choice exams equate to a kind of thinking handcuffs, they simply don't let us see anything of value. The best teachers know that any attempt to reduce something as rich and messy as learning to a number or symbol is infinitely foolish, intellectually indefensible and morally bankrupt. Alfie Kohn puts it this way:


In my experience, the people who work most closely with kids are the most likely to understand how harmful standardized testing is. Many teachers – particularly those who are very talented – have what might be described as a dislike/hate relationship with these exams. Support for testing seems to grow as you move away from the students, going from teacher to principal to central office administrator to school board member to state board member, state legislator, and governor. Those for whom classroom visits are occasional photo opportunities are most likely to be big fans of testing and to offer self-congratulatory sound bites about the need for “tougher standards” and “accountability.” The more that parents and other members of the community learn about these tests, the more critical of them – if not appalled by them -- they tend to become.

Thirdly, they aren't for the parents. When parents speak with their child's teachers, they don't really want to know which quartile or stanine their child fits in. At the heart of their concerns is their desire to know how their child interacts with others. Are they caring, empathetic, responsible, hard working, respectful, kind? The data analysis printouts that multiple choice tests spew out merely act as a side-show distraction from what really matters in the development of a child.

So if multiple choice tests are not for the kids, and they're not for the teachers or parents, who are these things for? Who benefits from such assessment witchcraft?

Alfie Kohn offers an interesting take on the purpose of multiple choice tests: 

You know it's a bad assessment if it's multiple choice. Multiple choice tests can be clever but they can't be authentic. You can't learn what kids know and what they can do with what they know, if they can't generate a response - or at least explain a response. Or as one expert in psychometrics told me many years ago, "Alfie, don't you get it, multiple choice tests are designed so lots of students who understand the material will be tricked into picking the wrong response". That's why teachers would never dream of giving a multiple choice test of their own design, because the same thing applies there.


So who would ever do this to children?

I call them The Suits.

Let me explain:

I've written before that accountability has come to be defined as more control for people outside of the classroom over those who are in the classroom, and I often refer to these life-sucking data-mongers as The Suits; and if schools are to be run by Suits who don't even work in the same building as the kids, they need data to drive everyone else's decisions.

The Suits may need multiple choice tests, but we don't need either of them.

Before you give another multiple choice exam, go ahead and ask cui bono?

Saturday, October 2, 2010

Alberta Provincial Achievement Test Taking Tips

While looking at Alberta Education's information on Provincial Achievement Testing, I noticed that this document had Test Taking Tips on page 7.

I thought it peculiar that the government took the time to give a number of test taking tips, but neglected to suggest that guessing at a question is more advantageous than leaving it blank - or that if you run out of time, you should madly fill in the remaining bubbles, because any answer, even a guess, is better than a blank.

Why did the government choose not to provide this very popular and wise test taking tip? I wonder if they would endorse such a tip?

Wednesday, September 22, 2010

Scientifically Tested Tests

The New York Times featured a brilliant article by Susan Engel on September 19. She wrote eloquently about rethinking assessment. 

Her main points are worthy of serious reflection:

  • why are we so hell-bent on census testing when we could be assessing samples of students?
  • why do we continue to use such limited assessment formats such as multiple choice?
  • why do we continue to stress the importance of such limited measurements when there is good reason to believe that some of the worst teachers successfully hide behind such narrow goals and rigid scripts?
  • why do we subscribe to such a behaviourist approach of rewarding and punishing schools based on test scores that tend to conceal more than they reveal?
  • if even a fraction of these questions have validity, then the real question is cui bono? Who benefits from the use of such limited measurements?
As much as I love to ask challenging questions, I can see the merit in offering up probable solutions, and this is exactly what I love so much about Susan Engel's article. She provides a number of substitutes to traditional, fill-in-the-bubble standardized tests:

Instead, we should come up with assessments that truly measure the qualities of well-educated children: the ability to understand what they read; an interest in using books to gain knowledge; the capacity to know when a problem calls for mathematics and quantification; the agility to move from concrete examples to abstract principles and back again; the ability to think about a situation in several different ways; and a dynamic working knowledge of the society in which they live.
This task is not as difficult as one might think. In recent years, psychologists have found ways to measure things as subtle as the forces that govern our moral choices and the thought processes that underlie unconscious stereotyping. And many promising techniques already used by child development experts could provide a starting point for improving school assessments.
For instance, using recordings of children’s everyday speech, developmental psychologists can calculate two important indicators of intellectual functioning: the grammatical complexity of their sentences and the size of their working vocabularies (not the words they circle during a test, but the ones they use in their real lives). Why not do the same in schools? We could even employ a written version, analyzing random samples of children’s essays and stories.
Psychologists have also found that a good way to measure a person’s literacy level is to test his ability to identify the names of actual authors amid the names of non-authors. In other words, someone who knows that Mark Twain and J. K. Rowling are published authors — and that, say, Robert Sponge is not — reads more. We could periodically administer such a test to children to find out how much they have read as opposed to which isolated skills they have been practicing for a test.
When children recount a story that they have read or that has been read to them, it provides all kinds of information about their narrative skill, an essential component of literacy. We could give students a book and then have them talk with a trained examiner about what they read; the oral reconstruction could be analyzed for evidence of their narrative comprehension.
Researchers have also found that the way a student critiques a simple science experiment shows whether he understands the idea of controlling variables, a key component in all science work. To assess children’s scientific skills, an experiment could be described to them, in writing, and then they would explain how they would improve upon it.
Of course, these new assessments could include some paper-and-pencil work as well. But that work would have to measure students’ thinking skills, not whether they can select a right answer from preset options. For instance, children could write essays in response to a prompt like, “Choose something you are good at, and describe to your reader how you do it.” That would allow each student to draw on his area of expertise, show his ability to analyze the process, describe a task logically and convey real information and substance. In turn, a prompt of, “Write a description of yourself from your mother’s point of view,” would help gauge the child’s ability to understand the perspectives of others.


Tuesday, August 17, 2010

Negotiable Formats

Unless the exam format is literally the standard or objective, the format of the exam or project should be negotiable.

Unless you can show me a standard from anywhere in the world that specifically outlines the ability to answer a multiple choice question as the learning outcome, why in the world would we ever provide children with no other format to exhibit their learning?

And yet how many quizzes, mid-terms, final exams, standardized tests, homework assignments, minor assignments or major projects invite children to negotiate their format?

The lack of legitimate answers to these questions tells us one thing: most assessment formats are less about the students and more about the system. Cui bono?

This is exactly why there is such a chasm between the idea of differentiated instruction and standardized assessment.

The paradox is enough to drive anyone to drink.

Saturday, August 7, 2010

A good multiple choice question?

Here is the only multiple choice question I will ever endorse:

Choose the best answer:

A. Test

B. Teach

Sunday, July 18, 2010

Putting in time

I was visiting my 94-year-old grandfather the other day when I asked him if he had to do multiple choice exams in school. He said that he didn't, and that he was only kind of familiar with what they were.

I took some time and explained the basic idea of a multiple choice question.

After a moment of thought, he turned to me and said, "Sounds kinda like just put'n in time."

I laughed out loud - it was funny because it was so true! I couldn't summarize the whole idea of multiple choice exams any better.

It was pretty neat to have my 94-year-old grandfather make me laugh like that. Where his body is failing him, his mind is, at times, as sharp as ever.

Thanks Grandpa!

Thursday, June 24, 2010

You can't construct meaning in a preconceived bubble

When I share with others the language arts and science performance assessments I do with students instead of multiple choice final exams, I often get asked these kinds of questions:

1. How do you determine from these responses if they are comprehending well enough to understand the level of reading required for the next grade?

2. If they are experiencing difficulties in comprehension, how is this shown in your assessment project and what do you do to remediate this?

3. How do you translate this into constructive feedback for students and parents?

These are great questions. Here are my answers:

1. How do I assess if a student is comprehending? Let's be careful not to pretend that we actually know what it means to be reading at a grade 8 level versus a grade 9 level. Psychometricians might like to believe they can empirically diagnose a student's reading level down to a decimal point, but I would wager that most teachers understand the pitfalls of claiming such arrogance.

At best, multiple choice tests offer us spurious precision in measuring a child's ability to decode text. Because reading is first and foremost about constructing meaning, the comfort we might gain from a multiple choice test's pseudo-structure must be seen for what it is: a subjective rating masquerading as an objective assessment.

Grades, stickers, happy faces, check marks, gold stars and other bribes that are born out of reductionist assessments such as multiple choice tests are not a kind of feedback to be accommodated; rather, they are problems to be solved and practices to be avoided.

Rather than concerning myself with "are they ready for the next grade," which is an infinitely ambiguous question, I concern myself more with questions like:

Do students enjoy reading?
Do they read willingly?
Do they construct meaning from reading?

How do I know if a student is constructing meaning? I observe it. Here's a sample of Liam's project. On the left is a poem he selected, and on the right are his thoughts. Click on the picture below to view it larger.



Look at his response! How could I ever have created a multiple choice question that would allow Liam to share with me his connection between a line of fan-fiction poetry from The Legend of Zelda and a Canadian Broadcasting Corporation radio talk-show that featured a 20-minute discussion on the concept of time?

Some might say this is too much work, and they don't have time to look at a student's work like this. To this I say don't be intimidated. The bulk of my assessing occurs while students are actually doing this.

How do I know he's ready for the next grade? I didn't need to number crunch or analyze data, I simply needed to observe him do this.

As I write this blog post, Liam is sitting next to me. I think he just summarized all this nicely:

You can't construct meaning in a preconceived bubble.

2. How do I know if a student is having difficulties comprehending? In three words, here is my answer: I observe it. There is no substitute for what a teacher can see with their own eyes when observing and interacting with students while they are learning.

Here's a sample from Lewis's project. About 2 minutes in, I saw him staring at his monitor like a deer in headlights; I quickly assessed that he was having difficulty. I walked over to him and asked if he needed help. He said yes. He asked me to provide an excerpt for him to read (it's on the left below). His thoughts are on the right. Click it to view it larger.



He was intimidated by words he couldn't understand - I know this because he told me. I told him to launch http://www.dictionary.com/ so he could look up these words. He then showed his thinking by sharing with me which words he had to look up. Even after using the dictionary, his dependence on copying definitions tells me he still isn't really getting it.

But what's the alternative? If I gave him a multiple choice exam, he would stare at it, dejected, thinking he's an idiot who can't read. To save his own pride, he would likely remove all effort, madly filling in the bubbles only to declare it all "stupid".

A multiple choice test for Lewis would, at best, show me what he can't do. This kind of project allows Lewis the opportunity to show me what he can do. Designing and implementing remediation with this kind of project is intuitive. In contrast, I have no idea how I would remediate for Lewis if I was given his bubble sheet item analysis.


Quite frankly, Lewis and I would both be at a loss if given such data.

3. The best feedback parents can receive about their child's learning is to see their child's learning. Back in the old days, show-and-tell, science fairs and barn dances were exhibitions of learning. Communities came together to observe and listen to students while they performed their learning - and if things went really well, parents and community members might have actually interacted with the students. No one needed to translate the results - everyone could see with their own eyes whether learning was taking place. However, if we throw a number or letter grade at parents, or God-forbid try to show them an item analysis sheet like the one above, no wonder they need someone to translate the assessment. Summative assessment does not need to be diabolically complicated: gather information and share that information. You might add a hint of evaluation, if needed, but otherwise, that's it. We don't need stanines, quartiles, class averages or even grades and tests. Just gather and share. Rinse and repeat.

An understandable rebuttal to this might be that teachers, parents and community members might not be willing to engage in this - they might not have the time or be willing to expend the effort. Too often, this is as sad as it is true. In an attempt to counter this apathy, technology might provide a kind of solution; on-line communities such as discussion forums, blogs, moodles, nings and other social networking software can alleviate problems brought on by the limitations of time and place.

IN CLOSING, I want to address one more criticism I face with my alternatives to language arts and science multiple choice exams. I have had teachers ask me if I will be grading these projects - of course, the question is a setup. What they are implying is that I am cheating my students because I won't be going through their projects with a fine-tooth comb, and that I am doing them a disservice by not spending as many or more hours grading the projects as the students did performing them.

To this I say, how long does it take to rip a bubble sheet through a scoring machine?

The difference between a multiple choice exam and a performance assessment is not that the multiple choice exam can be counted and the performance can't. When it comes to quantifying either one, their respective strengths become their own weaknesses. The utility of a multiple choice exam and its artificially convenient interrater reliability comes at an alarming price: authenticity is sacrificed for (perceived) reliability. Meanwhile, the genuineness of a performance assessment can be rich with both quality and quantity, but its validity comes at a cost: quantifiability may be sacrificed for authenticity.

It might be argued that neither can be properly quantified, but if Linda McNeil from Rice University is correct in saying that measurable outcomes may be the least significant results of learning, maybe we shouldn't be too bothered by it all.

Could you imagine a school district of teachers coming together on a district inservice day where they could sit down and share a wide range of performance assessments? Just imagine the kind of rich dialogue that would revolve around real learning. Now that would be professional development!

Thursday, June 17, 2010

Exam Secrets

As I publish alternatives to multiple choice exams, I have been sharing student samples via http://www.youblisher.com/. I have no reservations about showing my students' final projects from last year.

Would teachers who give multiple choice tests be so willing to share their multiple choice exams?

If an exam must remain a secret, what does that say about the exam itself?

Yet, sharing my language arts and science alternatives with students poses no threat to their learning or my assessment.

Discuss...

Wednesday, June 2, 2010

Alternatives to multiple choice science tests

I have blogged about alternatives to multiple choice reading comprehension tests and now I want to offer an alternative to multiple choice science tests.

For the most part, multiple choice science tests end up being glorified vocabulary exams. I'm not prepared to end a year's worth of class that was filled with cool experiments and thought-provoking discussion with a vocabulary test that asks students to fill in bubbles.

For my grade 8 science final exam, I break from traditional exam format. Step 1: I provide the students with zero questions.

I have my students select a collection of science concepts that we studied throughout the year and ask them to show their understanding. Here is a sample from Matthew:

Matthew Science

Here is a sample from the example that I created:

Teacher Exemplar

Because this does not follow the traditional rules of a final exam, I prefer to call it a project.

To complete this project, my students have typically used Microsoft Publisher, but keep in mind the project can be done on paper too. Here is a hand-written sample from Stephanie's project:

Stephanie Science



Last year, my students did this project in the gym while seated in traditional rows of desks. I allowed them to bring in a folder of clippings, which could include things like pictures, diagrams and paragraph excerpts (collecting these clippings serves as a very purposeful way of studying). I'm less interested in knowing if kids can draw the changes of state triangle, or even recognize it, and more interested in knowing what they can do with that knowledge. Rather than wasting time drawing the diagram, I would rather they spend their time sharing with me what they think of changes of state. As you can see in Stephanie's sample above, she brought in a picture of erosion and a paragraph on states of matter and then filled the page with her thoughts, ideas and understanding.
As a class, we decided that, on its own, cutting and pasting a picture or paragraph does not show much understanding. We agreed that for everything they paste on their project, they need to do something with it. This could include summarizing, explaining, sharing thoughts, feelings and opinions, making connections to other concepts, asking questions, telling stories, describing experiments, making metaphors, labelling diagrams, remembering field trips, etc.

Last year, Lizzie was the first to finish her project; she chose to write for just over an hour. Alex was the last to finish - she chose to write for just over two hours. In contrast, the other students who were writing a 60-question multiple choice test were all done inside of 45 minutes, and many were done inside of 30. On average, that's only 30 to 45 seconds per question...
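For what it's worth, the per-question arithmetic behind that claim can be checked quickly (a sketch of my own, not part of the original comparison):

```python
# A 60-question test finished in 30 to 45 minutes works out to
# 30 to 45 seconds per question.
def seconds_per_question(total_minutes, n_questions):
    return total_minutes * 60 / n_questions

print(seconds_per_question(45, 60))  # slowest finishers: 45 s per question
print(seconds_per_question(30, 60))  # fastest finishers: 30 s per question
```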

On average, my students write for about 1.5 hours while showing their understanding of 10-20 concepts. When I gather feedback from students, I hear over and over again that this alternative to multiple choice tests, while less stressful, requires far more time and effort on their part while actually allowing them to show their learning. I've heard with my own ears, after they've written for over an hour, students say that they actually had fun learning from this project. When was the last time you heard a student say they had fun learning from a multiple choice exam?

A student's love for showing their learning is not a fire we have to light; rather, it is a flame we must be careful not to extinguish. Just as curiosity is the cure for boredom, worksheets and multiple choice tests are the cure for curiosity. Only after years of schooling does a kid's thirst for sharing their learning dissipate and die.

If I were to stop now and ask if you had any questions, I would bet the farm you are all wondering how I assess this project.

If by assess you mean how do I calculate a mark or grade... then no, I don't. I've been with my students for 10 months. By the time this final project comes around, I already know all I need to know to assess their learning. And because I abolished grading from my classroom five years ago, I have no grades to average anyways. What we see with our eyes daily is more important than what students bubble in on one day of the year. To fully understand my perspective on grading, I suggest you take a look at this page of blog posts.

If by assess you mean how do I observe the students' learning... then yes, I do. I work with my students, always observing and listening to their learning. I don't labor over reducing their learning to a grade or a symbol - it's simply not a good use of my time. While it is true that this project cannot be run through the bubble-sheet machine, we must remember not to allow a misguided obsession with counting and measuring to narrow the kinds of learning opportunities we provide children, especially when it is easily arguable that the best kinds of learning are in fact immeasurable.

If we are to design authentic learning environments for students, we must resist the urge to shift our focus from the learner to the teacher too quickly. Asking how something might be properly assessed is not a bad question, but if it dominates our thinking, we may justify withholding projects that students would love to do and learn from but that are hard for us to assess. If we are not mindful of all this, assessment, ironically, can become a saboteur of learning.

Remember that no project or exam is the end-all-be-all for assessment. Could you imagine the size and complexity of a project or exam that actually assessed all a student had learned in a year? It's important to remember that the best any project or exam can do is provide a sample of the student's learning at that particular time.

This alternative to multiple choice exams provides my students and me with a far more authentic sample of their learning. Next year, I plan on going even further and have my students actually perform a series of experiments or projects to exhibit their real learning. How will I assess that? I will work with my students while always observing or listening.

Remember, friends don't let friends grade, but they do let them learn.

Examples:

Rachelle Science

Friday, May 21, 2010

Alternative to traditional multiple choice reading comprehension exams

Here's what I do rather than a traditional multiple choice reading comprehension exam.


I teach my students that reading is thinking and if you're not thinking about what you are reading, then you are not really reading. Every year, I ask students if they've ever caught themselves reading only to realize they were not actually paying attention to what they were reading - leaving them with absolutely no idea what they just read. I use this to show that simply saying the words is not reading.

Mindfulness is required for real reading to occur.


I use the diagram above to explain that reading is brought on by thinking, and thinking precipitates questioning. Questioning brings on an acute need for more reading, which causes more questioning, requiring more thinking. Rinse and repeat.

I provide reading excerpts with zero questions; instead, I ask my students to read and show their thinking. I double space the text and remove the left-hand margin almost entirely so that I can double or triple the right-hand margin. This provides students with the space they need to show their thinking.

Here is an example of a student reading about the skeletal system:


Show Your Thinking - Skeletal system

Throughout the year, I teach students to show their thinking in a number of different ways:

-ask questions
-make metaphors and other figures of speech
-make connections to other subjects, past experiences, books, movies, etc.
-summarize
-identify difficult words & guess at their meaning (context clues, root words, etc.)
-tell stories
-give opinions
-show emotions and feelings
-limited highlighting
-arrows to margin

This is not an exhaustive list, but you get the idea.

I encourage them to write with abbreviations and symbols, accompanied by a key or legend. I encourage them to use colour-coded highlighting systems if they are interested.

Here is an example of a student's final reading project:


Show Your Thinking - story excerpts

To assess this, I observe my students very carefully while they are reading, thinking and showing their thinking. I stop and talk with them.

By the end of the year, my students have done this enough for me to already know where their reading skills are; therefore, there is no need to "grade" this project. The purpose of this assessment is NOT even to assess the students - that's what the previous 10 months have been for. Rather, I observe and review these projects in order to guide my professional development and teaching practices.

I am currently working on having kids do this kind of project on the computer, using video and screencasting.

If you have any questions, feel free to e-mail me: joe.bower.teacher@gmail.com or leave a comment below.

WARNING

Any assessment, even the best assessment, can be overdone. Please use this kind of assessment sparingly. While this kind of reading strategy might be appropriate when reading non-fiction, it can sap the thrill of reading fiction. This isn't to say you can't use it with fiction, but use it sparingly. In my attempts to fine-tune this assessment, I have, at times, overused it to the point of starting to turn my students off reading - I will never make that mistake again.

Showing your thinking with this kind of detail can take an immense amount of time, effort and patience. Gathering and sharing for assessment purposes must never trump our primary objective of maintaining a healthy desire within the child to go on learning.

We must concern ourselves less with making kids know things and more with inspiring them to want to know.