My freshfolks finished a sample prompt in just over an hour, then waited a few days as other teachers read their work, scoring it according to the WASL rubric. Before they got their essays back, they read and scored three samples, then discussed--make that debated--the scores in small groups. We reconvened as a whole class, continued arguing about whether this or that essay was a 2 or a 3, and then, at long last, I revealed the scores provided by OSPI.
All I will say is this: students of Washington, you do not want my students assessing your essays. The WASL rubric is generic and generous in parts, open to interpretation, not terribly objective--what's the difference, quantitatively, between "consistently" and "generally" when counting up spelling errors? But my students were harsh, harsh, harsh.
When they saw the discrepancy between their score and the "real" one--sometimes as wide as a 1 against a 4*--they were shocked. It makes me happy to know they have such high standards.
The writing WASL doesn't measure excellence; it measures competence. Go ahead, quantify the difference. Defend your answer in a five-paragraph essay, and submit it to my class for assessment.
I dare you.
*Out of 6.
1 comment:
That cracks me up. Kids are always way harder on themselves when it comes to grading things than most adults are. That was one of the things I learned the first time I did peer assessments. They rarely see things in shades of gray. Either it's a great paper/presentation, or it sucks. Not a lot of in between ...