Twice a year, my school district administers a writing assessment for all students in grades 2-8. The students have a piece of text to read and a writing prompt based on that text. They then have approximately 45 minutes to write an essay. In grades 5-8, the essay is expected to be argumentative in nature and to incorporate evidence from the text as support for the arguments.
It's not a perfect assessment. Sometimes students do not have enough schema, even with the text provided, to really formulate arguments about the text. Sometimes they don't care much about the topic and therefore do not write much. But honestly, it's as close to the timed writing of other assessments as we can get. I do know the writing on the assessment is much better since we started giving students a text to read.
In my building, all of the students' essays are turned in to one of our administrative assistants who pulls off the cover page with the identification information and sorts them into piles to be scored. Then, on a half-day PD day, the entire staff gathers and scores. Every unidentified essay is read and scored by two people using a rubric based on the old ISAT writing test. In order to prepare for this scoring and to make sure we are on the same page, we have some inter-rater reliability training where the consultant who works with our district shares some trends and we score some papers together in our grade level teams. This helps us to all be on the same page in terms of what kinds of papers get what kind of score.
And how does the consultant identify the trends?
She and I score close to 150 of the 600 papers. Together. And amazingly, we are very much in sync on the scores, even after several months of not scoring. Eventually we can score separately and cross-check on random papers or on those that are giving us fits.
So that's what I did yesterday and today. I read and read and read and read and thought and thought and thought and thought. Then we sorted and talked and sorted some more until we found the anchor papers for tomorrow's trainings.
This is a part of my job as literacy coach that I don't really enjoy. When I was in the classroom, I never read 150 papers over the course of about 9 hours. I couldn't have maintained the pace. The only reason I can do it now is that there is someone else there with me to double-check the papers I'm unsure about and to keep referring me back to the rubric. I also don't know who wrote the papers, so I don't have any baggage to attach to them. I can be completely impartial.
What I do enjoy is seeing the trends that show instruction in writing is paying off. I can see that the teachers in my building are working with students on incorporating text evidence into writing. I can see that a great deal of time has been spent on writing effective introductions and conclusions that move beyond cookie-cutter "snappy starts." I can see we still have a way to go in weaving direct quotes into the body paragraphs and appropriately citing those quotes. I can see we have to discuss the downfalls of being hyperbolic and how that exaggeration weakens an argument.
Do I love this assessment? Nope. Does it measure all aspects of good writing? Nope. We're looking at a piece of rough draft writing produced under stressful conditions. Should this be the only data point we use to evaluate our students' writing and determine who might need remediation or extension? Absolutely not. But it is a snapshot that gives us information we can use to plan instruction in our grade-level teams and to start conversations among the staff about how to help our students become stronger, better writers.
I think writing calibration is so important, especially on district level assessments. You are lucky to work in a system that supports such attention to the details of true calibration! I agree that an on demand writing prompt is just ONE measure of what students can do. Thank you for sharing.