Revolution Lullabye

April 28, 2009

Yancey, Looking Back as We Look Forward

Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment.” CCC 50 (1999): 483-503. Reprinted in Assessing Writing: A Critical Sourcebook. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 131-149.

Yancey divides the history of writing assessment into three “waves.” The first wave (1950-1970) focused on objective, non-essay testing that prioritized efficiency and reliability. The second wave (1970-1986) moved towards holistic scoring of essays, based on rubrics and scoring guides first developed through ETS and AP. The third wave (1986-present) expanded assessment to include portfolios (consisting of multiple examples of student writing) and larger, programmatic assessments. She looks at these waves from several perspectives: how the concepts of reliability and validity are negotiated and valued; the struggle between the knowledge of the assessment expert (and psychometrics) and the contextual, local knowledge of the non-expert teacher (and hermeneutics); and the move of assessment from outside and before the classroom to within and after it. She voices concerns and directions for further scholarship and practice in writing assessment, challenging the field to look for ways to use assessment rhetorically and ethically to help students and programs develop and to produce scholarly knowledge.

Quotable Quotes

“It is the self we want to teach, that we hope will learn, but that we are often loathe to evaluate. What is the role of the person/al in any writing assessment?” (132).

Notable Notes

The role of the classroom teacher moving into writing assessment: in the 1st wave, testing specialists evaluated, but through the 2nd and 3rd waves, the roles of teacher and evaluator overlapped into the new discipline of writing assessment.

Questions: Who is authorized to speak about assessment? What is the purpose of writing assessment in education? Who should hold the power?

the use of portfolios shifts the purpose and goals of assessment: using pass/fail instead of scoring, along with communal grading, moves more towards assessing a program and establishing community values than towards individual student assessment. Use different stakeholders to read?

waves fall into each other, aren’t strict lines that categorize what’s happening.

Moss, Can There Be Validity without Reliability?

Moss, Pamela. “Can There Be Validity without Reliability?” Educational Researcher 23.4 (1994): 5-12. Reprinted in Assessing Writing: A Critical Sourcebook. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 81-96.

Moss challenges the primacy of reliability in assessment practices, arguing for the value of contextual, hermeneutic alternative assessments that can more accurately reflect the complex nature of writing tasks, knowledge, and performances. She describes the difference between hermeneutic and psychometric evaluation, the latter of which uses outside scorers or readers who do not know the context of the task, the curriculum, or the student, as teachers would. Pointing out that many high-stakes assessments are not standardized or generalizable (like tenure review or the granting of graduate degrees), she argues that the warrant writing assessment scholars use in arguments for generalizability, the warrant of standardization, needs to be re-evaluated and rearticulated from a hermeneutic perspective. By making reliability (meaning standardization, I think) an option rather than a requirement, assessment practices can be opened up to reflect more of a range of educational goals.

Quotable Quotes

Hermeneutic: “an ethic of disciplined, collaborative inquiry that encourages challenges and revisions to initial interpretations; and the transparency of the trail of evidence leading to the interpretations, which allows users to evaluate the conclusions for themselves” (87).

“There are certain intellectual activities that standardized assessments can neither document nor promote” (84).

“potential of a hermeneutic approach to drawing and warranting interpretations of human products or performances” (85).

Notable Notes

some hermeneutic assessment practices: allowing students to choose the products they feel best represent them (not just the same tasks for all), which is fair, ethical, and places agency in the student; also critical discussion and debate during assessment, where disagreement does not equate to invalidity; the importance of a dialogic perspective of a community (what Broad and Huot draw on)

detached, impartial scorers silence the teachers, those who know students and curriculum best

look @ public education accountability movement

White, Holisticism

White, Edward M. “Holisticism.” CCC 35 (1984): 400-409. Reprinted in Assessing Writing: A Critical Sourcebook. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 19-28.

White contends that assessment must be grounded in humanism, seeing writing and reading as whole-body and whole-mind processes, and in his essay he explains the argument, method, and limitations of holistic assessment of student writing. The first investigations into a cost-effective, efficient holistic scoring process began at ETS in the 1970s; they transformed “general impression” scoring into holistic scoring by introducing constraints that made these assessments reliable. These constraints included things like controlled essay readings, rubrics and scoring guides used in tandem with anchor papers, and multiple independent scorings. Holistic scoring tries to assess the complicated task of writing more completely than objective tests can, but it still only succeeds in ranking students (not giving any other educationally relevant information) and is not entirely reliable due to the shifting nature of the writing prompts and the variability among scorers.

Quotable Quotes

“Writing must be seen as a whole, and that the evaluating of writing cannot be split into a sequence of objective activities” (28). – no analytic scoring