Revolution Lullabye

April 29, 2009

Williamson, The Worship of Efficiency

Williamson, Michael. “The Worship of Efficiency: Untangling Theoretical and Practical Considerations in Writing Assessment.” Assessing Writing 1 (1994): 147-174. In Assessing Writing. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 57-80.

Williamson argues that educators must adopt a different educational model – that of the craft workshop – in order to create an assessment theory and practice that breaks the hold of the god-terms of efficiency, fairness, and reliability. Williamson traces how the concept of efficiency drove assessment and educational practices during much of the 20th century, resulting in invalid assessments based on a single data point – assessments grounded in standardized tests that allowed for the development and dominance of factory and bureaucratic educational models. He points to other assessment practices, like those in France, that rely on interviews and non-standardized assessments given by the teacher, who knows the curriculum and students best. If teachers are to be treated as the professionals that they are, Williamson argues, they should be given the right and the responsibility to develop and give assessments to their students.

Quotable Quotes

“we will need to begin to trust teachers” (78).

“the privilege of true professionalism” (79).

“For the most part, students are assessed, labeled, and placed in school curricula on the basis of their scores on succeeding standardized tests…these tests remain one of the single most important indicators of a child’s future” (67).

“efficiency has governed both the theoretical and practical developments in assessment” (69).

Notable Notes

development of psychometrics to allow for an objective, outside scorer – this is reversed in the craft workshop model with teacher in charge

child-centered assessment v. system-centered assessment

libertarian assessment

history of shift from oral exams to written exams to multiple-choice testing (Arthur Otis)

efficiency is a key American cultural and social force

craft workshop model (Shedd and Bacharach; Schon’s reflective practitioner)

assessment as a contextual, dynamic, continuous, reflective process

assessments with multiple data points converging = valid

Royer and Gilles, Directed Self-Placement

Royer, Daniel J. and Roger Gilles. “Directed Self-Placement: An Attitude of Orientation.” CCC 50 (1998): 54-70. In Assessing Writing. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 233-248.

Directed self-placement is an assessment practice that shifts the responsibility of placing students in the right first-year composition section from the teachers/WPA/administration to the students themselves. Royer and Gilles describe how they developed the idea and explain its benefits: cost-effectiveness, efficiency, a decrease in complaints by students and teachers, positive attitudes in basic writing and first-year courses, and, most importantly, a sense of “rightness,” telling and showing students that they can be entrusted, with guidance, to make decisions about their own education. They argue that directed self-placement is as valid and reliable as (or more so than) placing students into sections based on their standardized test scores or the score on a timed essay. Directed self-placement is grounded in pragmatic (Dewey) educational philosophy and looks inward, to the needs of students, giving them power and control and starting a culture of communication from the first day on campus.

Quotable Quotes

“Our placement program thus relies on honest student inquiry and interactive participation” (246).

“Normally, the placement universe revolves around teachers; we choose the methods, we score the essays, we tell students what courses to take. Now we began to envision students at the center” (239).

Notable Notes

In the first few years that their writing program implemented directed self-placement (explained and conducted at freshman orientation), 22% of incoming freshmen placed themselves in basic writing.

simplicity and elegance, honesty about directed self-placement

narrative at beginning about how students are introduced and guided through directed self-placement at orientation

placement tests should be future-directed, about a student’s education, not focused on what teachers might learn about students from one decontextualized sit-down writing prompt

April 28, 2009

Yancey, Looking Back as We Look Forward

Yancey, Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing Assessment.” CCC 50 (1999): 483-503. Reprinted in Assessing Writing: A Critical Sourcebook. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 131-149.

Yancey divides the history of writing assessment into three “waves.” The first wave (1950-1970) focused on objective, non-essay testing that prioritized efficiency and reliability. The second wave (1970-1986) moved towards holistic scoring of essays, based on rubrics and scoring guides first developed through ETS and AP. The third wave (1986-present) expanded assessment to include portfolios (consisting of multiple examples of student writing) and larger, programmatic assessments. She looks at these waves from several perspectives: at how the concepts of reliability and validity are negotiated and valued; at the struggle between the knowledge of the assessment expert (and psychometrics) and the contextual, local knowledge of the non-expert teacher (and hermeneutics); and at the move of assessment from outside and before the classroom to within and after the class. She voices concerns and directions for further scholarship and practice in writing assessment, challenging the field to look for ways to use assessment rhetorically and ethically to help students and programs develop and to produce scholarly knowledge.

Quotable Quotes

“It is the self we want to teach, that we hope will learn, but that we are often loathe to evaluate. What is the role of the person/al in any writing assessment?” (132).

Notable Notes

The role of the classroom teacher moving into writing assessment: in the 1st wave, testing specialists evaluated, but through the 2nd and 3rd waves, the roles of teacher and evaluator overlapped into the new discipline of writing assessment.

Questions: Who is authorized to speak about assessment? What is the purpose of writing assessment in education? Who should hold the power?

the use of portfolios shifts the purpose and goals of the assessment: using pass/fail instead of scoring and communal grading moves more towards assessing a program and establishing community values than individual student assessment. Use different stakeholders to read?

waves fall into each other, aren’t strict lines that categorize what’s happening.

Moss, Can There Be Validity without Reliability?

Moss, Pamela. “Can There Be Validity without Reliability?” Educational Researcher 23.4 (1994): 5-12. In Assessing Writing. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 81-96.

Moss challenges the primacy of reliability in assessment practices, arguing for the value of contextual, hermeneutic alternative assessments that can more accurately reflect the complex nature of writing tasks, knowledge, and performances. She describes the difference between hermeneutic and psychometric evaluation, the latter of which uses outside scorers or readers who do not know the context of the task, the curriculum, or the student, as teachers would. Pointing out that many high-stakes assessments are not standardized or generalizable (like tenure review or the granting of graduate degrees), she argues that the warrant that writing assessment scholars use in the argument of generalizability – the warrant of standardization – needs to be re-evaluated and rearticulated from a hermeneutic perspective. By making reliability (meaning standardization, I think) an option rather than a requirement, assessment practices can be opened up that reflect more of a range of educational goals.

Quotable Quotes

Hermeneutic: “an ethic of disciplined, collaborative inquiry that encourages challenges and revisions to initial interpretations; and the transparency of the trail of evidence leading to the interpretations, which allows users to evaluate the conclusions for themselves” (87).

“There are certain intellectual activities that standardized assessments can neither document nor promote” (84).

“potential of a hermeneutic approach to drawing and warranting interpretations of human products or performances” (85).

Notable Notes

some hermeneutic assessment practices: allowing students to choose the products they feel best represent them (not just the same tasks for all) – fair, ethical, and places agency in the student; also critical discussion and debate during assessment – disagreement does not equate to invalidity; the importance of a dialogic perspective of a community (what Broad and Huot draw on)

detached, impartial scorers silence the teachers, those who know students and curriculum best

look @ public education accountability movement

White, Holisticism

White, Edward M. “Holisticism.” CCC 35 (1985): 400-409. In Assessing Writing. Eds. Huot and O’Neill. Boston: Bedford/St. Martin’s, 2009. 19-28.

White contends that assessment must be grounded in humanism, seeing writing and reading as whole-body and whole-mind processes, and in his essay he explains the argument, method, and limitations of holistic assessment of student writing. The first investigations into a cost-effective, efficient holistic scoring process began at ETS in the 1970s, which transformed “general impression” scoring into holistic scoring by introducing constraints that made these assessments reliable. These constraints included things like controlled essay reading, using rubrics and scoring guides in tandem with anchor papers, and multiple independent scoring. Holistic scoring tries to assess the complicated task of writing more completely than objective tests do, but still it only succeeds in ranking students (not giving any other educationally relevant information) and is not entirely reliable due to the shifting nature of the writing prompts and the variability among scorers.

Quotable Quotes

“Writing must be seen as a whole, and that the evaluating of writing cannot be split into a sequence of objective activities” (28). – no analytic scoring
