Revolution Lullabye

May 11, 2015

Rhodes, When Is Writing Also Reading

Rhodes, Lynne A. "When Is Writing Also Reading?" Across the Disciplines 10.4 (11 December 2013). Web. 11 May 2015.

Rhodes, the Writing Assessment Director at University of South Carolina Aiken, argues for more explicit reading instruction across the disciplines. She describes how pre- and post-course reading diagnostic assessments in the first-year writing program at her university helped raise awareness of students' poor reading skills, which she argues affect their ability to write researched arguments. Rhodes maintains that teaching students how to read research is the responsibility of all at the university, and she suggests looking toward strategies developed by K-12 teachers to help teach students how to read. She explains that her university's decision to assess reading has helped her writing faculty develop a language to talk about and describe what they mean by "good reading."

Notable Notes

The appendix contains a helpful rubric for the pre- and post-reading assessments, looking at students' reading skills in terms of comprehension, analysis, and interpretation on a scale of 1 to 5.

Rhodes draws on Randy Bass (1998), who advocates for doing "diagnostic probing" at the beginning of the semester. Where are our students? Do they understand the purposes of reading (Horning)?

Students especially need help reading academic journals, and they need to be told why they are reading something – for content, for a model, to critique, etc. (this makes connections with Horning 2007).

Quotable Quotes

“Post-secondary instructors rarely understand how unfamiliar student readers are with any kind of text beyond short, simple expository and creative works.”

“Our colleagues in K-12 have long understood the syntactical differences that make texts more or less accessible to readers, but most college instructors do not have the flexibility that primary and secondary grade-level teachers have when accommodating readers with weaker skills.”

“It is time to ask what faculty can and should learn about teaching students how to read complex texts by examining practices and assumptions. In our reading and writing classrooms, we should explain explicitly why and how we want students to address the texts we assign.”

Rhodes found that “over half of our students demonstrate perennial difficulties with researched writing tied specifically to their poor reading skills. Students who read poorly when they enter FYC currently do not improve significantly as readers and writers and continue to struggle in their major programs.”

“We simply must not give up on making assignments that challenge students to struggle and engage with texts.”

“We don’t often define expectations for ‘good reading.’”

“Reading processes are recursive, requiring dialogue and feedback, along with revisions of perceptions and readjustments. Just as instructors expect that student writers will need time and consultations to rewrite their papers, instructors should also understand that student readers will need supportive class discussions and time to reflect on reading selections.”

“Teachers across the disciplines will have to engage in dialogue with students and with faculty in other disciplines to make our expectations more obvious and clear to students when they work with texts, to read and write across the disciplines, as well as to explore our own practices as academic readers.”

“We must explicitly share our expectations with students about performances that we identify as good reading in our classrooms.”

“Assessment of student reading should be a common concern across a university’s campus, not a singular skill to be housed in an English department or a First Year Writing program.”

November 17, 2014

Odom, Not Just for Writing Anymore: What WAC Can Teach Us about Reading to Learn

Odom, Mary Lou. “Not Just for Writing Anymore: What WAC Can Teach Us about Reading to Learn.” Across the Disciplines 10.4 (11 December 2013). Web.

Odom argues that in order to improve students' reading skills, faculty should adopt some of the pedagogical practices that have worked in writing-across-the-curriculum initiatives. Odom bases her argument on a three-year study of her institution's WAC program. She looks at student course feedback and reflections from the WAC faculty (called WAC fellows) to describe pedagogical strategies that did and did not work to improve students' reading skills. She shows that merely asking students to read does not mean they will read well or learn what the faculty want them to learn from the reading.

Among the pedagogical strategies that worked to improve students' reading were explaining to students the disciplinary conventions of a discipline-specific reading, asking students to engage with a reading on a personal level, and asking students to make connections between their assigned reading and other readings or current events. Odom points out that all these strategies are also principles of effective WAC teaching. Among the strategies that did not work was using writing in the classroom or on electronic discussion boards merely to check that students had done the reading. Faculty complained that students in these forums rarely engaged with the texts beyond a cursory level.

Odom argues that problems in student writing can often be traced to students’ poor reading skills, and points out that reading is rarely taught beyond the elementary level: faculty assume students have the reading skills necessary to succeed in college. Reading in the disciplines is as invisible as writing in the disciplines once was, Odom contends, and she suggests that taking a WAC approach might solve this problem and better equip students with the critical reading skills they need to succeed in college and fully participate in contemporary civic life. In order for this to work, faculty need to be willing to reconsider how they ask students to read and what they ask students to do with the reading that they do.

Quotable Quotes

“It has been my experience that when we talk about student literacy struggles and practices in higher education, writing is talked about more frequently, more specifically, and with greater urgency than reading.”

“Reading instruction can be, particularly for faculty who want to move on and teach other content, unintentionally yet easily ignored.”

“Few and far between are the classes that do not incorporate or depend on reading, although reading skills cease to be taught or assessed.”

“Reading has in many ways become an invisible component of academic literacy” – it is not seen as the problem by faculty or students.

“Indeed perhaps the best reason efforts to rethink student reading should look to writing across the curriculum strategies is the WAC movement’s broad goal of improving not just student writing but student learning.”

“In sum, the issue of student reading is more than just complex; it is characterized by a transparency that renders it too easily and too often overlooked. Explicit reading instruction tapers off precipitously after elementary school, and students, teachers, and testing then tend to focus on the texts being read rather than the strategies used to read them. Just as texts alone do not provide meaning in isolation, the act of assigning texts alone does not guarantee that students will read. It is no surprise, therefore, that faculty dissatisfaction with student reading is vocal and widespread across the disciplines. When looking for ways to address this challenge, WAC, already proven to be a transformative force for teachers when it comes to writing, is a natural place to turn. Just as writing across the curriculum encourages faculty to consider the ways they ask students to write, efforts at improving student reading must begin with a conscious awareness that we ask and expect students to read in particular ways that may not always be familiar to them.”

“Our choices as teachers have very real consequences regarding how or if students read.”

How faculty can encourage better student reading across the disciplines: “First and foremost, faculty must see that they have a role – beyond simply assigning texts – to play in student reading behavior. Second, at the heart of this role must be a clear sense of the goals faculty have for student reading as well as a willingness to share those goals with students. Third, faculty must be willing to provide guidance for students reading complex, discipline-specific texts. Such guidance may come in the form of explicit conversation about disciplinary conventions and practices, but more often than not it can be conveyed in thoughtful, authentic assignments that students can connect to on an either a personal or ‘real world’ level. Adherence to these principles will not solve all the challenges of student reading; they can, however, begin conversations and initiate practices about reading that are long overdue.”

Notable Notes

research to look at: Newkirk (2013); Joliffe and Harl (2008); Horning (2007)

When faculty point to a problem in student writing, do they realize that this may be, at its core, a reading problem that is contributing to the lack of student learning?

Reading is an “assumed ability” as writing was in the 1960s and 1970s before composition studies challenged that paradigm (Mina Shaughnessy et al) – writing was shown to be far more complex than what students or faculty assumed.

Research shows that there is a big discrepancy between what faculty assume students are doing as they read and what students are actually doing.

faculty have “a rather uncomplicated view of how writing and reading might work together,” such as the belief that merely asking students to write about the readings they read will result in critical engagement with those texts.

problem with assigning writing merely to assess or check that students have completed a reading (“quiz/coercion approach”), “reading compliance”

August 27, 2014

Newton, Value-added Modeling of Teacher Effectiveness

Newton, Xiaoxia A., et al. "Value-added Modeling of Teacher Effectiveness: Exploration of Stability across Models and Contexts." Education Policy Analysis Archives 18.23 (2010). Print.

Newton et al. investigate measures of teacher effectiveness based on VAM (value-added modeling) to show that these measures, based in large part on measured student learning gains, are not stable and can vary significantly across years, classes, and contexts. The study focused on 250 mathematics and ELA teachers and the approximately 3,500 students they taught at six high schools in the San Francisco Bay Area. The researchers argue that measures of teacher effectiveness based solely on student performance scores (measures that don't take into account student demographics and other differences) cannot be relied on for a true understanding of a teacher's effectiveness because so many other unstable variables impact those student test scores. Models of teacher evaluation that rely heavily on student performance scores can negatively impact teachers who teach in high-need areas, especially teachers who teach disadvantaged students or students with limited English proficiency.

Quotable Quotes

“Growing interest in tying student learning to educational accountability has stimulated unprecedented efforts to use high-stakes tests in the evaluation of individual teachers and schools. In the current policy climate, pupil learning is increasingly conceptualized as standardized test score gains, and methods to assess teacher effectiveness are increasingly grounded in what is broadly called value-added analysis. The inferences about individual teacher effects many policymakers would like to draw from such value-added analyses rely on very strong and often untestable statistical assumptions about the roles of schools, multiple teachers, student aptitudes and efforts, homes and families in producing measured student learning gains. These inferences also depend on sometimes problematic conceptualizations of learning embodied in assessments used to evaluate gains. Despite the statistical and measurement challenges, value-added models for estimating teacher effects have gained increasing attention among policy makers due to their conceptual and methodological appeal” (3).

Differences in teacher effectiveness in different classes: “An implicit assumption in the value-added literature is that measured teacher effects are stable across courses and time. Previous studies have found that this assumption is not generally met for estimates across different years. There has been less attention to the question of teacher effects across courses. One might expect that teacher effects could vary across courses for any number of reasons. For instance, a mathematics teacher might be better at teaching algebra than geometry, or an English teacher might be better at teaching literature than composition. Teachers may also be differentially adept at teaching new English learners, for example, or 2nd graders rather than 5th graders. It is also possible that, since tracking practices are common, especially at the secondary level, different classes might imply different student compositions, which can impact a teacher’s value-added rankings, as we saw in the previous section.” (12)

“the analyses suggested that teachers’ rankings were higher for courses with “high-track” students than for untracked classes” (13).

“These examples and our general findings highlight the challenge inherent in developing a value-added model that adequately captures teacher effectiveness, when teacher effectiveness itself is a variable with high levels of instability across contexts (i.e., types of courses, types of students, and year) as well as statistical models that make different assumptions about what exogenous influences should be controlled. Further, the contexts associated with instability are themselves highly relevant to the notion of teacher effectiveness” (16).

“The default assumption in the value-added literature is that teacher effects are a fixed construct that is independent of the context of teaching (e.g., types of courses, student demographic compositions in a class, and so on) and stable across time. Our empirical exploration of teacher effectiveness rankings across different courses and years suggested that this assumption is not consistent with reality. In particular, the fact that an individual student’s learning gain is heavily dependent upon who else is in his or her class, apart from the teacher, raises questions about our ability to isolate a teacher’s effect on an individual student’s learning, no matter how sophisticated the statistical model might be” (18).

“Our correlations indicate that even in the most complex models, a substantial portion of the variation in teacher rankings is attributable to selected student characteristics, which is troubling given the momentum gathering around VAM as a policy proposal. Even more troubling is the possibility that policies that rely primarily on student test score gains to evaluate teachers – especially when student characteristics are not taken into account at all (as in some widely used models) — could create disincentives for teachers to want to work with those students with the greatest needs” (18).

“Our conclusion is NOT that teachers do not matter. Rather, our findings suggest that we simply cannot measure precisely how much individual teachers contribute to student learning, given the other factors involved in the learning process, the current limitations of tests and methods, and the current state of our educational system” (20). 

Notable Notes

The problem of variables impacting the calculation of teacher effectiveness: the students’ background (socioeconomic, cultural, disability, language diversity), the effects of the school environment, how teachers perform year-to-year, the curriculum

VAM makes assumptions that schools, teachers, students, parents, curriculum, class sizes, school resources, and communities are similar.

The variables the researchers collected and measured included CST math or ELA scaled test scores; students' prior test scores for both average and accelerated students; students' race/ethnicity, gender, and ELL status; students' parents' educational level and participation in free or reduced-price school lunch; and individual school differences. The study takes a longitudinal view by accounting for students' prior achievement (7), and the researchers were able to link students to teachers (8).

Darling-Hammond, Creating a Comprehensive System for Evaluating and Supporting Effective Teaching

Darling-Hammond, Linda. Creating a Comprehensive System for Evaluating and Supporting Effective Teaching. Stanford, CA: Stanford Center for Opportunity Policy in Education, 2012. Print.

This report argues for the development of an aligned, comprehensive K-12 teacher evaluation system that supports students, teachers, curriculum, schools, and communities by being an integral part of a larger teaching and learning system. The report outlines seven "best practices" for creating teacher evaluation systems. Teacher evaluation systems, the report argues, should serve teachers at all stages of their careers and be used for critical decisions at the licensing, hiring, and granting tenure/merit stages. Teacher evaluation systems need to be directly connected to ongoing teacher professional development and encourage collaboration among teachers, not competition. The report makes a distinction between "teacher quality" and "teaching quality," arguing that helping teachers improve their teaching practices across different kinds of students, contexts, and curricula will result in better teaching and better student learning. The report includes examples of district and state evaluation systems and procedures that it believes serve as models and starting points for creating a comprehensive teacher evaluation system.

 

Quotable Quotes

“Today, much attention is focused on identifying and removing poor teachers. But what we really need is a conception of teacher evaluation as part of a teaching and learning system that supports continuous improvement, both for individual teachers and for the profession as a whole. Such a system should enhance teacher learning and skill, while at the same time ensuring that teachers who are retained and tenured can effectively support student learning throughout their careers” (1-2)

The problem: “Virtually everyone agrees that teacher evaluation in the United States needs an overhaul. Existing systems rarely help teachers improve or clearly distinguish those who are succeeding from those who are struggling. The tools that are used do not always represent the important features of good teaching. Criteria and methods for evaluating teachers vary substantially across districts and at key career milestones—when teachers complete pre-service teacher education, become initially licensed, are considered for tenure, and receive a professional license.

A comprehensive system should address these purposes in a coherent way and provide support for supervision and professional learning, identify teachers who need additional assistance and—in some cases—a change of career, and recognize expert teachers who can contribute to the learning of their peers.” (i)

Distinction between teacher quality and teaching quality: “Teacher quality might be thought of as the bundle of personal traits, skills, and understandings an individual brings to teaching, including dispositions to behave in certain ways. Teaching quality refers to strong instruction that enables a wide range of students to learn. Teaching quality is in part a function of teacher quality— teachers’ knowledge, skills, and dispositions—but it is also strongly influenced by the context of instruction: the curriculum and assessment system; the “fit” between teachers’ qualifications and what they are asked to teach; and teaching conditions, such as time, class size, facilities, and materials. If teaching is to be effective, policymakers must address the teaching and learning environment as well as the capacity of individual teachers” (i).

Five elements to this teacher evaluation system, as part of a larger teaching and learning system:

  1. “Common statewide standards for teaching that are related to meaningful student learning and are shared across the profession.” These should help direct the preparation of teachers and ongoing professional development (i)
  2. “Performance assessments, based on statewide standards, guiding state function such as teacher preparation, licensure, and advanced certification” – there should be multiple assessments for different points in the profession (initial, mid, advanced) that look at how well teachers can “plan, teach, and assess learning” (ii)
  3. “Local evaluation systems aligned to the same standards, which asses on-the-job teaching based on multiple measures of teaching practice and student learning.” – things like observations, teaching artifacts like lessons plans/assignments, “evidence” of how teachers contribute to their colleagues’ work and student learning (ii) (example on page 11)
  4. “Support structures to ensure trained evaluators, mentoring for teachers who need additional assistance, and fair decisions about personnel actions” – including access to master teacher mentors, fair “governance structures,” and continued resources to maintain the system (ii)
  5. “Aligned professional learning opportunities that support the improvement of teachers and teaching quality” – all kinds of professional development (formal, embedded) that “trigger continuous goal-setting” and “opportunities to share expertise” (ii)

 “To transform systems, incentives should be structured to promote collaboration and knowledge sharing, rather than competition, across organizations” (ii)

“Criteria for an Effective Teacher Evaluation System

“In conclusion, research on successful approaches to teacher evaluation suggests that:

  1. “Teacher evaluation should be based on professional teaching standards and should be sophisticated enough to assess teaching quality across the continuum of development from novice to expert teacher.
  2. “Evaluations should include multi-faceted evidence of teacher practice, student learning, and professional contributions that are considered in an integrated fashion, in relation to one another and to the teaching context. Any assessments used to make judgments about students’ progress should be appropriate for the specific curriculum and students the teacher teaches.
  3. “Evaluators should be knowledgeable about instruction and well trained in the evaluation system, including the process of how to give productive feedback and how to support ongoing learning for teachers. As often as possible, and always at critical decision-making junctures (e.g., tenure or renew- al), the evaluation team should include experts in the specific teaching field.
  4. “Evaluation should be accompanied by useful feedback, and connected to professional development opportunities that are relevant to teachers’ goals and needs, including both formal learning opportunities and peer collaboration, observation, and coaching.
  5. “The evaluation system should value and encourage teacher collaboration, both in the standards and criteria that are used to assess teachers’ work, and in the way results are used to shape professional learning opportunities.
  6. “Expert teachers should be part of the assistance and review process for new teachers and for teachers needing extra assistance. They can provide the additional subject-specific expertise and person-power needed to ensure that intensive and effective assistance is offered and that decisions about tenure and continuation are well grounded.
  7. “Panels of teachers and administrators should oversee the evaluation process to ensure that it is thorough and of high quality, as well as fair and reliable. Such panels have been shown to facilitate more timely and well- grounded personnel decisions that avoid grievances and litigation. Teachers and school leaders should be involved in developing, implementing, and monitoring the system to ensure that it reflects good teaching well, that it operates effectively, that it is tied to useful learning opportunities for teachers, and that it produces valid results.

“Initiatives to measure and improve teaching effectiveness will have the greatest payoff if they stimulate practices known to support student learning and are embedded in systems that also develop greater teaching competence. In this way, policies that create increasingly valid measures of teaching effectiveness—and that create innovative systems for recognizing, developing and utilizing expert teachers—can ultimately help to create a more effective teaching profession” (iii-iv).

 

“Good systems must be designed so that teachers are not penalized for teaching the students who have the greatest educational needs. Rather, they should explicitly seek to provide incentives that recognize and reward teachers who work with challenging students” (24)

Notable Notes 

Need to create a system for evaluating teachers (and developing teaching) that takes into account all the stakeholders at local/state/national levels as well as the curriculum and standards.

The problem with relying on student performance scores to evaluate teaching: a teacher's scores vary considerably from class to class and year to year, are affected by and tied directly to the type of students in the classroom (student differences), and the scores themselves are flattened – it's impossible to discern what exactly impacted student learning: the teacher, the curriculum, the school environment, the home environment? (iii)

Student learning scores can be used in determining teacher effectiveness, but they can’t be the sole indicator and if used, they must be “appropriate for the curriculum and the students being taught” (iii)

Good graphic for representing the three tiers of a teacher career (and the argument to assess and evaluate teachers along these three tiers): initial, professional licensure, experienced/master teacher (7) and an example of New Mexico’s standards-based teacher evaluation system that evaluates teachers at these three tiers (8-9)

discussion of peer-based review of teachers, examples of systems using peer review (28-35)

August 25, 2014

NCTE Position Statement on Teacher Evaluation

NCTE Position Statement on Teacher Evaluation. National Council of Teachers of English. 21 April 2012. Web. 25 August 2014.

This 2012 position statement on K-12 teacher evaluation argues that teacher evaluation is an important and necessary way to improve schools, teachers, and student learning. NCTE bases this position statement on the belief that teaching is a complex process that must take into account the socioeconomic, political, cultural, and linguistic contexts of the students teachers teach and the schools and communities that they teach in. The position statement explains that the conversation surrounding teacher evaluation falls into two areas, distinguished by the end purpose of the teacher evaluation. The first purpose is "Test-Based Accountability," which NCTE defines as using standardized student test scores to rank teachers and identify (and remove) ineffective teachers solely through student test score performance. The second purpose is "Professional-Development-Based Accountability," which NCTE defines as using teacher evaluation as a way to promote ongoing teacher professional development. Ongoing professional development helps teachers improve by allowing them to continually learn more about their subject matter, pedagogical methods, and their students. The position statement warns that an overreliance on the first kind of teacher evaluation – one based on student test scores – will take the craft out of teaching, resulting in cookie-cutter approaches to student learning and a curriculum focused on testing. The position statement also outlines principles for creating fair and effective evaluations for English Language Arts teachers.

 

Quotable Quotes

Epigraph: “For more than two decades, policymakers have undertaken many and varied reforms to improve schools, ranging from new standards and tests to redesigned schools, new curricula and new governance models. One important lesson from these efforts is the repeated finding that teachers are the fulcrum determining whether any school initiative tips toward success or failure. Every aspect of school reform depends on highly skilled teachers for its success.” – Linda Darling-Hammond, 2010

“NCTE recognizes that quality assurance is an important responsibility of school leaders and accepts that a successful evaluation system must assist school leaders in making major personnel decisions such as retention, tenure, and dismissal. Still, it firmly believes that an overemphasis on accountability rooted in testing sets the bar much too low for school improvement and leads to a curriculum too heavily devoted to test preparation.”

"NCTE believes that multifaceted teacher evaluation is a significant component for student, teacher, and school improvement and advocates strongly for a system that emphasizes professional growth. English teachers must continually study their subject along with the craft of teaching in their efforts to make learning happen."

“Student test scores are unreliable indicators of teacher performance and should play a very small role in evaluation.”

 

Notable Notes

Principles for creating teacher evaluation systems for ELA teachers:

  1. “based on a comprehensive review of effective teaching behaviors”
  2. “relies on a wide range of evidence”
  3. “aligns quality assurance purposes to professional growth”
  4. “is fair and nonthreatening”

Popham, Tough Teacher Evaluation and Formative Assessment

Popham, W. James. “Tough Teacher Evaluation and Formative Assessment: Oil and Water?” Voices from the Middle 21.2 (December 2013): 10-14. Print.

Popham argues that teachers who commit to using formative assessment techniques in their classrooms will have better student performance on the new Common Core student assessments, and therefore these teachers, whose evaluations increasingly depend on student performance on high-stakes assessments, will have better evaluations. Popham explains that although the high-stakes state and federal assessments seem to value only summative assessment, students and teachers who regularly do formative assessment do better on these tests. Popham's article shows how federal policies, such as the 2009 "Race to the Top" initiative and the 2002 No Child Left Behind Act, change not only curriculum and testing but also teacher evaluation. He explains how teacher evaluation criteria vary considerably from state to state and district to district. Although there is great diversity in the measures used and their relative weight, Popham insists that student performance scores on high-stakes assessments will continue to be one of the most significant factors used to evaluate teacher performance and effectiveness.

Quotable Quotes

“In short, because students’ achievement will play such a prominent role in almost all states’ teacher-evaluation procedures, and because teachers who employ the formative-assessment process will almost always engender improved achievement in their students, this is precisely the moment when sensible teachers should learn to employ the formative-assessment process. The higher the stakes associated with a given teacher-evaluation system, the greater should be a teacher’s interest in becoming a skilled user of formative assessment. This is a classic “win-win” situation” (14).

Notable Notes

explains formative assessment – not a particular kind of assessment, but a process of using a few or occasional “checks” to determine how well students are learning and to adapt instruction based on that feedback. Describes it as a “means-ends approach” (11)

Analysis of the teacher-evaluation system: is it that simple, really?

October 15, 2013

Rose, Mastrangelo, and L’Eplattenier, Directing First-Year Writing

Rose, Shirley K, Lisa S. Mastrangelo, and Barbara L’Eplattenier. “Directing First-Year Writing: The New Limits of Authority.” College Composition and Communication 65.1 (September 2013): 43-66.

The authors repeated and expanded a study conducted by Gary A. Olson and Joseph M. Moxley in 1989 on the responsibilities, power, influence, and authority held by directors of first-year writing programs. The study is based on 312 responses to an online survey distributed through the WPA-L listserv and a direct-email list of department chairs, and respondents included WPAs, chairs of English or independent writing programs, directors of college writing programs or writing centers, and those who report to directors of first-year writing. In this article, the authors focus on two trends in their results: 1. the perceptions of the most important roles and responsibilities of the first-year composition director and 2. how administrative responsibilities differ among WPAs with tenure, WPAs without tenure but on the tenure track, and those WPAs who hold non-tenure-track administrative lines. What Rose, Mastrangelo, and L’Eplattenier note in their results is that, compared to Olson and Moxley’s 1989 study, the responsibilities that WPAs take on – hiring and training teaching staff, determining curriculum, developing assessment models, writing policy statements, and managing student/grade/personnel issues – are more often shared and negotiated among several people (most notably the chair and other members of a faculty council) depending on the particular contexts of the institution, department, and the WPA herself (especially with regard to whether or not the WPA has tenure). The authors argue that the WPA is not a powerless position (as Olson and Moxley contend); rather, through both new articulations of WPA theory through postmodern and feminist lenses as well as the growth of the discipline in the past 25 years, the WPA position has become more situated, negotiated, and nuanced.

Notable Notes

NTT WPAs (those not on the tenure track) are often given roles “related to management and supervision” like supervision and hiring of teaching staff, scheduling and staffing, establishing common syllabi, handling disputes and political problems (61-62)

not-yet-tenured WPAs are often given responsibilities that are “clearly pedagogical rather than political in focus,” probably out of a desire to protect new faculty pre-tenure and because many are fresh out of graduate school with a current understanding of comp theory and pedagogy (60).

as compared to the 1989 Olson and Moxley survey, many respondents noted curriculum and assessment as WPA responsibilities, probably due to pressures on higher education and accreditation (55)

most important responsibility of the first-year writing director (as noted by chairs in the 1989 survey, chairs in the 2012 survey, and 2012 directors of first-year writing) is communicating well (which includes staying in touch with the chair, being accessible, etc.) (53)

explains definitions of power, authority, and influence described by David V.J. Bell and used by Thomas Ambrose in his article “WPA Work at the Small College or University.” (51)

interesting power dynamic present in many of the responses: female WPA/male chair

limitations – very few (5) responses from two-year schools, which further emphasizes the invisibility of the 2-year college WPA in our scholarship (47)

WPAs as “middle management” (45).

Quotable Quotes

“Although Olson and Moxley defined power in the duties of a writing program director and concluded that composition directors were relatively powerless, respondents to our survey suggest that our understanding of the situated and strategic negotiation of WPA agency has become more nuanced, accounting for the agency of others with whom we work as well as our own” (63).

“Our discipline’s understanding of power, especially as it relates to writing program administration, and how it functions has shifted dramatically in the last quarter of a century due to feminist, Foucauldian, and post-Foucauldian theory, as well as our own maturing as a discipline. The power of writing program directors, whether they are first-year program directors or other program directors, continues to be a topic of interest to composition studies scholars because power itself is so fluid and complicated” (63).

“The WPA’s job is now recognized as collaborative and interrelational, with the WPA observing and interacting daily with constituencies who have multiple – and sometimes contradictory – agendas” (50).

“We draw from the survey results, respondents’ free-text comments, and the literature to suggest that a more useful method of thinking about WPA’s agency is to recognize that these different political instruments are always negotiated, that they are consistently and constantly changing, and that the rhetorical situation in all of its complexity always impacts a WPA’s ability to make change. A rhetorically and politically astute WPA can examine which political instrument – influence, power, or authority – would have the greatest impact, as well as the compromises and negotiations she or he is willing to make to accomplish his or her long- and short-term goals” (51-52).

“A WPA’s activities create cultural capital that determines his or her role within the institution” (45).

October 14, 2013

McLaughlin and Moore, Integrating Critical Thinking into the Assessment of College Writing

McLaughlin, Frost, and Miriam Moore. “Integrating Critical Thinking into the Assessment of College Writing.” Teaching English in the Two-Year College 40.2 (December 2012): 145-162.

McLaughlin and Moore explain their study of how to assess critical thinking in college student essays. They developed a writing rubric intended to assess student writing across the disciplines, and then asked participants at the March 2011 Symposium on Thinking and Writing at the College Level to use the rubric to evaluate two student papers (both essays were written in response to a prompt that asked the student to define a term). The results of the assessment surprised McLaughlin and Moore, as they assumed that one of the student essays was markedly stronger than the other. What they found was that the evaluators (80% of whom taught first-year writing in a variety of contexts) valued different attributes in student writing. McLaughlin and Moore argue that it is simpler to assess student writing based on attributes like “correctness” or “voice” instead of characteristics that point to critical thinking, like thoughtfulness, logical development, and consideration of alternative perspectives. They contend that the writing tasks students are given in K-12, which emphasize creative writing and the development of a strong, emotive voice, are distinctly different from the careful, reasoned academic writing (a very specific voice) that is the hallmark of “college-level writing” and which is expected in first-year composition writing tasks.

Notable Notes

based the construction of their critical thinking in writing rubric (CTWR) on other rubrics designed by other institutions (Washington State University) and Bloom’s Revised Taxonomy (147)

categories of the CTWR: Focus, Logic (both of these first two categories contain language that incorporates elements of critical thinking), Content, Style, Correctness, Research (150).

keywords that point to critical thinking in these first two rubric categories: thoughtful, interpret evidence, draws warranted conclusions, analyzes alternative perspectives, evaluates when appropriate (150).

overemphasis on the construction of voice (155) – emotional voice (pathos) can mislead a reader where there is no logical, critical thought

college-level writing is mostly expository – requires a “drier” academic voice (156).

personal narrative v. critical analysis – writing tasks students are given in high school, college

the difficulty of capturing elements of critical thinking in a rubric – rubrics simplify writing, often assess what’s easy to assess instead of what’s the most important element (146-147).

Quotable Quotes

“College-level writing, it seems, values the well-reasoned point over its dramatic rendering. Perhaps reasoning, then, is a salient feature of college-level writing. Whether it is as important in high school writing is certainly worth examining in greater detail in the future” (157).

“In conclusion, the assessment of critical thinking takes time and often complicates the act of writing assessment. Sometimes the most highly detailed and interesting student writing is not the product of complicated thinking but rather of strong feeling. Yet voice is not a substitute for thinking, though it can certainly enhance the expression of thought” (157).

“Without open-minded thinking as a basis of approaching the writing task – the thinking that prompts the writer to consider alternative approaches and possible outcomes – the writer may not achieve the level of reasoning that we expect in freshman writing. This thoughtful, fair-minded approach with its resulting careful reasoning, often expressed in a clear but neutral tone, may well be one of the distinguishing features of ‘college-level’ thinking and writing” (158).

 

October 4, 2013

Gere et al, Local Assessment: Using Genre Analysis to Validate Directed Self-Placement

Gere, Anne Ruggles, et al. “Local Assessment: Using Genre Analysis to Validate Directed Self-Placement.” College Composition and Communication 64.4 (June 2013): 605-633.

Gere et al describe the revised Directed Self-Placement (DSP) system used by the Sweetland Center for Writing at the University of Michigan, arguing that the locally-developed and administered assessment achieves validity based on a study of placement essays that uses rhetorical move analysis and corpus-based text analysis.

The study of students’ placement essays shows that there are key textual and rhetorical differences between the essays written by students who self-selected into the FYW program instead of the credit-bearing PREP preparatory program. By coding the introductory paragraphs of the placement essays, the researchers determined both what constituted a “prototypical” introduction to an academic essay that articulated an argumentative stance in response to a text and what rhetorical and linguistic strategies are used by undergraduate FYW writers (as opposed to those writers less prepared for “college-level” writing).

This study shows the benefits of using research and methodologies from linguistics in order to develop and evaluate local writing assessments. This essay also helps articulate more precisely what it means to say that undergraduate students are “good college writers” or have “rhetorical knowledge,” a goal stated in the Framework for Success in Postsecondary Writing document. In the end, this study also demonstrates what good local assessment looks like: a dynamic feedback loop that impacts instruction and a writing program’s definition of good writing.

 

Notable Notes

good argumentative writing has a “critical distance” that can be gleaned from the rhetorical and linguistic moves the student writer makes (623)

the revision of the DSP program in 2009 based on ten years of data (1998-2008). Their revision was based on three areas of research: research on writing prompts/assignments (resulted in giving students a reading and a specific prompt to create an academic argument, with explanations of what that means); research on rhetorical genre studies (influenced by Carolyn R. Miller’s ideas of genre as social action – genre not as fixed form but flexible and purposeful); and text analysis methods used by ESP/linguistics, including corpus-based text analysis.

attention to the “meso-level rhetorical actions” and the “micro-level linguistic resources” students bring to their writing (612).

three regularly occurring moves in text-based argument introductions: 1. establishing a background (not always there, so non-prototypical); 2. reviewing the article (either a Review-Summary or a Review-Evaluation); and 3. taking a stand (616). Gives examples from the student placement essays of these three rhetorical moves (617-619).

Used a software program (AntConc) to identify linguistic moves:

  1. “References to and citations from the source text
  2. Code glosses (e.g., in other words; in fact)
  3. Evidentials of deduction (e.g. therefore)
  4. Reporting verbs focused on processes of argumentation (e.g. argues, claims, asserts)
  5. Contrastive connectors (e.g. However, nevertheless) and denials (it is not...)
  6. Specific hedging devices associated with academic registers (e.g., perhaps, likely)…
  7. Self mentions (e.g. I and my), personalized stances (e.g. I agree)
  8. Boosters (e.g. clearly, certainly)”

FYW writers used more of #1-6 than PREP writers; FYW writers were less likely to use #7 and #8 (619-620)

PREP writers more likely to use “says, believes, thinks”; FYW writers more likely to use “argues, discusses, claims, asserts” (620) – reporting verbs

Sample coded FYW and PREP introduction in the appendix

tables of frequencies of certain linguistic features/moves (620-622).
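The study’s corpus-based comparison boils down to counting how often each group of writers uses particular lexical features. A minimal sketch of that kind of counting, assuming plain-text essays and using only the reporting verbs the article attributes to FYW and PREP writers (620); the actual study used the AntConc concordancing software, not this code:

```python
from collections import Counter
import re

# Reporting verbs the article associates with each group of writers (620)
FYW_VERBS = {"argues", "discusses", "claims", "asserts"}
PREP_VERBS = {"says", "believes", "thinks"}

def reporting_verb_counts(text):
    """Return (FYW-style count, PREP-style count) for one essay."""
    words = re.findall(r"[a-z']+", text.lower())
    counts = Counter(words)
    fyw = sum(counts[v] for v in FYW_VERBS)
    prep = sum(counts[v] for v in PREP_VERBS)
    return fyw, prep

essay = "The author argues that X; she claims Y, though some say she thinks Z."
print(reporting_verb_counts(essay))  # (2, 1)
```

A frequency table like the ones on pages 620-622 would then compare these per-essay counts (usually normalized per 1,000 words) across the two placement groups.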

push for genre-based pedagogies, teaching students to use genres as “guideposts” that help them solve rhetorical problems (625).

 

Quotable Quotes

“What our methods have helped us to do, however, is to tease out several linguistic features that, in this context, help to differentiate between students who are more and less at ease with projecting a novice academic stance” (623).

“By ‘meso-level rhetorical actions’ we mean the collections of communicative purposes in smaller sections of a text – larger than the sentence – that together construct the text’s overall pragmatic value as a message” (612).

“Often underconceptualized by those who create them, assignments play a significant role in students’ ability to perform well on a given writing task and therefore merit special attention in assessment” (610).

“Writing an evidence-based argument in response to a prompt like this requires not just arguing for one’s own opinion, but also identifying important propositions in the reading and then summarizing, analyzing, evaluating, and arguing for or against these propositions for using textual and other sources of evidence. Constructing such an argument also requires control of the necessary discursive resources for building an effective argumentative stance” (615).

“stance-taking” (615).

 

July 29, 2013

Mullen, Students’ Rights and the Ethics of Celebration

Mullen, Mark. “Students’ Rights and the Ethics of Celebration.” Writing Program Administration 36.2 (Spring 2013): 95-116.

Mullen questions the ethics of “student celebrations of writing,” culminating activities for many first-year writing programs which are used for a variety of purposes, including programmatic assessment and as a way to argue for the “authenticity” of first-year writing.  Mullen connects student celebrations of writing to the 1974 Students’ Right to Their Own Language statement, arguing that students’ rights are violated when their participation in student celebrations of writing is mandated and when their written assignments and course work are co-opted and used by faculty and administrators.  Mullen suggests that student celebrations of writing move from generic promotions of writing, which he describes as having a “whiff of desperation” about them, towards more pedagogically-oriented events that target specific aspects of writing (i.e. research and writing or public writing) and that engage students in the planning of the activities instead of using them as the subject of the celebrations. He also argues that CCCC and NCTE need to engage in the conversation about the ethics of student celebrations of writing and SRTOL in general.

Notable Notes

uses Eastern Michigan University’s Celebration of Student Writing as an example

also criticizes required student-writing anthologies (like the one at UMass Amherst): “I hope I’m not the only one to see something a little problematic in the repackaging of uncompensated work that students were, after all, required to produce in order to create a product that other students are required to buy” (97).

student celebrations of writing emerge from a move toward student-centered pedagogies and valuing of “public” or “authentic” writing assignments (97)

students and student writing become “exhibits” that we use as faculty and administrators to promote the teaching of writing – for our own purposes (102). Who owns the student work produced in our courses? Who benefits from it?

2 central questions: who is responsible for the work produced in our writing classroom (students or teachers)? Who gets to speak for the writing? (104)

the myth of “authentic” writing (104-105)

the desire of teachers to negate their own influence over their students (104)

Quotable Quotes

“The problem with the current emphasis on celebration – evident in the examples I have sketched above – is that in our enthusiasm to celebrate the writing (or the student, or the research…) we seem to check our critical faculties at the door. I cannot emphasize too strongly that I am not charging celebration organizers with some kind of malign agenda. It is, in fact, precisely due to celebration’s appearance as an unadulterated good – what harm could possibly be done by a celebration? – that the celebration of student writing is an ethical minefield” (97).

“Our celebratory practices deserve scrutiny not least for the fact that what we as teachers of writing seem to end up celebrating most often is actually not the student or their writing but, as I will show, our teaching and ourselves – even, paradoxically, in the act of denying the influence of our teaching” (97)

“Moreover, if we really believe that the students’ right to their own language includes the full spectrum of languages they invent, nuture, protect, hide, manipulate, fake, mangle, and abandon in our classes, then one of the most problematic areas of our practice becomes the celebration of student writing” (103).

“To what degree are our celebrations implicated in the various educational movements that insist that learning can be reduced to externalized, immediately measurable demonstrations of outcomes? In a troubling irony, our fixation on an unreflective celebration of authenticity may reinforce the same reductive, systemic, consumption-driven view of writing that so many of our celebrations are attempting to overcome” (111).
