Lau, A. (2013). "Timed Writing Assessment as a Measure of Writing Ability: A Qualitative Study." Inquiries Journal/Student Pulse, 5(11). Retrieved from http://www.inquiriesjournal.com/a?id=798
Timed Writing Assessment as a Measure of Writing Ability: A Qualitative Study
By Arthur Lau
2013, Vol. 5 No. 11
Throughout
the American education system, the assessment of writing skill and general
academic performance through timed essay examinations has become increasingly
pervasive, contributing to the determination of grades and course placements
and ultimately affecting college admissions through their use in standardized
tests. In March 2005, the College Board introduced a new writing section for
the SAT that incorporates a 25-minute impromptu essay component, as well as
traditional multiple-choice questions on grammar and usage (Hass; “SAT Test
Sections”). Likewise, timed writing assessment holds a prominent position in
the ACT, which features an optional 30-minute essay section that is mandatory
for students applying to some institutions, and in the College Board’s Advanced
Placement program, whose English Literature examination requires three essays
written over a two-hour period (“The ACT Plus Writing”; “English Literature:
The Exam”). As Nancy Hass reports in the New York Times, the
introduction of timed writing in the SAT has generated substantial public
controversy, with many colleges deciding not to consider the essay scores in
the admissions process. At the same time, a number of universities have elected
to utilize the essay section results, not only for admissions, but also for the
determination of placement in composition courses, sometimes provoking
passionate opposition from their own writing faculty members (Isaacs and Molloy
518-20).
Employing
the SAT essay section as an illustration of the debate surrounding timed essay
examinations, this paper seeks to investigate the accuracy, instructional
usefulness, and social implications of the widespread use of timed writing
assessment as a measure of writing ability at the high school and collegiate
levels. To supplement a review of the published literature, this study
integrates material from interviews conducted by the author with five
experienced instructors in composition and literature programs at Stanford
University and the University of California (UC), Davis. Both in standardized
examinations and in the classroom setting, timed writing assessment can offer a
rough and imprecise, but most often fairly accurate, prediction of a student’s
performance on longer, traditional writing assignments. Nevertheless, as this
paper will attempt to demonstrate, the imposition of severe time constraints
induces an altogether different mode of writing, called an “assessment genre”
by one of the instructors, that renders questionable the comparability of the
writing skills displayed in timed and untimed contexts. Given this finding,
teachers and institutional administrators should carefully consider the
potentially objectionable social values and attitudes toward writing
communicated by the choice of timed writing as an assessment technique,
especially when used to identify excellence rather than to certify basic
competence.
In
recent decades, the accuracy and appropriateness of timed writing assessment as
a measure of writing ability have been subject to progressively rising doubts
from English instructors and scholars of composition. As Kathleen Yancey
discusses in an article on the history of writing assessment, in the period between
1950 and 1970, the evaluation of writing was frequently conducted through
“objective,” multiple-choice tests on grammar and vocabulary (485). In the
1970s, trends in standardized writing assessment gradually shifted to the
holistically scored essay test, and by the 1980s and 1990s, many universities
had again changed their evaluation methods to adopt broader, multi-element
writing portfolios for such purposes as assigning course placement (Yancey
487-94). Yancey observes that beneath these fluctuations were evolving views on
the relative importance of reliability and validity, the two central concepts
of psychometric theory. Reliability is associated with objective assessments,
while validity is associated with “direct” tests of writing skill that provide
samples of actual writing (487-94). An assessment is reliable to the extent
that it yields consistent scores across readers and test administrations, while
it is valid insofar as it successfully measures the particular characteristic,
in this case writing ability, that it purports to measure (Yancey 487; Moss 6).
Current
scholarship opposing the use of timed writing assessment continues to voice the
same concerns about the validity of essay tests originally raised by those
advocating the introduction of portfolios. For example, in her 1991 paper
encouraging the further spread of the newly developed portfolio assessment
method, Sarah Freedman argues that timed essay examinations involve “unnatural”
writing conditions, with students producing work that has no function except
for external evaluation, on topics that may not interest them (3). Similarly,
as Ann Del Principe and Janine Graziano-King contend in their 2008 article,
the testing environment created by timed writing assessment undermines the
authenticity of essay examinations because it inhibits the complex processes of
thinking and articulation that enable students to produce quality writing
(297-98). Of course, many more specific arguments can be adduced under the
general heading of authenticity in assessments. For Kathy Albertson and Mary
Marwitz, the dangers of high-stakes standardized writing examinations are
dramatically exemplified by students who seek security in uninspired, formulaic
essays and students who unknowingly imperil their prospects of reader approval
by engaging with challenging topics through more sophisticated pieces that they
lack the time to complete (146-49). Finally, in an inversion of the usual call
for more valid assessment techniques, Peter Cooper reprises the common contention
that a single writing sample on a particular topic cannot fully represent a
student’s abilities, only to present this consideration as evidence in support
of multiple-choice writing tests (25).
In
defense of timed writing assessment, noted composition theorist Edward White
invokes the historical attractions of objective tests of writing, which remain
in use in a large proportion of colleges (32). As he writes in his widely
referenced article “An Apologia for the Timed Impromptu Essay Test,” the scoring
of timed essays provides significant cost savings relative to the
labor-intensive evaluation of portfolios, allowing institutions that would
otherwise employ even less expensive multiple-choice assessments to include some form of
direct writing evaluation in reviewing student performance (43-44). He also
remarks on the utility of timed in-class writing in preventing plagiarism and
in helping students to focus on the task of composition (34-36). Importantly,
in response to concerns about a lack of opportunities for revision, White
asserts that, even though an impromptu essay test may encourage first-draft
writing, a first draft still constitutes a form of writing: thus the use of
timed writing rather than multiple-choice tests emphasizes the value of writing
(35-38). Extending this reasoning further, Marie Lederman argues that, despite
the rightful focus on revision and the writing process in the curriculum, only
the final product of that process holds any significance or communicative
potential for the reader, lending legitimacy to the product-oriented nature of
timed writing assessment (40-42).
A
second major line of thought in favor of timed essay tests, separate from the
pragmatic and conceptual arguments surveyed above, relates to their empirical
capacity to predict the future academic performance of students. College Board
researchers claim that, of all the components of the SAT, the writing section
most accurately predicts students’ grade point average in the first year of
college, demonstrating its validity as an assessment instrument (Kobrin et al.
1, 5-6). A crucial weakness in this argument, however, is the assumption that
a strong correlation between scores and later academic success can, by itself,
establish the validity of an assessment. For as Rexford Brown
insisted as early as 1978, in the course of his opposition to objective writing
tests, the fact that parental income and education might also correlate with
writing ability and predict performance in college does not mean that they
should form the basis for judging students’ aptitude (qtd. in Yancey 490-91).
Validity requires that an assessment measure what it is intended to measure,
so the question remains whether timed writing examinations truly reflect
writing ability.
In order to explore this issue in greater detail, one can
turn to the material gathered in interviews with Brenda Rinard, a member of the
University Writing Program at UC Davis, and postdoctoral fellows Roland Hsu,
Barbara Clayton, Patricia Slatin, and Jeffrey Schwegman, all teaching in
Stanford’s Introduction to the Humanities (IHUM) program. Though all
interviewees had experience with timed essay tests in their respective courses,
it is a limitation of this study that the four participating IHUM fellows,
unlike Dr. Rinard, were seeking to evaluate student examinations not explicitly
in terms of writing quality but rather in terms of content. All of these
instructors nevertheless offered valuable information while answering questions
regarding the accuracy and social implications of timed writing assessment and
their motivations for using it in their courses (see the Appendix).
The
interviewees were first asked about the accuracy of timed writing assessment as
a measure of writing ability, where the standard for writing skill is assumed
to be students’ performance in producing traditional argumentative papers. All
subjects reported that timed essay tests generally provided a fairly accurate
indication of students’ writing ability as demonstrated in regular paper
assignments, although they all mentioned some exceptions or variations in
accuracy as well. In particular, Dr. Clayton stated that students who had
previously submitted papers of lower quality would sometimes show a surprising
level of proficiency on essay tests, perhaps on account of additional
preparation for the examination. In contrast, Dr. Schwegman noted that the
students most skilled in composing extended papers would not usually produce
the highest-quality timed essays in the class. Dr. Rinard emphasized the adverse
effects of the testing environment for students with test anxiety and students
for whom English was not the first language. Interestingly, Dr. Slatin and Dr.
Rinard both affirmed, when asked, that timed writing assessment could offer
only an imprecise measurement of writing ability, one that would not
accommodate fine distinctions in skill or provide for the display of the full
range of variation in writing ability.[1] Their observations agree in
this respect with the conjecture of Leo Ruth and Sandra Murphy that “short,
timed writing tests are likely to truncate severely the range of performance
elicited,” as suggested by surveys indicating that more sophisticated writers
often consider time allocations inadequate because they devote more time to
planning their work (151-54).
At
this point, given that timed writing assessment does not seem grossly
inaccurate in evaluating broader writing skill, one might be inclined to accept
White’s contention that the use of standardized essay tests is justified by
their practical efficiency and the fact that they at least require first-draft
writing. Once again, however, this conclusion can be warranted only by a
demonstration of the validity of timed writing examinations in measuring the
same sort of writing ability that manifests itself in regular paper
assignments, not simply by a correlation between the two forms of writing. From
this standpoint, the true importance of the notion that timed writing is
first-draft writing becomes evident: it embodies the idea that timed writing is
fundamentally similar to the writing involved in extended composition. Only if
timed writing is sufficiently continuous with, and therefore comparable to,
writing without such time constraints can the validity of timed writing
assessment be maintained. Indeed, as Murphy notes, assessment specialist
Roberta Camp has argued that standardized writing tests implicitly assume that
timed, impromptu writing can be considered representative of writing in general
and that writing involves a uniform set of skills regardless of its purpose or
circumstances (Murphy 38). One of the criticisms offered by Dr. Rinard
challenges the core assumptions underlying the use of timed essays to determine
writing ability. In particular, she believes that the timed writing on
standardized examinations constitutes a distinct “assessment genre” with its
own unique rhetorical situation, implying that judgments of writing skill
obtained using timed writing may not be generalizable to writing in other
contexts.
One
must now resolve the question of whether timed writing should be regarded
as representative of all academic writing or should instead be classified as a
narrow and artificial “assessment genre.” Insight on this topic is supplied by
the other interviewees’ remarks on their motivations for employing timed essay
examinations. With a notion of timed writing as essentially continuous with
other forms of writing, one might expect that they would conceive of essay
examinations as simply compressed versions of regular papers, assigned because
they require less time to grade and offer greater protection against
plagiarism. On the contrary, the four IHUM fellows tended not to
express any of these practical motivations for using timed essay examinations.
The exceptions to this trend were Dr. Hsu, who cited the necessity of ensuring
that work submitted was a student’s own, and Dr. Schwegman, who briefly
remarked on the issue of time available for grading, but even these two
instructors spoke at length about other reasons for employing the essay test
format. Dr. Hsu, for instance, contended that timed essays were useful for encouraging
students to construct a “less developed synthesis” of the material, meaning, as
he explained, that they would not be influenced by the interchange of ideas
with the teacher or other students and would therefore need to “take ownership”
of their work in a way not facilitated by traditional papers. On the other
hand, Dr. Slatin emphasized the importance of timed essay examinations as
another mode of evaluation different from longer paper assignments,
contributing to the diversity of assessment measures and thus ensuring fairness
to all students in grading. Likewise, Dr. Schwegman found his principal
motivation in the idea of achieving fairness by employing a broad spectrum of
assessment methods, each engaging a distinct skill set and a different type of
ability. All of these perspectives on the utility of timed writing assessment
crucially presuppose a fundamental dissimilarity between timed writing and the
extended composition demanded by regular papers.
Moreover,
a majority of the interviewees indicated that the writing produced on the essay
tests that they had used generally failed by a large margin to satisfy the
standards of a decent first draft for any other assignment. This finding, in
addition to the previously developed suggestion of a divergence in the skills
and processes involved in timed writing and other forms of writing, further
challenges White’s assertion that timed writing should be regarded as
first-draft writing. Dr. Schwegman, for instance, freely admitted that the
writing submitted for final examinations was often “atrocious” in quality, and
Dr. Clayton related that she would expect students to spend far more time than
that permitted in essay examinations on the first draft of even a short paper.
For a piece comparable to one on the AP tests, where a student might receive
approximately 40 minutes per essay, Dr. Rinard estimated that a student might
require anywhere from one to three hours to produce a draft of reasonable
quality.
On
the basis of these observations, one could justifiably conclude that timed
writing of the sort used on standardized tests is not equivalent to first-draft
writing in almost any other setting. In this way, one can recognize how the
information collected in this study might begin to confirm the statement of Luna,
Solsken, and Kutz that “a standardized test represents a particular, situated
literacy practice” with its own distinctive norms and conventions (282). As Dr.
Rinard first suggested, however, if timed writing is understood as comprising
its own genre, the genre of standardized assessment, the claim that
standardized essay tests provide a valid measurement of writing skill becomes
suspect.
Indeed,
Brian Huot criticizes the reliance of traditional testing practices on a
“positivist epistemology” assuming that writing ability is a fixed trait that
can be measured independently of context (549-52). Articulating a new theory of
writing assessment, Huot argues instead that any acceptable measurement of
writing ability must be informed by a clear conception of the sociocultural
environment and academic discipline in which it is applied (559-64). As he
contends, a valid assessment instrument must be designed to generate a
rhetorical situation consonant with the purposes for which the assessment
results will be used (560), and on this criterion, large-scale timed essay
tests appear markedly deficient, precisely because they are standardized across
a vast array of institutions and disciplines.
One
might object to this line of reasoning, nevertheless, on the grounds that
standardized essay examinations offer the greatest validity among all the forms
of writing evaluation that many institutions have sufficient resources to
employ. Alternatively, one might even acknowledge a complete disanalogy
between timed writing and the type of writing required by regular papers, yet
maintain, as Dr. Schwegman proposed, that writing at speed might constitute an
independently valuable and significant form of writing in its own right. At
this juncture, however, the analysis of the validity of timed writing
assessment must confront the issue of the social values that are communicated
to students and the larger educational community by the choice of a particular
assessment technique. For as White perceptively notes, “Every assessment defines
its subject and establishes values” (37): each method of judging student
achievement necessarily contributes to the delineation of the knowledge and
capacities in which the subject of assessment consists. Furthermore, an
assessment simultaneously conveys and reinforces a society’s normative
commitment to a particular conception of what distinguishes greater and lesser
ability in the relevant subject and of how proficiency in the subject can be
gained or improved. Hence, in the words of John Eggleston, examinations may be
considered “instruments of social control,” in that “the examination
syllabus, and the student’s capacity to respond to it, becomes a major
identification of what counts as knowledge” (22). Lest this claim seem
excessively abstract as a basis for scrutinizing the legitimacy of timed
writing assessment, David Boud enumerates some concrete effects of evaluation
methods on the educational process. Research has shown, he reports, that
students concentrate on the topics that are assessed as opposed to other
aspects of a course, that the types of tasks involved in the assessment
influence their learning strategies, and that effective students watch
carefully for instructors’ indications of what material will be tested (103-4).
Once
alert to the symbolic power exerted by assessment mechanisms, one might be
troubled by some of the values and ideals that timed essay examinations seem to
be propagating in the experience of the interviewees. Dr. Hsu remarked that the
timed writing environment detracts from the significance of the “invention”
process by which students discover and refine new ideas through revision. For
Dr. Slatin, furthermore, timed essay tests entirely omit any emphasis on the
value of creativity as an element of successful writing, replacing it with an
unyielding focus on the “scientific” attitudes of analysis and criticism. As
Dr. Schwegman commented, the essay examination format signals the importance of
content knowledge at the expense of practicing skills, while Dr. Clayton
observed that essay tests frame students’ writing as a response to a
predetermined question rather than an avenue for exploring questions of their
own devising. Finally, adopting the most critical stance of any of the
instructors, Dr. Rinard explained that high-stakes timed writing examinations
underline above all else the value of speed, in stark contrast to the ideal of
thoughtful contemplation historically associated with effective writing. In her
opinion, timed writing assessments test performance instead of revealing a
student’s potential and encourage a “reductive” and formulaic mode of writing
that prevents the development of nuanced points of view in a composition.
Except in Dr. Rinard’s case, these features of timed writing assessment were
not necessarily considered negative; they were mentioned as factors supporting
the capacity of an examination to fulfill its purpose of testing knowledge.
Nevertheless, if essay tests are employed to measure writing ability in
particular, as with the SAT, the fact that timed writing rewards qualities
such as speed becomes problematic, since those qualities could mistakenly be
taken as definitive of writing skill in general, outside the testing context.
To
illustrate the manner in which the construct of writing ability peculiar to
timed writing assessment might begin to insinuate itself into broader
conceptions of writing as a practice, one can turn to the theory of orders of
simulacra, developed by the sociologist Jean Baudrillard and applied to the
field of educational testing by F. Allan Hanson. This theory, as Hanson writes,
describes three ways in which a signifier, such as the result of a test, can
represent the object that is signified, such as the underlying skill or capability
of which the test gives an indication (68). At the first order of simulacra,
the signified is conceived as prior to the signifier, which reproduces or
resembles it in some way (68), just as an archaeological artifact precedes the
copy placed in a museum, which is judged valuable insofar as it faithfully
imitates the original. At the second order, the signifier serves as the
“functional equivalent” of the signified, with Hanson’s example being the
robotic machinery that replaces human workers, the signified, in a factory
(68). In the final stage of this progression, at the third order, the signifier
is a formula or blueprint for the signified and holds priority over it, just as
DNA encodes the attributes of an organism and guides its development (68). Although
tests are often understood as simple measurements of preexisting
characteristics in the subject, operating at the first order of simulacra,
Hanson argues that they commonly act as second-order signifiers, as when a test
score substitutes for an individual’s intelligence or ability in college
admission decisions (68-71). Advancing to the level of third-order signifiers,
tests can “literally construct human traits,” he asserts, by altering the
course of a person’s educational experience and even by incentivizing students
to cultivate the cognitive characteristics favored by standardized examinations
(71-74).
Returning
to the topic of timed writing specifically, one could contend that an essay
test’s ascription of certain degrees of skill to examinees assumes the function
of a second-order signifier as students and teachers begin to conceptualize
writing ability in terms of the values that the test is perceived as
communicating. A writing examination approaches the third order of simulacra
when the widespread adoption of the system of values defining writing skill
from the perspective of the test precipitates tangible changes in the modes of
writing within a community. Indeed, evidence for this shift can be uncovered:
Dr. Clayton related that students would occasionally seem to be composing their
regular papers in the style that they were accustomed to use for examinations,
with deleterious effects on the quality of those papers. As she stated,
I do find that I think students are having more problems
with traditional writing assignments than in the past because they are relying
more upon what they’ve been taught, and I’ve had to say to students, “Do not
treat this paper assignment as though it were an exam.” So I find that ... if
anything their exams are better, but their papers worse, because I think...
they’re confusing the two things.
These
effects are aggravated if timed writing examinations are meant to provide an
exact indication of a student’s writing ability instead of merely ascertaining
basic proficiency, especially considering that essay tests offer only a rough
estimate of ability. In her book on the social history of educational
assessment, Patricia Broadfoot observes that assessments fulfill the distinct
functions of selecting candidates for excellence, on the one hand, and of
certifying the possession of essential competencies on the other (26-33).
Meanwhile, Eggleston discusses the social processes by which examinations
contribute to determining the level of esteem granted to a given body of knowledge,
and by which different disciplines compete for the validation of their own
expertise as high-status (25-31). Synthesizing these concepts, one can
understand how the role of a certain assessment in selecting for excellence
rather than certifying basic competency might grant privileged status to the
qualities and values that are publicly perceived as enabling success on that
assessment. Such a role is in fact occupied by the SAT and AP examinations in
the admissions systems of elite universities, greatly amplifying the capacity
of timed writing assessment to influence the complex of social values attached
to the concept of writing ability.
What
this investigation has found, then, is that the timed essay examination, as an
“assessment genre,” tests a particular species of writing ability
distinguishable from the sort of skill demonstrated by the writing of longer
papers and consequently disseminates a different set of values and a different
understanding of writing as a practice. Especially when timed writing is
employed for the specific purpose of revealing fine distinctions among
individuals in the upper range of writing skill, the conception of writing
ability constructed by timed writing assessment may even begin to supplant the
social values undergirding traditional academic composition. Thus, in electing
to use timed writing assessment as a measure of writing ability, instructors
and administrators should take care to consider the potential consequences for
the culture of writing among their students and to recognize that the
representation of student abilities offered by such an assessment may not be
fully generalizable to other contexts. Otherwise, the results of this study
suggest, they may be inadvertently encouraging a reductive mode of writing and
elevating the importance of speed at the expense of thoughtfulness and
creativity.
Acknowledgements
This
paper was written for a class taught by Prof. John Lee. I gratefully
acknowledge his support and advice throughout the course of my research. I would
also like to thank my interviewees, without whose amicable participation and
insightful contributions this project could not have been completed: Dr. Brenda
Rinard, Dr. Roland Hsu, Dr. Barbara Clayton, Dr. Patricia Slatin, and Dr.
Jeffrey Schwegman.
References
Albertson,
Kathy, and Mary Marwitz. “The Silent Scream: Students Negotiating Timed Writing
Assessments.” Teaching English in the Two-Year College 29 (2001): 144-53. 6 May 2012.
Boud,
David. “Assessment and the Promotion of Academic Values.” Studies in Higher
Education 15 (1990): 101-11. 6 May 2012.
Broadfoot,
Patricia M. Education, Assessment, and Society. Buckingham, Eng.: Open
University P, 1996.
Cho,
Yeonsuk. “Assessing Writing: Are We Bound by Only One Method?” Assessing
Writing 8 (2003): 165-91. 6 May 2012.
Clayton,
Barbara. Personal interview. 2 May 2012.
Cooper,
Peter L. “The Assessment of Writing Ability: A Review of Research.” Educational
Testing Service Research Report 84-12. May 1984. 6 May 2012.
Del
Principe, Ann, and Janine Graziano-King. “When Timing Isn’t Everything:
Resisting the Use of Timed Tests to Assess Writing Ability.” Teaching
English in the Two-Year College 35 (2008): 297-311. 15 Apr. 2012.
Eggleston,
John. “School Examinations--Some Sociological Issues.” Selection, Certification,
and Control: Social Issues in Educational Assessment. Ed. Patricia
Broadfoot. London: Falmer, 1984. 17-34.
“English
Literature: The Exam.” 2012. College Board. 6 May 2012.
Freedman,
Sarah Warshauer. Evaluating Writing: Linking Large-Scale Testing and
Classroom Assessment. Berkeley, CA: National Center for the Study of
Writing, 1991.
Hanson,
F. Allan. “How Tests Create What They Are Intended to Measure.” Assessment: Social
Practice and Social Product. Ed. Ann Filer. London: Routledge Falmer, 2000.
67-81.
Hass,
Nancy. “The Writing Section? Relax.” New York Times 5 Nov. 2006. ProQuest
Historical Newspapers. 15 Apr. 2012.
Hsu,
Roland. Personal interview. 1 May 2012.
Huot,
Brian. “Toward a New Theory of Writing Assessment.” College Composition and
Communication 47 (1996): 549-66. JSTOR. 15 Apr. 2012.
Isaacs,
Emily, and Sean A. Molloy. “Texts of Our Institutional Lives: SATs for Writing
Placement: A Critique and Counterproposal.” College English 72 (2010): 518-38.
ProQuest Research Library. 6 May 2012.
Kobrin,
Jennifer L., et al. “Validity of the SAT for Predicting First-Year College
Grade Point Average.” College Board Research Report 2008-5. 2008. 15 Apr. 2012.
Lederman,
Marie Jean. “Why Test?” Writing Assessment: Issues and Strategies. Ed.
Karen L. Greenberg, Harvey S. Wiener, and Richard A. Donovan. New York:
Longman, 1986. 35-43.
Luna,
Catherine, Judith Solsken, and Eleanor Kutz. “Defining Literacy: Lessons from
High-Stakes Teacher Testing.” Journal of Teacher Education 51 (2000):
276-88. Sage Journals. 6 May 2012.
Moss,
Pamela A. “Can There Be Validity Without Reliability?” Educational
Researcher 23 (1994): 5-12. JSTOR. 6 May 2012.
Murphy,
Sandra. “Some Consequences of Writing Assessment.” Balancing Dilemmas in
Assessment and Learning in Contemporary Education. Ed. Anton Havnes and Liz
McDowell. New York: Routledge, 2008. 33-49.
Rinard,
Brenda. Telephone interview. 28 Apr. 2012.
Ruth,
Leo, and Sandra Murphy. Designing Tasks for the Assessment of Writing.
Norwood, NJ: Ablex, 1988.
“SAT Test Sections.” 2012. College Board. 6 May 2012.
Schwegman,
Jeffrey. Personal interview. 4 May 2012.
Slatin, Patricia. Personal interview. 3 May 2012.
“The
ACT Plus Writing.” 2012. ACT, Inc. 6 May 2012.
White, Edward M. “An Apologia
for the Timed Impromptu Essay Test.” College Composition and Communication 46
(1995): 30-45. JSTOR. 15 Apr. 2012.
Yancey,
Kathleen Blake. “Looking Back as We Look Forward: Historicizing Writing
Assessment.” College Composition and Communication 50 (1999): 483-503.
JSTOR. 15 Apr. 2012.
Endnote
[1]
This was not one of the standard questions posed to all interviewees, but one
that occurred as the conversations progressed. All the other instructors either
were not asked for their opinion on this subject or did not oppose the position
that timed writing assessment would yield only a somewhat crude method of
determining skill levels.
Appendix
In
the interviews conducted for this project, the course of the conversation and
the phrasing of the questions varied in each instance, but all the instructors
were asked a series of five basic questions modeled on the following.
- How accurately, in your experience, does timed writing
assessment reflect students’ broader academic writing ability? Does the
timed assessment environment emphasize certain aspects of writing skill at
the expense of others?
- What effects does the presence of timed writing
assessment in a course have on your own instructional techniques? Do you
recognize any influences on student writing patterns from the prevalence
of timed writing assessment throughout high school and college?
- What factors motivate you to employ timed writing
assignments in place of, or in addition to, regular papers? To what extent
do practical considerations such as plagiarism concerns or grading time
affect the decision to use timed writing assessment?
- What social values and attitudes toward writing, and
communication in general, are projected by the importance of timed writing
assessment in education?
- Would you consider the assessment environment of timed
writing to be more or less fair, or equitable, in comparison to the
evaluation of regular papers, given that timed writing assessment ensures
that exactly the same resources and amount of time are available to each
student?