Title: Antiracist Writing Assessment Ecologies
Author: Asao B. Inoue
Publisher: Ingram
Genre: Educational literature
Series: Perspectives on Writing
ISBN: 9781602357754
The bottom line is that we cannot separate race, or our feelings about the concept or about particular racial formations, which include historical associations with particular racialized bodies in time and space, from languages, especially varieties of English in the U.S. This makes language, like the dominant discourse, racialized as white (I’ll say more about this later in this chapter). More important, as judges of English in college writing classrooms, we cannot avoid this racializing of language when we judge writing, nor can we avoid the influence of race in how we read and value the words and ideas of others. Lisa Delpit offers a poetic way to understand language and its connection to the body, which I read with racial undertones: “[o]ur home language is as viscerally tied to our beings as existence itself—as the sweet sounds of love accompany our first milk” (2002, p. xvii). Freire has another way of pointing out the power of language in our lives, the power it has in making our lives and ourselves. He says, “reading the world always precedes reading the word, and reading the word implies continually reading the world” (1987, p. 23). When we read the words that come from the bodies of our students, we read those bodies as well, and by reading those bodies we also read the words they present to us; some may bear stigmata, some may not.
The Function of Race in the EPT Writing Assessment
I’ve just made the argument that race generally speaking is important to English as a language that we teach and assess in writing classrooms. But how is race implicated in writing assessments? How does race function or what does it produce in writing assessments?
One way to consider the function of race in writing assessment is to consider the consequences of writing assessments. Breland et al. (2004) found differences in mean scores on the SAT essay among Asian-American, African-American, Hispanic, and white racial formations, with African-Americans rated lowest (more than a full point lower on an 8-point scale) and Hispanic students rated slightly higher (p. 5), yet when looking for differences in mean SAT essay scores between “English first” (native speaker) and “English not first” (multilingual) students, they found no statistically significant differences (p. 6)—the mean scores were virtually identical in these two groups. I don’t know how Breland and his colleagues determined native speaking proficiency, but my guess is that it may fall roughly along racial lines. These findings have been replicated by others (Gerald & Haycock, 2006; Soares, 2007), who found that SAT scores correlate strongly with parental income, education, and test-takers’ race. Similarly, in Great Britain, Steve Strand (2010) found that Black Caribbean British students between ages 7 and 11 made less progress on national tests than their white British peers because of systemic problems in schools and their assessments. These patterns among racial formations hold at Fresno State as well, where African-American, Latino/a, and Hmong students are assessed lower on the EPT (see Inoue & Poe, 2012, for historical EPT scores by racial formation) than their white peers and attain lower final portfolio scores in the First Year Writing (FYW) program readings conducted each summer for program assessment purposes (Inoue, 2009a; 2012, p. 88). Race appears to be functioning in each assessment, producing similar racialized consequences, always benefiting a white middle-class racial formation.
Between 2011 and 2014, I directed the Early Start English and Summer Bridge programs at Fresno State. All students designated as remedial by the English Placement Test (EPT), a state-wide, standardized test with a timed writing component, must take an Early Start or Bridge course in order to begin their studies on any California State University campus. Even a casual look into the classrooms and over the roster of all students in these programs shows a stunning racial picture. These courses are ostensibly organized and filled by a test of language competency; however, each summer it is the same. The classes are filled almost exclusively with students of color. Of all the 2013 Bridge students, only four were designated as white by their school records—that’s 2% of the Bridge population. The Early Start English program is almost identical. So at least in this one local example of a writing assessment (the EPT), when we talk about linguistic difference, or remediation (these are synonymous in many cases), we are talking about race in conventional ways.7
The remediation numbers that the EPT produces through blind readings by California State University (CSU) faculty readers also support my claims. In fall of 2013, as shown in Table 1, all students of color—it doesn’t matter what racial formation or ethnic group we choose—were designated by the EPT as remedial at dramatically higher rates than white students. The Asian-American category, which at Fresno State is mostly Hmong students, is the most vulnerable to this test, with the Asian-American formation designated as remedial in English at a rate 43.9 percentage points higher than the white formation.8 How is it that these racially uneven test results are possible, and possible at such consistent rates? How is it that the EPT can draw English remediation lines along racial lines so well?
Table 1. At Fresno State, students of color are deemed remedial at consistently higher rates than white students by the EPT (California State University Analytic Studies, 2014)

| Race | No. of First-Year Students | No. Proficient in English | % Designated as Remedial |
| --- | --- | --- | --- |
| African-American | 119 | 61 | 48.7% |
| Mexican-American | 1,298 | 593 | 54.3% |
| Asian-American | 495 | 161 | 67.5% |
| White Non-Latino | 601 | 459 | 23.6% |
| Total | 2,965 | 1,548 | 47.8% |
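The percentages in Table 1 follow directly from the first two columns: the remediation rate for each group is the number of first-year students not deemed proficient, divided by the total number of first-year students in that group. A minimal sketch of that arithmetic, using the table's own figures:

```python
# Remediation rates from Table 1:
# % remedial = (first-year students - proficient students) / first-year students.
# Figures are taken directly from the table above.
table = {
    "African-American": (119, 61),
    "Mexican-American": (1298, 593),
    "Asian-American": (495, 161),
    "White Non-Latino": (601, 459),
    "Total": (2965, 1548),
}

for group, (first_year, proficient) in table.items():
    pct_remedial = (first_year - proficient) / first_year * 100
    print(f"{group}: {pct_remedial:.1f}% designated remedial")
```

Note that the four listed groups do not sum to the 2,965 total, so the source table evidently omits some smaller categories; the computed rates nonetheless match the reported percentages for every row.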
While my main focus in this book is on classroom writing assessment, the way judgments are formed in large-scale ratings of timed essays is not much different from the way a single teacher reads and judges her own students. In fact, both show how language is connected to the racialized body. The processes, contexts, feedback, and consequences in a classroom may be different in each case, but how race functions in key places in classroom writing assessment, such as the reading and judgment of the teacher, or the writing construct used as a standard by which all performances are measured, is, I argue, very similar. And race is central to this similarity because it is central to our notions of language use and its value.
To be fair, there is more going on that produces the above numbers. There are educational, disciplinary, and economic structures at work that prepare many students of color in and around Fresno unevenly compared with their white peers. Most Blacks in Fresno, for example, are poor and attend poorer schools, because schools are supported by local taxes, which are low in those parts of Fresno. The same goes for many Asian-American students. But why would Mexican-American students have remediation rates more than twice those of white students? There is more going on than economics and uneven conditions at local schools.
Within the test, there are other structures causing certain discourses to be rated lower. Could the languages used by students of color be stigmatized, causing them to be rated lower, even though raters do not know who is writing individual essays when they read for the EPT? Consider the guide provided to schools and teachers in order to help them prepare their high school students to take the EPT. The guide, produced by the CSU Chancellor’s Office, gives the rubric used to judge the written portion of the test. Each written test can receive a score from 1 to 6, with 6 being “superior” quality, 4 being “adequate,” 3 being “marginal,” and 1 being “incompetent.”