Title: Formative Assessment & Standards-Based Grading
Author: Robert J. Marzano
Publisher: Ingram
Genre: Educational literature
Series: Classroom Strategies
ISBN: 9781935542438
Table 1.3 Achievement Gain Associated With Number of Assessments Over Fifteen Weeks
Number of Assessments | Effect Size | Percentile Point Gain
0 | 0 | 0
1 | 0.34 | 13.5
5 | 0.53 | 20
10 | 0.60 | 22.5
15 | 0.66 | 24.5
20 | 0.71 | 26
25 | 0.78 | 28.5
30 | 0.82 | 29
Note: Effect sizes computed using data reported by Bangert-Drowns, Kulik, and Kulik (1991).
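The percentile point gains in this table (and throughout the chapter) are consistent with the standard interpretation of an effect size as a z-score on the standard normal distribution: the gain is the distance between the percentile at that z-score and the 50th percentile. The Python sketch below illustrates that conversion under this assumption; the function is illustrative only, and the book's own conversion table appears in appendix B.

```python
# A minimal sketch of the effect-size-to-percentile conversion assumed
# above: treat the effect size (ES) as a standard normal z-score and
# report how far its percentile sits above the 50th. Illustrative only;
# the book's actual conversion table is the one in appendix B.
from statistics import NormalDist

def percentile_point_gain(effect_size: float) -> float:
    """Approximate percentile point gain for a given effect size."""
    return (NormalDist().cdf(effect_size) - 0.5) * 100

# Spot-check against table 1.3 (the table rounds to the nearest half point):
for es in (0.34, 0.53, 0.71, 0.82):
    print(f"ES {es:.2f} -> {percentile_point_gain(es):.1f} percentile points")
```

The same mapping accounts for the translations cited later in this chapter, such as an ES of 0.70 corresponding to a 26 percentile point gain and an ES of 0.92 corresponding to a 32 percentile point gain.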
The second category in table 1.2, general effects of assessment, is the broadest and incorporates a variety of perspectives on assessment. Again, many of the specific findings from these studies resurface as recommendations in subsequent chapters. Here it suffices to note that, in the aggregate, these studies indicate that properly executed assessments can be an effective tool for enhancing student learning.
The third category in table 1.2 deals with providing assessment feedback to teachers. Lynn Fuchs and Douglas Fuchs (1986) found that providing teachers with graphic representations of student progress was associated with an ES of 0.70, which translates into a 26 percentile point gain. This is quite consistent with a set of studies conducted at Marzano Research (since renamed Marzano Resources) in which teachers had students chart their progress on specific learning goals (Marzano Resources, 2009). The results are depicted in table 1.4.
Table 1.4 Studies on Students Tracking Their Progress
Study | Effect Size | Percentile Gain
1 | 2.44 | 49
2 | 3.66 | 49
3 | 1.50 | 43
4 | -0.39 | -15
5 | 0.75 | 27
6 | 1.00 | 34
7 | 0.07 | 3
8 | 1.68 | 45
9 | 0.07 | 3
10 | 1.20 | 38
11 | -0.32 | -13
12 | 0.43 | 17
13 | 0.84 | 30
14 | 0.63 | 24
Average | 0.92 | 32
Table 1.4 reports the results of fourteen studies conducted by K–12 teachers on the effects of tracking student progress. The average ES of these fourteen studies was 0.92, which translates into a 32 percentile point gain. Taking these findings at face value, one would conclude that learning is enhanced when students track their own progress.
Note that in studies 4 and 11, tracking student progress had a negative effect on student achievement (indicated by the negative ESs). As is the case with all assessment (and instructional) strategies, this strategy does not work equally well in all situations. Effective assessment requires determining how to use a strategy correctly. In subsequent chapters, we make recommendations as to the correct way to track student progress.
Formative Assessments
Formative assessment has become very popular in the last decade. It is typically contrasted with summative assessment: summative assessments are employed at the end of an instructional episode, whereas formative assessments are employed while instruction is occurring. As Susan Brookhart (2004, p. 45) explained, “Formative assessment means information gathered and reported for use in the development of knowledge and skills, and summative assessment means information gathered and reported for use in judging the outcome of that development.”
Much of this popularity can be traced to Paul Black and Dylan Wiliam (1998a), who summarized the findings from more than 250 studies on formative assessment. The ESs in those studies ranged from 0.4 to 0.7, and they drew the following conclusion:
The research reported here shows conclusively that formative assessment does improve learning. The gains in achievement appear to be quite considerable, and as noted earlier, among the largest ever reported for educational interventions. As an illustration of just how big these gains are, an effect size of 0.7, if it could be achieved on a nationwide scale, would be equivalent to raising the mathematics attainment score of an “average” country like England, New Zealand, or the United States into the “top five” after the Pacific rim countries of Singapore, Korea, Japan, and Hong Kong. (p. 61)
In effect, Black and Wiliam were saying that an ES of 0.70 (the largest ES reported in the studies they summarized), when sustained for an entire nation, would dramatically enhance student achievement. Indeed, consulting the table in appendix B (page 155), we see that an ES of 0.70 is associated with a 26 percentile point gain in student achievement. The reporting of these findings captured the attention of U.S. educators.
The Black and Wiliam study is sometimes referenced as a meta-analysis of some 250 studies on formative assessment. As described in appendix B of this book, a meta-analysis is a quantitative synthesis of research in a specific area. When performing a meta-analysis, a researcher attempts to compute an average ES of a particular innovation (in this case, formative assessment) by examining all of the available studies. While Black and Wiliam certainly performed a rigorous analysis of the studies they examined, they did not conduct a traditional meta-analysis. In fact, in a section of their article titled “No Meta-Analysis,” they explain, “It might seem desirable, and indeed might be anticipated as conventional, for a review of this type to attempt a meta-analysis of the quantitative studies that have been reported” (1998a, p. 52). They go on to note, however, that the 250 studies they examined were simply too different to compute an average ES.
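To make the distinction concrete, the core computation of a traditional meta-analysis can be sketched as a weighted average of study effect sizes, with each study weighted by the precision of its estimate. The sketch below is a generic fixed-effect illustration with invented study values; it is not Black and Wiliam's procedure, which, as just noted, they deliberately declined to carry out.

```python
# A sketch of the core computation in a traditional (fixed-effect)
# meta-analysis: an inverse-variance weighted average effect size, so
# that larger, more precise studies count more toward the average.
# Generic illustration only; the study values below are invented.

def average_effect_size(studies: list[tuple[float, float]]) -> float:
    """studies: (effect_size, variance) pairs; returns the weighted mean ES."""
    weights = [1.0 / var for _, var in studies]
    weighted_sum = sum(w * es for (es, _), w in zip(studies, weights))
    return weighted_sum / sum(weights)

# Hypothetical studies: (ES, variance of the ES estimate).
studies = [(0.40, 0.010), (0.55, 0.020), (0.70, 0.050)]
print(f"Average ES: {average_effect_size(studies):.2f}")
```

The difficulty Black and Wiliam pointed to is precisely that a single average like this is only meaningful when the underlying studies measure comparable interventions and outcomes, which their 250 studies did not.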
It is important to keep two things in mind when considering the practice of formative assessment. The first is that, by definition, formative assessment is intimately tied to the formal and informal processes in classrooms. Stated differently, it would be a contradiction in terms to use “off the shelf” formative assessment designed by test makers. James Popham (2006) has harshly criticized the unquestioning use of commercially prepared formative assessments. He noted:
As news of Black and Wiliam’s conclusions gradually spread into faculty lounges, test publishers suddenly began to relabel many of their tests as “formative.” This name-switching sales ploy was spurred on by the growing perception among educators that formative assessments could improve their students’ test scores and help schools dodge the many accountability bullets being aimed their way. (p. 86)
To paraphrase Popham (2006), externally developed assessments simply do not meet the defining characteristics of formative assessment. Lorrie Shepard (2006) made the same point:
The research-based concept of formative assessment, closely grounded in classroom instructional processes, has been taken over—hijacked—by commercial test publishers and is used instead to refer to formal testing systems called “benchmark” or “interim assessment systems.” (as cited in Popham, 2006, p. 86)
A similar criticism might be leveled at many district-made “benchmark” assessments.