
the teacher assigns an overall score.

      Source: Marzano Research, 2016o.

       Figure 2.2: Assessment with three sections.

      Student-generated assessments are those that individual students propose and execute. This particular strategy provides maximum flexibility to students in that they can select the assessment format and form that best fit their personalities and preferences.

      Probably the most unusual strategy in element 5—response patterns—involves different ways of scoring assessments. To illustrate this strategy, consider figure 2.3.

      Source: Marzano Research, 2016o.

       Figure 2.3: The percentage approach to scoring assessments.

      Figure 2.3 depicts an individual student’s response pattern on a test that has three sections: (1) one for score 2.0 content, (2) one for score 3.0 content, and (3) one for score 4.0 content. The section for score 2.0 content contains five items that are worth five points each for a total of twenty-five points. The student obtained twenty-two of the twenty-five points for a score of 88 percent, indicating that the student knows score 2.0 content. The student acquired 50 percent of the points for score 3.0 content and only 15 percent of the points for score 4.0 content. This pattern translates into an overall score of 2.5 on the test, indicating knowledge of score 2.0 content on the proficiency scale and partial knowledge of score 3.0 content.
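      As a purely illustrative sketch of how such a response pattern might be translated into an overall scale score, consider the following fragment. The 80 percent and 40 percent cut-offs, the function names, and the decision logic are assumptions invented for this example; they are not thresholds prescribed by the proficiency-scale approach.

```python
# Illustrative only: the cut-offs below (80 percent for "demonstrated,"
# 40 percent for "partial") and the mapping logic are assumptions for
# this example, not prescribed thresholds.

def section_percentage(points_earned, points_possible):
    """Percentage of points earned in one section of the test."""
    return 100 * points_earned / points_possible

def overall_scale_score(pct_2_0, pct_3_0, pct_4_0, demonstrated=80, partial=40):
    """Translate three section percentages into a single 0-4 scale score."""
    if pct_2_0 < demonstrated:
        # Score 2.0 content is not yet secure.
        return 1.5 if pct_2_0 >= partial else 1.0
    if pct_3_0 >= demonstrated:
        # Score 3.0 content is secure; check score 4.0 content.
        if pct_4_0 >= demonstrated:
            return 4.0
        return 3.5 if pct_4_0 >= partial else 3.0
    # Score 2.0 content is secure; score 3.0 content is partial at best.
    return 2.5 if pct_3_0 >= partial else 2.0

# The response pattern from figure 2.3: 22 of 25 points (88 percent) on
# score 2.0 content, 50 percent on 3.0 content, 15 percent on 4.0 content.
print(overall_scale_score(section_percentage(22, 25), 50, 15))  # 2.5
```

      Applied to the response pattern in figure 2.3, this sketch yields the same overall score of 2.5 described above.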

      When the strategies in this element produce the desired effects, teachers will observe the following behaviors in students.

      • Students can explain what the score they received on an assessment means relative to a specific progression of knowledge.

      • Students can explain what their grades mean in terms of their status in specific topics.

      • Students propose ways they can demonstrate their level of proficiency on a scale.

      The design question pertaining to using assessments is, How will I design and administer assessments that help students understand how their test scores and grades are related to their status on the progression of knowledge they are expected to master? The two elements that pertain to this design area provide specific guidance regarding this overall design question. Teachers can easily turn these elements into more focused planning questions.

      • Element 4: How will I informally assess the whole class?

      • Element 5: How will I formally assess individual students?

      The teacher can address the planning question for element 4 in an opportunistic manner in that he or she might simply take advantage of situations that lend themselves to informal assessments of the whole class. For example, a teacher is conducting a lesson on score 2.0 content. She decides to employ electronic voting devices to keep track of how well students are responding to the questions. As the lesson progresses, she notices that more and more students are responding correctly to questions. She uses this information as an opportunity to celebrate the apparent growth in understanding of the class as a whole. While she could have planned for this activity, the opportunity simply presented itself, and she acted on it.

      The planning question for element 5 generally requires more formal design of the assessments teachers will administer over the course of a unit or set of related lessons. Typically, teachers like to begin a unit with a pretest that addresses score 2.0, 3.0, and 4.0 content on the proficiency scale. They must plan for this. It is also advisable to plan for a similar post-test covering the same content but using different items and tasks. Although teachers may plan for one or more other tests to administer to students in between the pre- and post-tests, it is also advisable for the teacher to construct assessments as needed and administer them. As long as they score all assessments using the 0–4 system from the proficiency scale, teachers can compare all scores, providing a clear view of students’ learning over time.

      The major change this design area implies is a shift from an assessment perspective to a measurement perspective. This is a veritable paradigm shift that has far-reaching implications. Currently, teachers view assessment as a series of independent activities that gather information about students’ performance on a specific topic that has been the focus of instruction. Teachers score most, if not all, of these assessments using a percentage score (or some variation thereof). At some point, teachers combine all students’ individual scores in some way to provide an overall score for the students on each topic. Usually, teachers use a weighted average, with scores on some tests counting more than others. They then translate the overall score to some type of overall percentage or grade.
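      The following sketch illustrates the kind of weighted averaging described above. The assessment scores and weights are hypothetical and are shown only to make the arithmetic concrete.

```python
# Hypothetical scores and weights, shown only to make the arithmetic concrete.

def weighted_overall_percentage(scores_and_weights):
    """Combine percentage scores into one overall percentage using weights."""
    total_weight = sum(weight for _, weight in scores_and_weights)
    weighted_sum = sum(score * weight for score, weight in scores_and_weights)
    return weighted_sum / total_weight

# Two quizzes and a unit test; the unit test counts twice as much as a quiz.
scores = [(72, 1), (85, 1), (90, 2)]
print(weighted_overall_percentage(scores))  # 84.25
```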

      This process tells us very little about what specific content students know and don’t know. In contrast, scores teachers generate from a measurement perspective provide explicit knowledge about what students know and don’t know. This is because a measurement approach translates scores on assessments into scores on a proficiency scale. No matter what type of assessment a teacher uses, it is always translated into the metric of a scale. For example, a teacher uses a pencil-and-paper assessment and assigns a score of 2.0 on the proficiency scale. A few days later, the teacher has a discussion with the student about score 3.0 content and concludes that the student has partial knowledge of that content. The teacher assigns a score of 2.5 on the proficiency scale based on that interaction. A week later, the teacher administers a test on the 3.0 content and concludes that the student demonstrates no major errors or omissions. Based on this assessment, the teacher assigns a score of 3.0 on the proficiency scale. This process employs a measurement perspective like that shown in figure 2.4.

      Figure 2.4 indicates that assessments can take many forms, including tests, discussions, student-generated assessments, and so on. These different types of assessment might have their own format-specific scores. For example, a teacher might initially score 2.0 content on a percentage basis. This percentage score is a format-specific score. Teachers can then translate format-specific scores into a score on a proficiency scale. This is the essence of the measurement process—assessments of differing formats and scoring protocols are always translated into a score on a proficiency scale. Measurements over time provide a picture of students’ status at any given time as well as their growth. I believe this process allows teachers to gather more accurate, more useful information about students’ status and growth than the current practice of averaging test scores.
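      A minimal sketch of what record keeping under a measurement perspective might look like appears below. The dates, assessment formats, and scale scores are hypothetical; the point is simply that every assessment, whatever its format, ends up as a score on the same 0–4 proficiency scale, so status and growth can be read directly from the sequence.

```python
# Hypothetical record of one student's assessments on a single topic.
# Every assessment, whatever its format, is recorded as a 0-4 scale score.
from datetime import date

scale_scores = [
    (date(2024, 3, 1), "pencil-and-paper test", 2.0),
    (date(2024, 3, 5), "discussion with the teacher", 2.5),
    (date(2024, 3, 12), "test on score 3.0 content", 3.0),
]

current_status = scale_scores[-1][2]          # most recent scale score
growth = current_status - scale_scores[0][2]  # change since the first score

print(f"Current status: {current_status}, growth over the unit: {growth}")
# Current status: 3.0, growth over the unit: 1.0
```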

      Source: Adapted from Marzano, Norford, Finn, & Finn, in press.

       Figure 2.4: The measurement process.


      CHAPTER 3

      Conducting