The Science of Reading

Author: Group of authors
Publisher: John Wiley & Sons Limited
Genre: Linguistics
ISBN: 9781119705130
The goal is to identify essential properties of word recognition, not to simulate as many effects as possible, which can include ones that are artifactual or unrepresentative. However, the focus on “critical phenomena” resulted in dropping numerous effects from consideration (e.g., pseudohomophone effects, position of irregularity effects), allowing the researchers to focus on improving the treatment of consistency, regularity, and nonword pronunciation. Thus, the models are not “nested” with respect to coverage of the data.

      Finally, Perry et al. (2007) embraced Coltheart et al.’s (2001) strategy of evaluating models against a benchmark study for each phenomenon. Why studies such as Paap and Noel (1991) and Weekes (1997) are treated as “gold standards” is unclear: their methods were no more advanced than those of other research, and their results were not highly representative. The “gold standard” approach also removes any requirement to report other simulations that fail, raising the “file‐drawer” issue again. These weak criteria for assessing model performance also vitiate the importance assigned to the number of phenomena simulated.

      Philosophy aside, how well does the CDP+ model perform? We have conducted numerous simulations with it that can be repeated using publicly available data (see archive). The picture is mixed. The model produces consistency effects for words, whereas the DRC model did not; that is an advance. It produces the consistency effect in their “gold standard” study (an experiment by Jared, 2002) but missimulates other studies, including the Jared (1997) study that Coltheart et al. (2001) took as their benchmark. The model performs much better on nonwords than the DRC, reproducing the nonword consistency effects from Glushko (1979); see Pritchard et al. (2012) for other concerns, however. Like the DRC, CDP+ produces an overall length effect for nonwords but not for words, yet misses the effect for lower‐frequency words.

      This analysis of the lexical route as a placeholder for the orthography➔semantics➔phonology side of the triangle gains additional support from research by Perry et al. (2019). This implementation of the CDP+ model employed a simpler orthography➔phonology architecture than other CDP+ models: It is a two‐layer network with direct connections between orthography and phonology and no hidden layers. With reduced capacity this network can encode simple mappings but not more complex ones, increasing dependence on the lexical system. Perry et al. (2019) related this reduction in capacity to developmental dyslexia.
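The capacity limitation described above can be illustrated with a minimal sketch. This is not the Perry et al. (2019) implementation; it is a generic demonstration, under illustrative assumptions, that a network with direct input➔output connections and no hidden layer can only learn linearly separable mappings. AND stands in for a “simple” correspondence, XOR for a “complex” one that such a network cannot encode:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_no_hidden(X, y, lr=0.5, epochs=5000, seed=0):
    """Gradient descent on a network with direct input->output
    weights and no hidden layer (i.e., logistic regression)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        grad = p - y                      # cross-entropy gradient w.r.t. logits
        w -= lr * (X.T @ grad) / len(y)
        b -= lr * grad.mean()
    return (sigmoid(X @ w + b) > 0.5).astype(int)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
simple_map = np.array([0, 0, 0, 1])   # AND: linearly separable ("simple")
complex_map = np.array([0, 1, 1, 0])  # XOR: not linearly separable ("complex")

acc_simple = (train_no_hidden(X, simple_map) == simple_map).mean()
acc_complex = (train_no_hidden(X, complex_map) == complex_map).mean()
print(acc_simple, acc_complex)
```

The separable mapping is learned perfectly, while no weight setting in this architecture can get XOR fully right; in the terms of the text, the reduced-capacity pathway handles simple mappings and leaves the complex ones to the other system.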

      This is again the division of labor account from the triangle theory. Seidenberg and McClelland (1989) and Harm and Seidenberg (1999) reduced the capacity of the orthography➔phonology network by decreasing the number of hidden units rather than wholly eliminating them. The network is then limited to learning relatively simple spelling‐sound correspondences, requiring additional input from orthography➔semantics➔phonology. Plaut et al. (1996) provided simulations and formal analyses of these effects. The most important difference is that the division‐of‐labor account specifically predicts the greater role of semantics in reading aloud words such as yacht, aisle, and chef, as confirmed in behavioral and neuroimaging experiments cited above. Perry et al.’s (2019) use of a phonological lexicon does not make this prediction.

      In summary, the major continuity here is with the triangle framework and its account of the division of labor between pathways. Having implemented a version of the orthography➔phonology part of the triangle, Perry, Ziegler, and colleagues could complete the evolution of their approach by dropping the lexical route in favor of the orthography➔semantics➔phonology parts of the triangle, which are needed for independent reasons.

      Recent work using traditional dual‐route models has focused on reading acquisition (Pritchard et al., 2018; Perry et al., 2019), an important area where additional computational modeling could be very informative. However, this research inherits the limitations of the models of adult performance on which it is based.

      The Self Teaching‐DRC model (Pritchard et al., 2018) attempts to show how children learn grapheme‐phoneme correspondence rules and add words to the lexicon, using Share’s (1995) “self‐teaching” mechanism. As the authors noted, “The [ST‐DRC] model uses DRC’s sublexical route and the interactivity between lexical and sublexical routes to simulate phonological recoding.” This is the same mechanism that failed to simulate skilled performance adequately. The paradox, then, is that if the researchers successfully simulate the acquisition of this knowledge, they will arrive at the model that missimulates adult performance. Note that the consistency effects that confounded the DRC model are observed in readers as young as 6–7 years old (Backman et al., 1984; Treiman et al., 1995).

      The dual‐route approach nonetheless remains influential in education. The intuition that reading English requires learning pronunciation rules and memorizing irregular words is a premise of phonics instruction dating from the nineteenth century (Seidenberg, 2017), and the rules‐and‐exceptions approach retains its intuitive appeal. However, there is little agreement among researchers or educational practitioners about either part. On the rule side, there are widely varying proposals about what the rules are and how they should be taught. On the lexical side, reading curricula disagree about which words have irregular pronunciations; many hold that higher‐frequency words, not just exceptions, need to be memorized, but differ on how many words are involved. (See articles in Reading Research Quarterly, 2020, volume 55, S1, for discussion.) In our view, the lack of convergence on these issues arises from the mistaken assumption that the system consists of rules with exceptions.

      If, as other evidence has suggested, the dual‐route approach is not an adequate account of written English, that may undermine the effectiveness of pedagogical practices based on it. The idea that spelling‐sound correspondences are quasiregular and learned via a combination of implicit statistical learning and explicit instruction has not penetrated very far in education, probably because it is not intuitive and requires background knowledge that most educators and educational researchers lack. If this is a more accurate characterization of this knowledge and how it is learned, it may provide the basis for more effective instruction.