Title: The Science of Reading
Author: Various authors
Publisher: John Wiley & Sons Limited
Genre: Linguistics
ISBN: 9781119705130
Visual word recognition has been the focus of an enormous amount of research because of its complexity and importance, and because most of what is involved would otherwise be hidden from awareness (for reviews, see Rastle, 2016; Cohen‐Shikora & Balota, 2016). The goal of this research is to develop theories that explain its many aspects: the knowledge and processes that underlie word recognition; the linguistic, cognitive, and perceptual capacities recruited for the purpose; how the skill develops, the bases of individual differences, and how the brain makes it all happen, among other topics. Theories are often expressed as “models” that provide detailed accounts of important components of the word recognition system. Although the use of such models dates from the nineteenth century, progress was greatly accelerated by two developments from the 1970s–1980s. The first was Marshall and Newcombe’s (1973) formulation of what came to be known as the “dual‐route” model of reading (Coltheart, 1978). The model was an account of impairments in reading aloud observed in patients following brain injury; Coltheart and colleagues later applied it to unimpaired reading and learning to read. Much of the subsequent research in this area can be seen as following from this pioneering work. The second was the creation of a “connectionist” computational model of reading, again focused on reading aloud, by Seidenberg and McClelland (1989; hereafter SM89). This work was important because it challenged the core assumptions underlying the dual‐route approach and introduced a new theoretical framework for visual word recognition and other types of lexical processing, based on the PDP framework developed by Rumelhart et al. (1986). Coltheart and colleagues subsequently developed several computational models of the dual‐route theory, collectively known as the dual‐route cascade (DRC) model (Coltheart et al., 1993; Coltheart et al., 2001).
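The core dual‐route idea described above can be sketched in a few lines of code. This is our own toy simplification for illustration, not the DRC implementation: a lexical route that looks up known whole words, and a nonlexical route that assembles pronunciations from grapheme‐phoneme correspondence rules (here reduced to one rule per letter; the mini‐lexicon and rule table are invented for the example).

```python
# Toy sketch of the dual-route idea (illustrative only, not the DRC model):
# a lexical route (whole-word lookup) plus a nonlexical route (letter-to-sound rules).
LEXICAL = {"have": "hæv", "pint": "paɪnt", "mint": "mɪnt"}  # known whole words

# Drastically simplified one-letter "rules"; silent final E maps to nothing.
GPC_RULES = {"h": "h", "a": "æ", "v": "v", "e": "",
             "p": "p", "i": "ɪ", "n": "n", "t": "t",
             "m": "m", "b": "b"}

def nonlexical(word: str) -> str:
    """Assemble a pronunciation letter by letter from the GPC rules."""
    return "".join(GPC_RULES.get(ch, "?") for ch in word)

def pronounce(word: str) -> str:
    """Lexical route wins for known words; novel strings fall to the rule route."""
    return LEXICAL.get(word, nonlexical(word))

print(pronounce("pint"))    # 'paɪnt': exception word, read via the lexical route
print(nonlexical("pint"))   # 'pɪnt': the rule route alone would regularize it
print(pronounce("bint"))    # 'bɪnt': a nonword, readable only by rule
```

The sketch shows why the theory posits two routes at all: the rule route alone mispronounces exceptions such as PINT, while the lexicon alone cannot read nonwords such as BINT.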
An enormous amount has been learned since then. Visual word recognition is one of the great success stories in modern cognitive science and neuroscience. For much of this period, the existence of two competing theoretical approaches – dual‐route and connectionist – accelerated research progress. These theories provided frameworks for investigating numerous aspects of reading and greatly expanded the scope of research in English and other languages. The theories also stimulated the development of computational models of specific types of information (e.g., orthography, semantics) and related phenomena (e.g., morphology: Seidenberg & Gonnerman, 2000; Seidenberg & Plaut, 2014). Visual word recognition also became a domain in which to explore contrasting approaches to computational modeling of cognitive phenomena (Coltheart, 2005; Seidenberg & Plaut, 2006), and methods for studying brain structure and function (e.g., Cox et al., 2015; Woollams et al., this volume). Given the sustained interest in the topic over many years, visual word recognition represents an important case study illustrating what modern cognitive science and neuroscience have achieved.
The purpose of this chapter is to provide a critical perspective on this long endeavor, focusing on the role of computational modeling. Computational models of cognition serve two essential, interacting functions. One is methodological. Modeling requires theoretical claims to be specified at a level that allows them to be implemented as working simulations. A theory’s validity can then be assessed by determining if a model incorporating its main assumptions can reproduce the phenomena the theory is meant to explain. This method has been widely embraced as an advance over the informal models of the “box‐and‐arrow” era in which the dual‐route approach originated (Seidenberg, 1988).
The second function is theoretical. Models are implemented within theoretical frameworks such as production systems (Anderson, 1983), connectionist networks (Thomas & McClelland, 2008), and Bayesian approaches (Griffiths et al., 2010) that introduce novel ways to conceptualize behavior. Applying such frameworks to phenomena such as reading can yield theories that are genuine departures from previous thinking. Comparing a model’s behavior to people’s then leads to accepting, adjusting, or abandoning the theoretical account, and generates new questions to investigate. This feedback loop between model and theory, each grounded by empirical evidence, is a powerful approach to investigating complex phenomena (Figure 2.1).
Figure 2.1 Theory development and evaluation using computational models. Theoretical frameworks are used to develop theories of particular phenomena. Models that implement core parts of the theory are intended to simulate target data. Model performance feeds back on theory development and generates new hypotheses and empirical tests.
With the benefit of 30‐some years of hindsight we can ask: Did computational models of reading yield the expected benefits? Did they indeed provide a basis for assessing competing theories? Did they yield new theoretical insights? In short, given the promise of the approach and several decades of modeling research, what have we learned?
Like many others, we think that computational modeling proved to be an invaluable tool in both methodological and theoretical respects. As a method for testing theories, attempts to implement the dual‐route theory as working models revealed apparently intractable limitations of the approach: researchers were unable to implement models that reproduced the basic behavioral phenomena concerning the pronunciation of regular words, irregular words, and nonwords that the theory was developed to explain.
Models based on the connectionist framework reproduced these effects, as well as additional phenomena that the dual‐route theory did not predict and could not simulate correctly. The dual‐route approach is limited by its core assumption that pronunciations are either rule‐governed or exceptions. This dichotomy overlooks the fact that spelling‐sound correspondences exhibit varying degrees of consistency (Table 2.1). Regular (rule‐governed) words and exceptions occupy different points on this consistency continuum. Importantly, the consistency account also predicts that words and nonwords can exhibit intermediate degrees of consistency. Consistency effects have been observed in numerous studies dating from Glushko (1979). Connectionist models could reproduce regularity, consistency, and other effects because they encode spelling‐sound correspondences as statistical dependencies rather than as rules and exceptions. The connectionist models also advanced theorizing by showing how concepts and computational mechanisms from the PDP framework could provide new insights about complex behavior.
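The notion of graded consistency can be made concrete with a small sketch. The measure below (the share of lexicon words spelled with the same orthographic rime that also share its pronunciation) and the mini‐lexicon are our own toy construction for illustration, not part of any published model, though the word choices follow the regular/inconsistent/exception contrast discussed above.

```python
# Toy illustration of spelling-sound consistency as a continuum rather than
# a rule/exception dichotomy. Invented mini-lexicon; not a published model.
from collections import Counter

# rime spelling -> (word, rime pronunciation) pairs in a tiny lexicon
LEXICON = {
    "-ust": [("must", "ʌst"), ("just", "ʌst"), ("dust", "ʌst")],
    "-ave": [("gave", "eɪv"), ("save", "eɪv"), ("cave", "eɪv"), ("have", "æv")],
    "-int": [("mint", "ɪnt"), ("hint", "ɪnt"), ("tint", "ɪnt"), ("pint", "aɪnt")],
}

def consistency(rime: str, word: str) -> float:
    """Fraction of words with this rime spelling sharing `word`'s pronunciation."""
    target = dict(LEXICON[rime])[word]
    counts = Counter(pron for _, pron in LEXICON[rime])
    return counts[target] / sum(counts.values())

print(consistency("-ust", "must"))  # 1.0:  fully consistent ("regular")
print(consistency("-ave", "gave"))  # 0.75: regular but inconsistent (enemy: HAVE)
print(consistency("-ave", "have"))  # 0.25: exception word
```

On this measure MUST, GAVE, and HAVE fall at different points on a single continuum, which is the distinction the rules‐versus‐exceptions dichotomy cannot express.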
Table 2.1 Regularity versus Consistency: What’s the Difference?
Categories of Words in Dual‐Route Theory

| Regular/Rule‐governed | Irregular/Exception |
| --- | --- |
| MUST CHAIR DIME BOAT | HAVE DONE SAID PINT |

Exceptions = words whose pronunciations are not correctly generated by rules.

Glushko Inconsistent Words

Regular but Inconsistent