Which lobe of the brain is responsible for recognizing print letters and letter patterns?

Current Issues in the Education of Students with Visual Impairments

Mackenzie E. Savaiano, ... Deborah D. Hatton, in International Review of Research in Developmental Disabilities, 2014

4.2 Word Identification Level Processes

One way the phonological processor compensates for slowed orthographic processing is through subvocalization. This strategy keeps the letters/characters already read in a word active while the rest of the word is being read. It is possible that braille readers use this strategy to compensate for the slow orthographic processing that results from braille's single-cell unit of recognition.

Millar (1990) disrupted this compensatory process by suppressing subvocalization. Not surprisingly, this significantly affected comprehension processes. These results have also been found in proficient print readers (e.g., Daneman & Newson, 1992; Slowiaczek & Clifton, 1980), supporting Steinman et al.'s (2006) belief that braille readers and print readers may experience similar stages of reading. Millar's second study was less conclusive; she found that beginning braille readers were affected significantly more than proficient braille readers (1990, Study 2). One interpretation is that beginning readers were differentially affected by suppression: students in Group 4 were unable to read the story when required to make unrelated speech sounds and were, therefore, unable to answer comprehension questions, whereas students in Group 1 did not perform significantly differently than in other conditions. This difference between groups suggests that beginning readers rely on subvocalization more than proficient readers do when reading for meaning. An alternative explanation is that proficient braille readers have incorporated subvocalization as a fundamental strategy, making it more difficult to suppress. It is also possible that one or more of the five participants in Group 4 did not subvocalize; with such small groups, the mean would have been substantially affected by the performance of a single student.

Contracted braille has no English print equivalent, so no direct comparison with print reading is possible when researchers examine differences between uncontracted and contracted braille. Although Harley and Rawls (1970) reported that contracted braille was better than uncontracted or phonemic braille materials when using an analytic or synthetic approach to teaching reading, the tests administered at the end of the year were transcribed in contracted braille for the contracted and phonemic groups, possibly giving the contracted group an advantage. Because the uncontracted group was significantly different from the other two groups, data from that group were not used in the analysis; the results therefore compared only contracted braille with phonemic braille. Also, because only significant findings were reported, it is not known whether the researchers tested whether the contracted-synthetic approach was superior to the contracted-analytic approach.

Hong and Erin (2004) compared students who learned to read using different kinds of braille, and their results seem contrary to the ABC braille study finding that students using uncontracted braille had poorer vocabulary and lower reading levels than participants in similar grades using contracted braille (Wall Emerson et al., 2009). However, all of Hong and Erin's (2004) participants lost their vision before entering school and used contracted braille at the time of the study. Their groups were based on which code participants learned first, suggesting that students who now read contracted braille perform comparably regardless of which braille code they first learned, as long as they learned to read using braille.

URL: //www.sciencedirect.com/science/article/pii/B9780124200395000046

Social Cognitive Neuroscience, Cognitive Neuroscience, Clinical Brain Mapping

C.J. Price, in Brain Mapping, 2015

What Is Unique About Reading in Functional Terms?

As described in the preceding text, reading accesses the language system from visual input. So let us start by considering how it differs from auditory language processing. From this perspective, reading is unique because it requires visual processing, orthographic knowledge, and the ability to link written text to both semantics and phonology. We can also compare reading to object naming, which is another language task that accesses semantics and phonology from visual inputs. Reading differs from object naming in the following ways. First, it requires orthographic knowledge. Second, by virtue of this orthographic knowledge, it is possible to access phonology from unfamiliar words (like 'trank') that do not have semantic associations but do have phonological cues in their parts (i.e., /t/r/a/n/k). Put another way, reading can access phonology either directly (orthography to phonology) or indirectly (orthography to semantics to phonology), whereas object naming must rely on the indirect semantic route because the visual parts of objects do not carry phonological cues. Third, reading can only access semantics at the whole-word (lexical) level because the parts of words (the sublexical level) do not carry semantic cues, whereas the parts of many objects do. This in turn increases the demands on the links between orthography and phonology even when the text has meaning, because access to lexical phonology helps access to the meaning of the word (phonology to semantics). Put another way, access to meaning during reading can occur either directly via orthography to semantics or indirectly from orthography to phonology to semantics. Fourth, many different word types can be read (not just object names), and words can be combined into sentences and stories. This makes reading much more useful and interesting than object naming. Therefore, we do it much more often, and this practice leads to highly proficient speeds that are not attainable for object naming.
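
A minimal, hypothetical sketch may help make the two routes concrete. The grapheme-to-phoneme table and mini-lexicon below are toy assumptions introduced only for illustration (they are not from the chapter), and real models implement both routes far more richly.

```python
# Toy illustration (not the author's model): two ways of deriving phonology from print,
# mirroring the direct and indirect routes described above.

# Hypothetical grapheme-to-phoneme correspondences, sufficient for this example only.
GPC_RULES = {"t": "t", "r": "r", "a": "æ", "n": "n", "k": "k", "b": "b", "e": "ɛ", "d": "d"}

# Hypothetical mini-lexicon linking a known spelling to meaning and pronunciation.
LEXICON = {"bed": {"meaning": "furniture for sleeping", "phonology": "bɛd"}}

def direct_route(letter_string):
    """Assemble phonology grapheme by grapheme; works even for pseudowords like 'trank'."""
    return "".join(GPC_RULES.get(ch, "?") for ch in letter_string)

def indirect_route(letter_string):
    """Go through the lexical/semantic entry; fails for strings that have no meaning."""
    entry = LEXICON.get(letter_string)
    return entry["phonology"] if entry else None

print(direct_route("trank"))    # pseudoword: pronounceable via the direct route
print(indirect_route("trank"))  # None: no semantic entry, so the indirect route fails
print(indirect_route("bed"))    # familiar word: retrievable via meaning, as in object naming
```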

To summarize, the comparison of reading to auditory language processing and object naming reveals two functions that are unique to reading: (A) orthographic knowledge and (B) the direct translation of visual inputs to phonology in the absence of semantic cues. Other reading functions overlap with those used for object naming (e.g., accessing semantics from vision), auditory comprehension (e.g., accessing semantics from phonology), and speech (e.g., rapid production of sentences). We now focus on the two functions identified here as unique to reading.

URL: //www.sciencedirect.com/science/article/pii/B9780123970251003572

Functional Components of Reading with Reference to Reading Chinese

Che Kan Leong, in Cognition, Intelligence, and Achievement, 2015

Summary and Conclusions

In this chapter I have reviewed evidence of the roles of the components of phonology, orthography, and morphology and their interrelation in reading Chinese characters and words. Many topics are not covered, such as syntactic processing, reading comprehension, the contribution of verbal working memory, and reading fluency. On the whole, phonological awareness, subcharacter processing, orthographic processing, and morphological processing explain well the nature of learning to read Chinese characters and words.

Tong and McBride-Chang (2010) found these four components to underpin reading Chinese words in their detailed confirmatory factor analytic study of Chinese kindergarten, second, and fifth graders. A two-factor model of the latent constructs of oral language metalinguistic skills and orthographic processing was shown to fit the kindergartners’ data. The four-component model of phonological awareness, subcharacter processing, orthographic processing, and morphological processing was the best fitting one for the second graders. For the fifth graders, Chinese word reading was increasingly driven by meaning, and phonological processing was shown to be separated from the subcharacter, orthographic, and morphological processing components. Tong and McBride-Chang suggested these results showed a movement from print knowledge to general lexical knowledge. From a tiered intervention perspective, Ho et al. (2012) found slightly different core components in reading Chinese. They proposed four core components—oral language, orthographic skills, morphological awareness, and syntactic skills—as important for learning and teaching Chinese. From both research and intervention perspectives, there is convergence between the Tong and McBride-Chang and the Ho et al. results. Discussion in this chapter has provided further details of the role of the interrelated components.

Continued investigation of the involvement of phonology, orthography, and morphology in reading Chinese words adds to and goes beyond the rich results found with alphabetic writing systems (Share, 2008). Comparative reading research across writing systems helps to discover universal and specific principles in reading and moves us toward a reading science that is universal (Perfetti, 2011; Perfetti et al., 2013).

URL: //www.sciencedirect.com/science/article/pii/B9780124103887000099

Reading, Neural Basis of

J.A. Fiez, in International Encyclopedia of the Social & Behavioral Sciences, 2001

2 The Representation of Orthography

The organization of the visual system has been a topic of longstanding interest in the neuroscience and cognitive neuroscience research communities. One question that is highly relevant for studies of reading is whether different regions of the visual object recognition system (located in the ventral occipital and temporal cortex) are specialized for processing different types of stimuli, such as faces or text. In the domain of reading, the search for a region that selectively processes text has generally involved two types of comparisons. The first compares words with nameable pictures; this allows regions that are specifically involved in orthographic processing to be teased apart from regions that are more generally involved in the naming of meaningful stimuli. The second compares different types of orthographic stimuli. Most often, words and pseudowords (pronounceable nonwords, such as 'floop') have been compared with nonword stimuli that are visually similar to words (e.g., stimuli composed of artificial letters, or 'false fonts,' or random consonant strings). This allows regions that are specifically involved in the representation of words to be dissociated from regions that are involved in lower-level processes, such as letter recognition and the detection of line segments.

Several regions that may mediate orthographically specific processing have been proposed. The most promising 'word form area' is in the left middle fusiform gyrus, near the border between the temporal and occipital cortex. Activation in the fusiform gyrus is found reliably when subjects read words aloud (Fig. 1), and one reason to suspect that it may be involved in orthographic processing is the fact that it lies near areas associated with simple visual processing and visual object recognition. More direct evidence comes from studies that have found responses in the fusiform to words and pseudowords, but not to visually similar nonword stimuli, such as random letter strings (Fiez and Petersen 1998). Evidence that this area is influenced by the degree to which an orthographic stimulus functions as a whole-word (lexical) unit comes from studies in Japanese readers. The Japanese writing system contains two types of characters: Kana characters represent syllables, while Kanji characters represent entire words. Significant activation in the fusiform is found when subjects read aloud words written with Kanji versus Kana characters, and when subjects are given a task that requires them to access the orthographic form of a Kanji character, such as writing the Kanji equivalent of a word written in Kana (Nakamura et al. 2000, Sakurai et al. 2000).

Data from neuropsychology also point toward the important role of occipitotemporal areas, such as the fusiform gyrus, in orthographic processing. Subjects with brain damage in this area can exhibit the syndrome of pure alexia (Patterson and Lambon-Ralph 1999). In pure alexia, the ability rapidly to access or represent the orthographic form of an entire word, or the ability to use orthographic information to access phonological and semantic information, is lost. As a result, subjects use a laborious letter-by-letter strategy to identify and read aloud a word. Subjects with pure alexia generally are able to perform simple visual perceptual tasks, recognize and name visual objects, and write words normally. Japanese subjects can also exhibit alexia following damage to the fusiform, but long-lasting impairments are typically limited to the reading and writing of Kanji characters (Iwata 1984).

Results from ERP and MEG studies indicate that there are differences in the electromagnetic current evoked by words and wordlike stimuli, as compared to nonword stimuli such as letter strings (Cohen et al. 2000; Tarkiainen et al. 2000). The temporal information gained from these studies indicates that orthographically specific processing at the word level begins 100–200 milliseconds (ms) after the onset of a stimulus. Attempts to localize the source of the waveform differences have generally implicated the left basal temporal cortex. More specific localization has come from a study in which recordings were done using an electrode placed in the fusiform gyrus of patients prior to brain surgery. This study found neurons that selectively responded to orthographic stimuli (Nobre et al. 1994).

While there is a significant amount of accumulated evidence across multiple methodologies that points towards a role for the fusiform gyrus in orthographic processing, the precise function of this area remains a point of debate. One issue is whether domain-specific regions—‘face areas’ or ‘word form areas’—exist in the visual system. Since reading is a skill acquired very recently on the evolutionary time-scale, it is unlikely that any brain region would be innately specified to represent word forms. Rather, any specialization is most likely to occur as a result of extensive experience with a relatively unique class of visual stimuli. An alternative point of view is that the analysis of orthography places heavy demands on particular types of visual analysis, such as the ability to resolve input with high-frequency spatial information, or to represent a set of distinct objects that are perceptually very similar. Portions of the fusiform may be particularly suited for these types of analysis (either innately or through experience), and thus may be used to a greater extent for processing orthographic versus non-orthographic stimuli.

A second issue is the level of information that may be represented in the fusiform. Recent evidence indicates that letter-by-letter reading is typically accompanied by visual-perceptual difficulties below the level of the whole word. Deficits in lower-level processing could impair the access to or generation of higher-level orthographic representations (Behrmann et al. 1998). Conversely, imaging evidence reviewed more extensively below indicates that portions of the fusiform are active during linguistic tasks that are not thought to require access to orthographic information. Activation associated with orthographic processing may thus reflect more abstract lexical (whole word) or semantic information that can be accessed through either printed words or speech.

URL: //www.sciencedirect.com/science/article/pii/B0080430767035099

Neuroscience Bases of Learning

M.H. Immordino-Yang, K.W. Fischer, in International Encyclopedia of Education (Third Edition), 2010

Neural Networks for Reading

Another area of concentrated research interest is the study of reading development, in both typically developing and dyslexic children. Acquiring literacy skills impacts the functional organization of the brain, differentially recruiting networks for language, visual, and sound representation in both hemispheres, as well as increasing the amount of white-matter tissue connecting brain areas. Work on individual differences in the cognitive paths to reading has enriched the interpretation of the neurological research (e.g., Knight and Fischer, 1992), and helped to bridge the gap between the neuroscience findings and classroom practice (Katzir and Pare-Blagoev, 2006; Wolf and O'Brien, 2006). In dyslexic readers, progress is being made toward better understanding of the contributions of rapid phonological processing (Gaab et al., 2007), orthographic processing (Bitan et al., 2007), and visual processing to reading behaviors, as well as to thinking in other domains (Boets et al., 2008). For example, the visual field of dyslexics may show more sensitivity in the periphery and less in the fovea compared to nondyslexics, leading to special talents in some dyslexics for diffuse-pattern recognition (Schneps et al., 2007). Most recently, research looking at developmental differences in neurological networks for reading across cultures has begun to appear (e.g., Cao et al., 2009), which ultimately may contribute to knowledge about how different kinds of reading experiences shape the brain.

The neural networks for learning reading and math have important implications for education, as the most effective lessons implicitly scaffold the development of brain systems responsible for the various component skills. For example, successful math curricula help students to connect skills for calculation with those for the representation of quantity, through scaffolding the development of mental structures like the number line (Carey and Sarnecka, 2006; Griffin, 2004; Le Corre et al., 2006). While different students will show different propensities for the component skills, all students will ultimately need to functionally connect the brain systems for quantity and calculation to be successful in math.

URL: //www.sciencedirect.com/science/article/pii/B9780080448947005005

Adult and Second Language Learning

Denise H. Wu, Talat Bulut, in Psychology of Learning and Motivation, 2020

3.1 Orthographic systems

All spoken languages code meaning by using distinct units of sound called phonemes that are available in the relevant language's inventory. However, writing systems differ in terms of how they represent the smallest contrastive units in writing (graphemes) to establish correspondence between written symbol and meaning. Graphemes may represent phonemes as in English, or syllables as in Japanese Kana, or morphosyllables as in Chinese (Frost, 2005). The writing systems (orthographies) utilized across the world are classified accordingly into alphabetic, syllabic and logographic systems. For the sake of brevity, we will broadly categorize writing systems under alphabetic and non-alphabetic orthographies.

Studies of orthographic processing in alphabetic languages have generally investigated how the systematic correspondence between word forms and sounds—namely, regularity and consistency of a grapheme mapping to a phoneme—affects word recognition. Regularity depends on grapheme-to-phoneme conversion (GPC) rules, which specify the typical, most common pronunciations of individual graphemes. Take the English word silk as an example: The grapheme i typically corresponds to the phoneme /ɪ/ in English, and since this typical pronunciation is followed, this word is considered to be regular. However, English has a relatively “deep” orthography among alphabetic languages (Frost, Katz, & Bentin, 1987), as an English grapheme does not always map to a single phoneme; for example, the grapheme i corresponds to the phoneme /aɪ/ in the word pint. As a result, certain words have irregular pronunciations according to the GPC rules. Another way to look at the correspondence between orthography and phonology in alphabetic languages is to consider pronunciation consistency of clusters of letters (or word bodies) embedded in different words. Based on this criterion, the word pint is also inconsistent, since the pronunciation of the word body “int” in this word does not agree with the pronunciation of the same word body in other English words such as mint or hint, which are regarded as consistent words. Because many irregular words are also inconsistent, there has been considerable confusion in the literature aimed at pinpointing the effect of one or the other (Cortese & Simpson, 2000).
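
A small, hypothetical sketch can make the regularity/consistency distinction concrete. The grapheme rule, mini-lexicon, word-body segmentation, and majority criterion below are illustrative assumptions rather than the measures used in the cited studies; in the literature, consistency is usually treated as a graded variable.

```python
# Toy sketch: regularity concerns single-grapheme GPC rules; consistency concerns
# agreement of a word body's pronunciation across the words that contain it.

GPC_I = "ɪ"  # assumed typical pronunciation of the grapheme "i" (illustration only)

# Hypothetical mini-lexicon: word -> pronunciation of its word body (vowel + coda).
BODY_PRON = {"silk": "ɪlk", "milk": "ɪlk", "pint": "aɪnt", "mint": "ɪnt", "hint": "ɪnt"}

def word_body(word):
    """Everything from the first vowel letter onward; crude but adequate here."""
    for i, ch in enumerate(word):
        if ch in "aeiou":
            return word[i:]
    return word

def is_regular(word):
    """Regular if the grapheme 'i' receives its typical pronunciation /ɪ/ in this word."""
    return BODY_PRON[word].startswith(GPC_I)

def is_consistent(word):
    """Consistent (by a crude majority criterion) if most words sharing this word's
    body pronounce that body the way this word does."""
    body = word_body(word)
    mates = [w for w in BODY_PRON if word_body(w) == body]
    agreeing = [w for w in mates if BODY_PRON[w] == BODY_PRON[word]]
    return len(agreeing) / len(mates) > 0.5

for w in ("silk", "pint", "mint"):
    print(w, "regular:", is_regular(w), "consistent:", is_consistent(w))
# silk: regular and consistent; pint: irregular and inconsistent;
# mint: regular and (by majority) consistent, despite the disagreeing neighbor pint.
```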

URL: //www.sciencedirect.com/science/article/pii/S0079742120300013

A Common Neural Progression to Meaning in About a Third of a Second

Kara D. Federmeier, ... Danielle S. Dickson, in Neurobiology of Language, 2016

45.1.3 Auditory Word Processing

Whereas “the time locking of neural activity in the visual pathway is poor, its timing is sluggish, and its ability to follow fast transitions is limited” (Pratt, 2012), the auditory system is fast and exquisitely sensitive to timing. Nevertheless, semantic access seems to proceed with a remarkably similar timecourse in the two modalities.

One of the earliest language-related, albeit not language-specific, effects is an enhancement of the amplitude of the auditory N100 component, which peaks at approximately 100 ms and is part of the normal evoked response to acoustic onsets, offsets, and deviations. Although not acoustic onsets, word onsets in continuous speech elicit larger N100s than matched, non-onset sounds (Sanders & Neville, 2003). This effect seems to reflect increased temporally directed selective attention to less predictable information in an auditory stream (Astheimer & Sanders, 2011).

Paralleling the timecourse of orthographic processing for written words, phonological—but not yet semantic—processing of spoken words has been associated with ERP amplitude modulations between 250 and 300 ms. The Phonological Mapping Negativity (PMN) has been attributed to the detection of a mismatch between expected and realized phonological information, such as when an incoming phoneme violates a context-based or task-induced auditory/phonological expectation (e.g., Connolly & Phillips, 1994; Desroches, Newman, & Joanisse, 2009; Newman & Connolly, 2009). For example, in a task wherein participants were given an input to transform (e.g., “telk” or “hat”) and a target transformation of the initial phoneme (e.g., “m”), PMNs were larger when the onset phoneme of the probe stimulus mismatched the resulting expectation (e.g., for “melk” or “mat”). Importantly, PMN effects are obtained for both words and pseudowords (e.g., Newman & Connolly, 2009). The PMN has therefore been proposed to reflect phonological mapping processes that precede semantic access.
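
The logic of this transformation paradigm can be sketched in a few lines; the items are written here as letter strings rather than phonemes, and the functions below are illustrative assumptions rather than the procedure used in the cited studies.

```python
# Hypothetical sketch: the listener forms a phonological expectation by applying the
# instructed onset transformation, and the PMN indexes a mismatch between that
# expectation and the onset of the incoming probe.

def expected_form(input_item, target_onset):
    """Replace the initial phoneme of the input item with the instructed target onset."""
    return target_onset + input_item[1:]

def onset_mismatch(probe, input_item, target_onset):
    """True when the probe's onset violates the expectation (a larger PMN is predicted)."""
    return probe[0] != expected_form(input_item, target_onset)[0]

print(expected_form("telk", "m"))           # expectation: "melk"
print(onset_mismatch("melk", "telk", "m"))  # False: onset matches the expectation
print(onset_mismatch("belk", "telk", "m"))  # True: mismatching onset, larger PMN expected
```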

Effects associated with word level processing have been reported beginning at approximately 300 ms. Using a cross-modal word fragment priming task, Friedrich and colleagues (Friedrich, Kotz, Friederici, & Alter, 2004; Friedrich, Kotz, Friederici, & Gunter, 2004; Friedrich, Schild, & Röder, 2009) found larger positivities peaking at approximately 350 ms (P350) over left frontal electrode sites for words that mismatched (versus matched) the visual fragment prime. Partially mismatching targets elicited an intermediate response. The authors interpret their results as reflecting activation of abstract word forms. Effects of word repetition and word frequency have also been reported in this time window in studies using MEG (the M350; Pylkkänen & Marantz, 2003) and have been linked to sources in the left superior temporal lobe (Pylkkänen et al., 2002).

P/M350 modulations are coincident in time with (but are of opposite polarity from) effects of repetition, frequency, and phonological priming on the N400. The N400 time window is also when effects of semantic manipulations of many kinds (semantic priming, context effects, etc.) are first observed for auditory words (see review in Kutas & Federmeier, 2011). As with the visual modality, the N400 elicited by auditory stimuli is not specific to linguistically relevant inputs (syllables or words), because meaningful nonlinguistic environmental sounds also elicit N400s and N400 effects (Van Petten & Rheinfelder, 1995).

URL: //www.sciencedirect.com/science/article/pii/B9780124077942000456

Language and Lexical Processing☆

Randi C. Martin, ... Hoang Vu, in Reference Module in Neuroscience and Biobehavioral Psychology, 2017

Neuropsychological Evidence on Lexical Processing

Cognitive neuropsychological studies of brain-damaged patients provide strong evidence about lexical processing. Evidence for independent modules (e.g., recognition vs. production, written vs. spoken) in lexical processing can be obtained from patients who can competently produce or understand some types of linguistic information but not others. For instance, pure word deafness (PWD) is a rare condition of severely impaired speech perception despite relatively intact abilities in other domains of language (e.g., writing, reading, and speaking) and in recognizing non-speech sounds. These dissociations make PWD a compelling syndrome in support of the independent-modules view. Moreover, double dissociations between written-word and spoken-word processing have been reported in both word recognition and word production. For example, some patients who show a deficit in recognizing printed words can nonetheless recognize spoken words, whereas other patients show the reverse, even after controlling for deficits in basic aspects of visual/auditory perception (e.g., some patients can recognize nonverbal materials in both modalities) or motor processes. These results demonstrate that the modality-specific deficit lies specifically in the phonological or orthographic processing systems. Further evidence for the separation of phonology and orthography comes from patients who make semantic errors in only one output modality: for example, producing "pillow" as the name for a picture of a bed when speaking but producing "bed" correctly in writing. Such a pattern would not be expected if it were necessary to use the phonological form to guide spelling.

On the other hand, evidence that the same module is involved in different aspects of lexical processing (e.g., for both spoken and written words) can be obtained by showing strong correlations between the factors affecting performance on each aspect. For example, some patients have comprehension difficulties for certain words or certain semantic categories (e.g., animals or tools) irrespective of the input modality (e.g., spoken or written). Such results demonstrate that the same lexical–semantic system is involved in comprehending both spoken and written words, because if there were separate semantic representations for spoken and written words, it would be highly unlikely that the same categories of words would be affected for both modalities.

Among the neuropsychological case studies that have shown significant lexical processing problems, category-specific semantic deficits stand out because they raise interesting questions concerning the nature of semantic representations and their organization in the brain. Patients with category-specific semantic deficits present with disproportionate or even selective impairments for one semantic category relative to other categories. These deficits tend to occur in the categories of animals, plants, and artifacts (i.e., man-made objects). Although more specific deficits have been observed, the majority of patients have greater impairments for living things than for nonliving things (e.g., man-made objects). It has been demonstrated that patients with category-specific semantic deficits have impairments in conceptual/semantic knowledge, as their deficits do not depend on the type of stimulus being processed or on the modality of presentation (input or output). One possible explanation of category-specific semantic deficits is motivated by the fact that semantic properties of an object tend to be interrelated (e.g., having eyes usually co-occurs with having a nose) and that objects in the same superordinate category (e.g., tools or fish) share properties. If constellations of shared properties are organized together in the brain, then damage to a region in which semantic properties are stored will produce deficits that affect certain categories. However, a difficulty with this account is that it does not explain why deficits tend to occur in these three categories (i.e., animals, plants, and artifacts): any constellation of shared properties should be subject to damage, so we should also observe patients with highly specific deficits, such as a deficit for vehicles but not for other artifacts.

Another possible explanation for category-specific deficits is that there are two separate semantic systems in the brain: one that represents sensory knowledge and another that represents functional knowledge (i.e., the functions that objects perform). Researchers have argued that the ability to recognize or name living things mainly depends on sensory information, whereas the ability to recognize or name nonliving things mainly depends on functional information. Consequently, damage to the sensory knowledge system results in a deficit specific to living things, whereas damage to the functional system results in a deficit specific to nonliving things. The Sensory/Functional theory is supported by some neuropsychological evidence. For instance, some patients with impairment in processing visual properties (e.g., the shape and texture of an object) have been shown to have disproportionate impairment in recognizing or naming living things. However, findings from some patients are problematic for this explanation. For example, some patients have been reported who have semantic deficits for only some subsets of living things (e.g., fruits and vegetables), and others have been reported who have a disruption of knowledge of both sensory and functional attributes of animals but a preservation of both sensory and functional knowledge for artifacts. The organization of semantic knowledge in the brain thus remains an open question.

In addition, evidence from speech production deficits also provides important information about the nature of grammatical representations in the brain. Deficits specific to certain grammatical categories have been reported: some patients have selective difficulties in the production of function words (i.e., words such as prepositions, pronouns, and auxiliary verbs that play primarily a grammatical role in a sentence). Such difficulties are remarkable given that these grammatical words are often quite short and easy to pronounce (e.g., "to" and "will") and are the most frequently occurring words in the language. Some patients have demonstrated greater difficulty in producing nouns than verbs, and others have demonstrated the reverse. As with the semantic category deficits, there is no consensus on the explanation for these grammatical class deficits. In some cases, these apparent grammatical class effects have a semantic basis. For example, better production of nouns than verbs and better production of verbs than grammatical words may be observed because the patient is better able to produce more concrete words. However, for some patients, it appears that grammatical class effects cannot be reduced to a semantic basis; consequently, these deficits suggest that at some level in the production system words are distinguished neurally with regard to the grammatical role that they play in a sentence. The separability of grammatical information from other types of lexical information is supported by other findings showing that some patients with picture naming deficits can provide grammatical information about a word, such as its gender (in a language such as Italian or French), even though they are unable to retrieve any of the phonemes in the word.

Last, regarding the debate about whether word production involves discrete stages (i.e., strictly modular processing) or interactive activation (i.e., cascaded processing), the word-production errors of aphasic patients are better accounted for by an interactive approach. As discussed earlier, a number of studies investigating the speech errors produced by healthy populations also support the interactive approach. Such an approach provides a means of accounting for some patients' tendency to produce words phonologically related to a target word (so-called "formal errors," such as saying "mat" for "cat") and for some patients' tendency to produce a large proportion of errors that are both semantically and phonologically related to a target (saying "rat" for "cat"). These observations support the argument that activation spreads to all phonological units before lexical selection is completed.

Neuropsychological research with brain-damaged patients, more so than research with healthy subjects, has addressed the issue of the relation between the phonological processing systems involved in speech perception and production and the relation between the orthographic systems involved in reading and writing. Some patients show an excellent ability to recognize and remember input phonological forms (e.g., being able to decide whether a spoken probe word rhymes with any of the words in a preceding list) but have great difficulty in producing output phonological forms (e.g., naming a picture). Other patients show the reverse pattern of great difficulty in holding onto input phonological forms (e.g., performing at chance on the rhyme probe task, in which subjects are required to judge whether the probe word rhymes with any of the words in the list) but preserved speech production. Similar double dissociations have been documented for orthographic processing. Thus, input and output forms in both speech and writing appear to be represented in different brain areas. However, although the input and output forms may be different, they are linked to each other. A close coupling between input and output forms appears to be involved in the development of speech production and in the maintenance of accurate speech production throughout adulthood.

URL: //www.sciencedirect.com/science/article/pii/B9780128093245030789

Cross-linguistic perspectives on second language reading

Sheila Cira Chung, ... Esther Geva, in Journal of Neurolinguistics, 2019

2.3 Transfer of orthographic processing

Orthographic processing refers to the “ability to form, store, and access the orthographic representation” of words (Stanovich & West, 1989, p. 404). It is a print-based skill related to word-level reading among monolingual and bilingual children (Cunningham, 2006; Deacon, Wade-Woolley, & Kirby, 2009; Roman, Kirby, Parrila, Wade-Woolley, & Deacon, 2009). Traditionally in bilingual research, studies on students learning language pairings that do not share the same alphabet or writing system have converged on the conclusion that orthographic processing is language specific. To illustrate, studies on Chinese-speaking students learning English did not observe a cross-language effect of orthographic processing on reading between the two languages (Gottardo et al., 2001; Keung & Ho, 2009). Similar findings have emerged in studies involving English learners whose L1 is Russian (Abu-Rabia, 2001), Hebrew (Abu-Rabia, 1997), Korean (Wang, Park, et al., 2006), and Persian (Arab-Moghaddam & Sénéchal, 2001).

Recently, a growing body of research has yielded evidence of transfer of orthographic processing to reading in language pairs represented by the same alphabetic script such as English-French and English-Spanish (Deacon, Chen, Luo, & Ramírez, 2013a, 2013b, 2009; Commissaire, Duncan, & Casalis, 2011; Commissaire, Pasquarella, Chen, & Deacon, 2014; Sun-Alperin & Wang, 2011). For example, Deacon et al. (2009, 2013b) showed a bidirectional cross-language relation between lexical orthographic processing and word reading among grade 1 and 2 French immersion students. Similarly, there is evidence that Spanish orthographic processing contributed to English word reading among Spanish-speaking ELL students in grades 2 and 3 (Sun-Alperin & Wang, 2011), as well as in grades 4 and 7 (Deacon et al., 2013a).

With respect to spelling, Chung et al. (2017) reported that for grade 1 French immersion children, French orthographic processing was related to English spelling, but English orthographic processing did not predict French spelling. Given that English deviates from letter-sound correspondences more frequently than French, it is possible that English spelling requires children to rely on orthographic processing skills across languages, whereas French orthographic skills alone are sufficient to support French spelling. Sun-Alperin and Wang (2011), however, did not find transfer of orthographic processing to spelling in either direction among Spanish-speaking ELLs. Finally, at the construct level, cross-language correlations for orthographic processing measures have been observed among native French-speaking students learning English in grades 6 and 8 (Commissaire et al., 2011).

Qualitative studies on spelling errors have corroborated transfer of orthographic processing to spelling among L2 learners (for a review, see Figueredo, 2006). Rolla San Francisco and colleagues (Rolla San Francisco, Carlo, August, & Snow, 2006a; Rolla San Francisco, Mo, Carlo, August, & Snow, 2006b) examined English nonword spelling in Spanish-English bilingual and English monolingual children receiving either English or Spanish instruction in kindergarten and grade 1. In both studies, Spanish-instructed students were more likely than English-instructed students to produce spellings that were orthographically acceptable in Spanish, but incorrect in English. For example, in Spanish, /eI/ can be spelled either ei or ey. For the target English nonword nade, student responses of neid or neyd indicated Spanish influence. Similar evidence of L1 transfer to L2 spelling, and vice versa, has been observed in different L1-L2 combinations, including English-French (Chung, 2014; Joy, 2011; Morris, 2001), Arabic-English (Ibrahim, 1978), and Chinese-English (Wang & Geva, 2003).

To our knowledge, only one study examined the causal relation in cross-language transfer of orthographic processing. Pasquarella et al. (2014) demonstrated that French word reading in grade 1 predicted gains in English orthographic processing from grade 1 to grade 2 among French immersion students; however, French orthographic processing in grade 1 did not predict gains in English word reading. A similar temporal relation from word reading to orthographic processing was observed in an earlier study by Deacon, Benere, and Castles (2012) among monolingual English-speaking children. It seems that the orthographic choice tasks used in these studies (e.g., identifying which word looks more like a real word in a pair of homophones, dream-dreem, or pseudo-homophones, baff-bbaf) measure children's store of orthographic representations, which is an outcome rather than a predictor of word reading. Future research may consider tasks that tap into children's ability to acquire orthographic patterns, such as the orthographic learning task (Share, 1999).
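
To make the orthographic choice task described above concrete, here is a minimal, hypothetical sketch of an item list and its scoring. The item pairs come from the examples in the text; the response format and scoring function are assumptions for illustration, not the procedure of the cited studies.

```python
# Hypothetical orthographic choice items: on each trial the child picks which of two
# letter strings "looks more like a real word". The score indexes the child's store
# of orthographic representations.

ITEMS = [
    ("dream", "dreem"),  # real word vs homophonic misspelling
    ("baff", "bbaf"),    # pseudo-homophone pair: neither is a word, but "baff" is more word-like
]

def score(responses):
    """Proportion of trials on which the child chose the more word-like spelling."""
    hits = sum(choice == target for (target, _foil), choice in zip(ITEMS, responses))
    return hits / len(ITEMS)

print(score(["dream", "bbaf"]))  # correct on 1 of 2 items -> 0.5
```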

Taken together, there is converging evidence of cross-language transfer of orthographic processing when the languages under acquisition share the same Roman alphabet (e.g., English-French; Chung et al., 2017; Commissaire et al., 2011; Deacon et al., 2009; Pasquarella et al., 2014; English-Spanish; Sun-Alperin & Wang, 2011). On the other hand, no evidence has been reported between languages represented by different scripts (e.g., Abu-Rabia, 2001; Gottardo et al., 2001). Due to the lack of overlapping orthographic units, languages represented by different scripts may require different underlying mechanisms for orthographic processing (Deacon et al., 2009). Thus, transfer of orthographic processing appears to be partially determined by the typological distance in the language pairings. However, the direction of cross-language transfer, as well as the temporal relation between orthographic processing and reading remain unclear.

URL: //www.sciencedirect.com/science/article/pii/S0911604417300532

Task dependent lexicality effects support interactive models of reading: A meta-analytic neuroimaging review

Chris McNorgan, ... James R. Booth, in Neuropsychologia, 2015

1 Introduction

Reading entails the decoding of visual orthographic representations into a phonological representation. The ease with which skilled readers map between these very different representational systems is the product of a great deal of explicit and implicit learning. In alphabetic languages, on which we focus here, a fluent reader will have spent considerable time undertaking explicit instruction in the rules for mapping letters and letter combinations to existing verbal representations (i.e., the alphabetic principle). Models of reading development and disorders agree that phonologically decoding a particular string of letters depends on whether or not those letters map to a word with which an individual is familiar. Lexicality manipulations are consequently an important tool for investigating reading processes. Lexicality refers to whether a letter string represents a word with an associated meaning (e.g., TRAY). Letter strings that do not represent words can be either pseudowords (e.g., TAYR), which are pronounceable strings of letters sharing characteristics of legal words but without an associated meaning, or non-words (e.g., RTYA), which have no associated meaning and additionally violate the spelling rules for a language. Lexicality presumably influences many aspects of language processing and may consequently be investigated using any number of experimental tasks. Of these, however, the lexical decision task (LDT) and naming (overt or covert) dominate the neuroimaging literature (Katz et al., 2012).
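
As a rough, hypothetical illustration of this three-way lexicality distinction, the sketch below classifies the example strings TRAY, TAYR, and RTYA. The mini-lexicon, vowel set, and list of legal onsets are toy assumptions, not part of any model discussed in the article.

```python
# Toy classifier for lexicality status, following the TRAY / TAYR / RTYA examples.

LEXICON = {"tray", "mint", "bed"}          # hypothetical mini-lexicon of known words
VOWELS = set("aeiouy")
LEGAL_ONSETS = {"", "b", "m", "r", "t", "tr", "st", "pl"}  # illustrative subset only

def onset(s):
    """Consonant letters before the first vowel letter."""
    i = 0
    while i < len(s) and s[i] not in VOWELS:
        i += 1
    return s[:i]

def lexicality(s):
    s = s.lower()
    if s in LEXICON:
        return "word"        # maps to an associated meaning
    if onset(s) in LEGAL_ONSETS and any(ch in VOWELS for ch in s):
        return "pseudoword"  # pronounceable but has no meaning
    return "non-word"        # violates the spelling rules of the language

for item in ("TRAY", "TAYR", "RTYA"):
    print(item, "->", lexicality(item))
# TRAY -> word, TAYR -> pseudoword, RTYA -> non-word
```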

1.1 LDT and naming task characteristics

In the context of orthographic processing, the LDT requires participants to indicate whether a given letter string is associated with a real word. Participants are not expected to retrieve or even possess robust semantic representations for these words, but must merely be aware that some such representation exists, and this task has consequently been described as a signal detection process (Jacobs et al., 2003). Not all models of reading agree on the degree to which the LDT relies on semantic knowledge. For example, in the dual route cascaded (DRC) model of reading aloud (Coltheart et al., 2001), lexicality decisions are based on the outcome of a lookup process in the orthographic lexicon, and may proceed even if the semantic system is removed entirely (Coltheart et al., 2010). A contrasting perspective, taken by parallel distributed processing (PDP) models such as the triangle model (Seidenberg and McClelland, 1989), is that there are no lexicons (Dilkina et al., 2010). Rather, reading in these models is the product of the dynamic interaction of orthographic, phonological and semantic processing systems (Harm and Seidenberg, 2004). The centrality of these interactions to the triangle model of reading, which assumes that skilled reading is the dynamic product of interactions between these systems, suggests this model as a framework for their interpretation. Unfortunately, only one study to date (Harm and Seidenberg, 2004) has fully implemented the triangle model (i.e., containing semantic, orthographic and phonological representational units), and this study did not explore the interaction between task and lexicality. Within the triangle model, the presence or absence of an association between a particular orthographic/phonological pattern and a semantic representation determines the lexicality status of a token. We take the position that the LDT is, by definition, tied to semantic memory, as even in the DRC model, lexical entries exist only for letter strings with underlying semantic representations. This position is supported behaviorally, as the LDT appears to automatically activate semantic representations, if available, though this activation may decay quickly without active maintenance (Neely et al., 2010). Moreover, compared to naming, LDT performance appears to be more dependent on semantic properties of words (Balota et al., 2004; Yap and Balota, 2009). We reiterate for clarity, however, that different models make different assumptions regarding the nature and degree of support that semantic knowledge provides. Within the DRC, for example, the semantic system may provide input into the phonological and orthographic lexicons, providing a basis for semantic priming effects in LDT and naming tasks (Blazely et al., 2005), but it is not strictly required for either task. Moreover, simulations of semantic processing in these tasks within the DRC do not exist. Thus, it is unclear whether the DRC predicts that the LDT should be particularly sensitive to semantic input.

Naming, whether overt or covert, requires participants to transform a given letter string into the corresponding phonological representation and, in the case of overt naming, or "reading aloud", additionally generate the articulatory motor sequences required to verbalize that representation. Because the spelling-to-sound mappings for pseudowords are unfamiliar, reading aloud should be more difficult for these items. The triangle model assumes that naming taps semantic representations, and the neuroimaging literature supports this argument (Binder et al., 2005). However, we assume that naming task performance is more tightly bound to processing within the phono-articulatory system, and this too is borne out behaviorally: Balota and colleagues carried out hierarchical regression analyses of naming and LDT latencies for monosyllabic (Balota et al., 2004) and multisyllabic words (Yap and Balota, 2009). These studies, which examined the influences of phonological (e.g., onset phoneme characteristics), lexical (e.g., orthographic neighborhood size), and semantic (e.g., imageability) features, show that phonological features and word length (both characteristics relevant to pronunciation) are more predictive of naming performance, whereas semantic variables are more predictive of LDT performance.
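
The blockwise logic of these hierarchical regressions can be sketched as follows. The predictors, generating weights, and data are synthetic placeholders (not the items or results of Balota et al., 2004 or Yap and Balota, 2009); the sketch shows only how incremental variance is attributed to each predictor block.

```python
# Hypothetical sketch of hierarchical (blockwise) regression on item-level latencies.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_items = 500

# Placeholder item-level predictor blocks (standing in for onset characteristics and
# word length, orthographic neighborhood size, imageability, and so on).
phonological = rng.normal(size=(n_items, 2))  # block 1: phonological/length features
lexical = rng.normal(size=(n_items, 2))       # block 2: lexical features
semantic = rng.normal(size=(n_items, 1))      # block 3: semantic features

# Synthetic naming latencies with arbitrary generating weights (phonological features
# weighted most heavily, in the spirit of the naming results described above).
naming_rt = (600
             + phonological @ np.array([40.0, 25.0])
             + lexical @ np.array([10.0, 5.0])
             + 5.0 * semantic[:, 0]
             + rng.normal(scale=30.0, size=n_items))

def incremental_r2(blocks, y):
    """R-squared gained as each predictor block is entered, in order."""
    gains, r2, X = [], 0.0, np.empty((len(y), 0))
    for block in blocks:
        X = np.hstack([X, block])
        new_r2 = LinearRegression().fit(X, y).score(X, y)
        gains.append(new_r2 - r2)
        r2 = new_r2
    return gains

print(incremental_r2([phonological, lexical, semantic], naming_rt))
```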

Because only words have associated semantic content, we predict increased activation for words relative to pseudowords in regions implicated in semantic processing, most pronounced for the LDT. Conversely, we predict increased pseudoword activation in phono-articulatory areas, reflecting the increased difficulty of making spelling-to-sound mappings for these items, and this should be most pronounced in naming.

To our knowledge, only Carreiras et al. (2007) have explored task by lexicality interactions, finding some evidence that lexicality effects are modulated by task. Naming was associated with greater left precentral gyrus activation than the LDT for the [Pseudowords>Words] contrast, which the authors argued reflects non-semantic phonological retrieval for pseudowords. This supports the argument that naming more strongly taps phonological processes and that these activations should be stronger for pseudowords. However, the LDT was associated with greater right inferior frontal gyrus (IFG) activation for words, which they argued reflected response inhibition for pseudowords, rather than semantic activation for words. Because processes related to response selection and attention have not been modeled within the triangle model, we will not speculate on this result. Carreiras et al. did, however, find greater activity for words than for pseudowords in a middle temporal region implicated in semantic processing (Binder et al., 2009), an effect that was numerically greater for the LDT. This leaves open the possibility of a subtle task by lexicality interaction within this region, or that the items used in this particular experiment were not ideally suited for eliciting robust semantic activation. A meta-analytic review of task and lexicality effects may thus reveal semantic-processing related interactions between lexicality and task in middle temporal regions.

1.2 Previous meta-analyses of lexicality effects

Reading in alphabetic languages involves the coordination of a network of brain regions that, broadly speaking, play specialized roles in supporting orthographic, phonological and semantic processing. The roles of individual brain regions, and of networks of regions, underlying these processes have been studied in great detail. Orthographic processing is attributed to bilateral occipitotemporal cortex and the left mid-fusiform gyrus. Phonological processing is attributed to left superior posterior temporal cortex, the temporoparietal junction, and the inferior frontal gyrus extending to premotor cortex. Finally, semantic processing is attributed to the anterior fusiform, the inferior and middle temporal gyri, and the anterior inferior frontal sulcus. Though a thorough summary of the literature supporting these functional assignments is beyond the scope of the present article, they follow from meta-analyses of the neuroimaging literature (Taylor et al., 2013), and are also consistent with a large body of patient studies (e.g., Damasio, 1992; Schwartz et al., 2009; Turkeltaub et al., 2013).

As argued earlier, lexicality effects provide insight into the effect of word knowledge on reading, and experimental manipulations involving words and pseudowords are commonly used. Three previous meta-analyses have examined the patterns of word and pseudoword activations across multiple tasks, including naming, lexical decision, phonological decision and semantic tasks. Jobard et al. (2003) and Cattinelli et al. (2013) used anatomical labels as a clustering mechanism, in contrast with the activation likelihood estimation (ALE) approach used by Taylor et al. (2013) and in the present study, which assesses inter-study concordance by measuring co-activations within Gaussian fields. There are many ways in which words and nonwords differ, and lexicality effects can consequently be used to provide insight into many aspects of reading. The Cattinelli study aimed to further qualify the subnetworks that support different aspects of reading, and the authors argued that word and pseudoword reading depend on distinct subnetworks involved in lexical/semantic processing and in phonological/orthographic processing, respectively. Because models often make different assumptions about how lexicality influences reading, lexicality effects are often used to support or challenge these models. The Jobard and Taylor meta-analyses examined many such studies to assess whether the neuroimaging literature generally supports the DRC (Jobard et al., 2003) and to test several predictions made by the DRC, connectionist dual-process (CDP+) and triangle models (Taylor et al., 2013). Though Cattinelli et al. (2013) separately examined the effects of lexicality, task and difficulty (which may also be task-dependent), none of the previous meta-analyses have examined interactions between lexicality and task.
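
As a rough illustration of the ALE logic mentioned above (a sketch under simplifying assumptions, not the pipeline used in these meta-analyses), the code below blurs each study's hypothetical peak coordinates with a Gaussian kernel in a one-dimensional "brain" and computes voxelwise concordance as the probability that at least one study activates a location.

```python
# Toy ALE-style concordance map: modeled activation (MA) per study is the max of
# Gaussians centered on its reported foci; the ALE score is their voxelwise union.
import numpy as np

x = np.linspace(0, 100, 101)  # a 1-D "brain", coordinates in arbitrary mm
sigma = 5.0                   # assumed smoothing-kernel width

def ma_map(foci):
    """Modeled activation for one study: max of Gaussians centered on its foci."""
    return np.max([np.exp(-(x - f) ** 2 / (2 * sigma ** 2)) for f in foci], axis=0)

study_foci = [[30, 70], [32], [68, 75]]       # hypothetical peaks from three studies
ma = np.array([ma_map(f) for f in study_foci])
ale = 1 - np.prod(1 - ma, axis=0)             # high where the studies' foci converge

print(x[np.argmax(ale)])  # location of strongest cross-study concordance
```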

1.3 Summary of predictions

Analyses of lexicality by task interactions would provide valuable insight into how semantic and phonological knowledge interact with the orthographic system during reading. Because these interactions have not been formally modeled in a fully implemented simulation of the triangle model, our predictions are inferred from properties of the model discovered through related simulations, and from properties that are generally true of this class of connectionist models. The present meta-analysis explores task-driven interactions between semantic, phonological, and orthographic systems in the context of the triangle model of reading. There is a rich body of neuroimaging literature exploring the neural substrates of these systems, and understanding how they interact during reading helps constrain models of reading. We predict that differences between the LDT and naming tasks, which we assume to depend differently on semantic and phonological processing, will emerge in brain regions implicated in semantic and phonological processing. Moreover, because words may have directly associated semantic representations, but pseudowords do not, and pseudowords should be more difficult to decode, we similarly predict that lexicality effects favoring words or pseudowords should be apparent in brain regions implicated in semantic and phonological processing, respectively. Finally, we predict that task and lexicality effects will interact, such that activation for naming relative to LDT will be strongest for pseudowords, and that activation for LDT relative to naming will be strongest for words.

URL: //www.sciencedirect.com/science/article/pii/S0028393214004680

Which lobe of the brain is responsible for recognizing print letters and letter patterns?

Recognizing print letters and letter patterns relies on the occipital lobe, in particular the occipito-temporal region at the back of the brain described below. The temporal lobe is responsible for phonological awareness and for decoding and discriminating sounds, while the frontal lobe handles speech production, reading fluency, grammatical usage, and comprehension, making it possible to understand simple and complex grammar in our native language.

Which part of the brain stores information for automatic word recognition?

The occipital-temporal region (at the back of the brain) stores the appearance and meaning of words (i.e., letter-word recognition, automaticity, and language comprehension). This region is critical for automatic, fluent reading, allowing a reader to identify words quickly without having to sound each one out.

Do our eyes process letters by letters?

Although we may not be aware of it, we do not skip over words, read print selectively, or recognize words by sampling a few letters of the print, as whole language theorists proposed in the 1970s. Reading is accomplished with letter-by-letter processing of the word.

Which lobe of the brain is responsible for higher level thinking and planning and processing the sounds of speech?

The frontal lobe is responsible for initiating and coordinating motor movements; higher cognitive skills, such as problem solving, thinking, planning, and organizing; and for many aspects of personality and emotional makeup.
