Neurocognition of Language/Language Development
Introduction
So far, this book has dealt with the biological underpinnings of speech comprehension and production, as well as with reading and writing in general. Now the focus shall be on how language develops in humans. This chapter outlines the first steps of language development: from the initial exposure to language in the womb, to the first vocalizations and words, and eventually to the early processing of sentences and the production of the infant's own utterances (Figure 1). Thus, the so-called 'milestones of language development' form the framework of this chapter.
First of all, what abilities are newborns equipped with at birth? As everyone knows, newborns are not able to communicate their needs by talking, yet it is assumed that they are already attuned to the sound of human voices, and even to speech and faces (Brooks & Kempe, 2012). Nevertheless, about a year passes before infants are able to utter their first words (Brooks & Kempe, 2012). During this time, infants have to rely on other mechanisms to communicate their needs. Thus, the importance of gestures at the very beginning of an infant's life is pointed out, as well as the supportive function that the social environment has for children (Brooks & Kempe, 2012; Kuhl, Tsao, & Liu, 2003). This section will also answer the question of why parents and other adults tend to talk to their children in a very different, i.e. high-pitched, voice and what advantages this confers (Purves et al., 2013). Furthermore, this chapter addresses the question of whether differences between languages are already noticeable within the first year of life. Here, we will refer to studies employing high-amplitude sucking in babies, which indicate their ability to discriminate between languages (Mehler et al., 1988), as well as to studies on phoneme discrimination (e.g. Brooks & Kempe, 2012; Mampe, Friederici, Christophe, & Wermke, 2009).
Subsequently, the mechanisms by which children start to perceive single words, and how they eventually figure out the underlying meanings of these words, are described in detail. Here, the roles of rhythm, language-specific metrical stress, and distributional information are considered (e.g. Brooks & Kempe, 2012). In addition, sentence processing in children and the capabilities that assist them in this task will be discussed.
Up to that point, the milestones of language acquisition during children's first two years will have been considered. Yet we do not only want to know when children acquire each essential ability on their way to full language acquisition, but also what makes their language processing different from that of adults (Brooks & Kempe, 2012). To this end, studies employing functional imaging have been conducted in order to search for differences in neural activation, confronting children with speech or simple vocalizations (e.g. Dehaene-Lambertz, Dehaene, & Hertz-Pannier, 2002). It should not be concealed, however, that scientists have questioned to some extent whether it is generally useful and reasonable to apply functional imaging techniques to children (e.g. Kuhl & Rivera-Gaxiola, 2008).
At the end of this chapter, the focus shall be on the question of whether the acquisition of language is bound to a certain developmental window, implying that learning to understand and produce speech is only possible within a limited timeframe. The underlying idea is that of a so-called 'critical period'. The critical period in language acquisition has been an intensely debated topic, and researchers have attempted to answer this question by approaching it from different angles (Brooks & Kempe, 2012). On the one hand, research has investigated the effects of brain injuries to areas relevant for language comprehension and production (Bates, 1999), taking into consideration differences in the age at which the injury was acquired. On the other hand, scientists have attempted to examine the concept of a critical period for language by studying how second languages are acquired, which has led to ambiguous results (Brooks & Kempe, 2012; Johnson & Newport, 1989). Finally, case studies, such as that of Genie, a feral child who grew up in an impoverished environment with barely any exposure to language, have given more insight into the time frame of language acquisition and have proven helpful in addressing the issue of the critical period (e.g. Purves et al., 2013).
Prelinguistic stage
During their first year of life, children learn "to recognize and reproduce sound patterns of their native language" (Brooks & Kempe, 2012, p. 20). Since children need about a year to form their first words, the preceding period is often referred to as the 'pre-linguistic' or 'pre-verbal' stage (Brooks & Kempe, 2012). The upcoming section focuses on this stage.
When babies are born, they already come with distinct abilities and preferences, for instance a preference for human voices and faces (Brooks & Kempe, 2012). This can be explained by the fact that the auditory system is already well developed before birth, i.e. during the third trimester of pregnancy, and the unborn child is exposed to speech in the womb (Brooks & Kempe, 2012; DeCasper & Spence, 1986; Dehaene-Lambertz, Hertz-Pannier, & Dubois, 2006). However, these auditory perceptual abilities are not completely matured, nor can sounds from outside the womb be perfectly processed, because the womb acts as a kind of filter that makes speech sound flat (Brooks & Kempe, 2012; Conboy, Rivera-Gaxiola, Silva-Pereyra, & Kuhl, 2008; Dehaene-Lambertz et al., 2006). Nevertheless, studies reveal that prenatal experience with language exists, as the following two studies demonstrate.
DeCasper and Spence (1986) conducted a study demonstrating that prenatal experience has an impact on postnatal auditory preferences. In their study, pregnant women were asked to read a certain passage aloud every day to their unborn child during the last six weeks of pregnancy. After the babies were born, they were read this familiar passage again, as well as a novel one. The infants demonstrated their preference for the familiar passage by displaying increased sucking rates while listening to it.
Another study, by Mehler and colleagues (1988), demonstrates the influence that prenatal exposure to rhythm and intonation exerts on language discrimination. By exposing French newborns to French and Russian four days after birth, they found that newborns are able to differentiate utterances in another language from those in their native language. Since the babies were only four days old, it was inferred that prenatal rather than postnatal experience accounts for this ability.
So far, the capabilities of newborns have been described: the ability to hear from the third trimester of pregnancy onwards, the effect of prenatal experience on listening preferences, and the ability to recognize the native language only a few days after birth by relying on prosodic cues. The following section focuses on the perception of phonemes and its consequences for language acquisition.
Speech perception
Phonological processes represent the infant's first step into language, as infants need to tune in to the phonetic peculiarities of their native language (Friederici, 2005). Therefore, the ability of infants to discriminate phonetic contrasts within their own native language, as well as within foreign languages, has been the subject of many studies. As newborns and infants are a very special group of subjects that require specific behavioral research methods, the various methods available for examining them shall be reviewed first. Subsequently, studies of phonetic discrimination employing different methods are discussed and explanations for their results are sought.
Methods. There are various methods available for examining infants' language abilities (Conboy et al., 2008). First of all, many studies have employed behavioral techniques that were developed specifically for investigating infants, such as high-amplitude sucking, visual habituation or dishabituation paradigms, and conditioned (operant) head turn paradigms (e.g. Kuhl, Tsao, & Liu, 2003; Mehler et al., 1988). Many other behavioral methods require an overt response from the subject and are therefore more cognitively demanding, which precludes their use with infants (Conboy et al., 2008; Kuhl & Rivera-Gaxiola, 2008).
As an alternative, functional neuroimaging technologies offer the possibility of investigating underlying cognitive processes without being invasive. Yet, when it comes to conducting research on children, scientists encounter the difficulty that certain neurophysiological methods cannot readily be applied to children (Friederici, 2005, 2006). For instance, functional magnetic resonance imaging (fMRI) and magnetoencephalography (MEG) are barely suitable for use with infants, since these methods are very susceptible to movement, are expensive and, in the case of fMRI, also very loud and aversive for young children (Kuhl & Rivera-Gaxiola, 2008). Methods such as electroencephalography (EEG) or functional near-infrared spectroscopy (fNIRS) can be used with adults as well as infants (Friederici, 2005, 2006; Kuhl & Rivera-Gaxiola, 2008). However, as fNIRS, like fMRI, measures hemodynamic responses, it also suffers from the relatively low temporal resolution of the hemodynamic signal (Kuhl & Rivera-Gaxiola, 2008). Therefore, EEG has proven to be the most adequate neurocognitive measure for language development research (Conboy et al., 2008; Kuhl & Rivera-Gaxiola, 2008).
Usually, the EEG is time-locked to the onset of a stimulus (e.g., in language research, a phoneme, syllable, or tone) and yields a series of waveforms with positive and/or negative peaks within a few hundred milliseconds after stimulus onset (Conboy et al., 2008). These specific waveforms are described as event-related brain potentials (ERPs) and are distributed over different regions of the scalp (Friederici, 2005, 2006). ERPs indicate different cognitive processes in response to a given stimulus (Conboy et al., 2008; Friederici, 2005, 2006). As will be seen towards the end of the chapter, the analysis and comparison of ERPs between infants and adults is a crucial part of neurocognitive developmental language research. The relevant ERP components will be introduced in the respective sections.
Results. At the beginning of their life, infants have the universal ability to discriminate between all of the roughly 600 different phonemes found across the world's languages. One of the most intuitive demonstrations of this is that Japanese infants are able to distinguish the phonemes /r/ and /l/, whereas Japanese adults are not (Kuhl, 2004; Purves et al., 2013). However, infants ultimately only have to discriminate the 30 to 40 phonemes that make up their native language (Friederici, 2005; Kuhl, 2004). Several studies indicate that the ability to distinguish non-native phonetic contrasts declines during the first year of life, while the same ability for native contrasts increases (Conboy, Rivera-Gaxiola, Klarman, Aksoylu, & Kuhl, 2005; Conboy et al., 2008; Friederici, 2006; Kuhl et al., 2005, 2008; Kuhl & Rivera-Gaxiola, 2008). More specifically, while the ability to discern non-native phonetic contrasts is still present at the age of 7 months, it is no longer present by the age of 11 months, as demonstrated by a study by Conboy and her colleagues (2005) employing a novel version of the head turn paradigm. Furthermore, in the same study, word comprehension at the age of 11 months was negatively related to the ability to distinguish non-native contrasts (i.e., the better the discrimination of non-native contrasts, the worse the word comprehension). Kuhl et al. (2008) were able to replicate this effect and showed that better discrimination of native phonetic contrasts at the age of 7.5 months is associated with language growth (i.e., number of words produced, degree of sentence complexity, mean length of utterance) over the next two years, with growth being faster in children with better phoneme discrimination abilities.
In contrast to Conboy and her colleagues (2005), Kuhl et al. (2008) employed EEG and the so-called 'oddball paradigm'. The oddball paradigm involves the presentation of a frequent background or standard stimulus (here: a syllable) and of a deviant stimulus, i.e., a syllable differing from the standard (cf. Conboy et al., 2008, for a description of the oddball paradigm). Both stimuli are presented in random order, with the deviant occurring in only about 15 per cent of cases (Conboy et al., 2008; Kuhl et al., 2008). Subsequently, the average ERP waveforms for standard and deviant are calculated, as well as their difference (Conboy et al., 2008). The deviant-specific ERP activity is called 'mismatch negativity' (MMN; Figure 2; see also the Wikipedia entry for more details). It appears as a negative peak at around 100-250 ms and is regarded as an indicator of the (pre-attentive) discrimination of acoustically and phonetically different stimuli (Conboy et al., 2008). The larger the amplitude of the negativity, the better the discrimination (Kuhl et al., 2008). Importantly, as the MMN is elicited even in the absence of attention, this paradigm is particularly well suited to studying perceptual processes in very young children. Adults show negativities only, whereas infants additionally show positive peaks between 300 and 500 ms (Friederici, 2005, 2006).
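To make this analysis pipeline more tangible, here is a minimal sketch of how the averaged ERPs and the difference wave of an oddball experiment could be computed. The data are simulated noise, and the trial counts, sampling rate, and analysis window are illustrative assumptions rather than values taken from the studies cited above.

```python
import numpy as np

# Hypothetical epoched EEG data (trials x time samples), time-locked to syllable onset.
# In a real experiment these epochs would come from a recording system; here we simulate noise.
rng = np.random.default_rng(0)
n_standard, n_deviant, n_samples = 850, 150, 300   # ~15% deviants; 300 samples = 600 ms at 500 Hz
standard_epochs = rng.normal(0.0, 5.0, size=(n_standard, n_samples))
deviant_epochs = rng.normal(0.0, 5.0, size=(n_deviant, n_samples))

# Average across trials to obtain the ERP waveform for each stimulus type.
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The difference wave (deviant minus standard) isolates deviant-specific activity;
# a negative deflection at roughly 100-250 ms after stimulus onset would be read as an MMN.
difference_wave = erp_deviant - erp_standard

# Quantify the effect as the mean amplitude in the 100-250 ms window. Assuming the epoch
# starts 100 ms before stimulus onset and a 500 Hz sampling rate (2 ms per sample),
# 100-250 ms post-onset corresponds to samples 100-175.
mmn_window = slice(100, 175)
mmn_amplitude = difference_wave[mmn_window].mean()
print(f"Mean difference-wave amplitude in the MMN window: {mmn_amplitude:.2f} µV")
```

On simulated noise the resulting amplitude hovers around zero; in real data from an infant who discriminates the two syllables, it would be reliably negative.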
As all studies cited here point in the same direction, the reader might wonder which underlying mechanisms account for this shift from broad perceptual abilities to more selective, narrowed ones favoring the patterns of the native language (Conboy et al., 2008; Kuhl & Rivera-Gaxiola, 2008). Some researchers suggest that the acoustic or perceptual salience of stimuli, as well as exposure to and experience with language, exert an influence in this matter (Conboy et al., 2005, 2008; Kuhl & Rivera-Gaxiola, 2008). The concept of 'native language neural commitment' (NLNC) is another attempt to explain this developmental shift (Conboy et al., 2005; Kuhl, 2004). It assumes that the brain's speech processing systems become committed to the patterns of native-language speech (i.e., its statistical and prosodic regularities), which promotes the learning of further aspects of the native language (Kuhl, 2004). In contrast, unfamiliar patterns, for instance those of foreign languages, are no longer identified; in fact, it is assumed that the acquisition of such new patterns is inhibited (Kuhl, 2004). Therefore, NLNC can explain why performance in learning a second language depends on the resemblance between the first and the second language (Kuhl, 2004).
However, as the studies cited above show, the ability to discriminate phonemes varies between children. Thus, not everyone displays this kind of neural commitment (Conboy et al., 2008; Kuhl et al., 2008), and some individuals maintain the ability to distinguish phonetic contrasts in a non-native language beyond these initial stages of language development (Conboy et al., 2005). Put differently, the acquisition of native-language patterns is not promoted in these individuals (Conboy et al., 2005). Consequently, it can be inferred that neural commitment entails more rapid language development compared to individuals who are less neurally committed to the patterns of their native language (Conboy et al., 2005).
Speech production
Even though infants hardly ever utter their first words before the end of the first year of life, they produce speech sounds much earlier in their development. Already at the age of two months, infants start to vocalize in the form of cooing (Brooks & Kempe, 2012). These vocalizations vary, on the one hand, because the infant vocal tract changes during the first year of life, which influences the range of sounds the infant can produce (Brooks & Kempe, 2012). On the other hand, variations might be employed voluntarily by infants in order to adapt to the behavior of other people in their environment (Brooks & Kempe, 2012).
As already shown, infants are able to recognize their native language soon after birth (Mehler et al., 1988), an ability that relies on the distinct rhythms and intonation structures of different languages. Consequently, the question arises whether infants' vocalizations are also a typical representation of their native language (Brooks & Kempe, 2012). Mampe, Friederici, Christophe and Wermke (2009) addressed this issue. In their study, they found that the surrounding speech prosody influences newborns' cry melody, with German newborns tending to cry with a falling melody contour and French newborns with a rising one. This shows that newborns are able to memorize and produce intonation patterns typical of their native language (Mampe et al., 2009).
Babbling follows cooing and appears around the age of six months (Brooks & Kempe, 2012; Kuhl, 2004). At first it consists of the repetition of syllables, which are eventually combined and represent a precursor to the first words (Brooks & Kempe, 2012). The benefit of cooing and babbling during the pre-linguistic stage lies in the information that can be conveyed. Infants want to engage with their caretakers and employ vocalizations for that purpose, which are in turn reinforced by the caretakers and other adults (Brooks & Kempe, 2012). Thus, parents apply social shaping, and as a consequence language development is promoted (Brooks & Kempe, 2012).
As can be seen, speech production has informative value for adults and helps the infant communicate its needs. Yet there are also other mechanisms infants can employ in order to communicate with their environment before they have the linguistic ability to do so. These mechanisms are described in the upcoming section, and their mutual influence is discussed.
Supportive mechanisms
Gestures. Facial and other gestures develop simultaneously with speech (Brooks & Kempe, 2012). Since they require the same motor control systems, and since during the first two years of life infants tend to communicate with others by combining gestures with vocalizations, e.g. by pointing at objects, one can conclude that gestures and speech complement one another (Brooks & Kempe, 2012). However, not only do infants benefit from using gestures, but "mothers typically coordinate facial expressions and other gestures with their vocalizations in affective interactions with infants", thereby adapting to the needs of their children (Fernald & Kuhl, 1987, p. 291). In this context, baby-talk (also known as motherese or child-directed speech) is also important to mention and will be described next.
Motherese. The term motherese describes a distinct kind of speech that adults and older children use to communicate with infants (Brooks & Kempe, 2012). Fernald and Kuhl (1987) describe typical aspects of motherese, including simpler syntax, semantics, and phonetics compared to adult-directed speech. Moreover, motherese features a simplified lexicon and differs in intonation and prosody from adult-directed speech (Fernald & Kuhl, 1987). The function of this kind of communication is to facilitate language acquisition by emphasizing the phonetic peculiarities of the native language (Purves et al., 2013). Infants prefer this kind of speech in comparison to normal, adult-directed language (Brooks & Kempe, 2012).
Fernald and Kuhl (1987) wanted to know what exactly draws infants' attention to motherese and is thus responsible for infants' preference for it. In their study, the fundamental frequency (i.e., perceived pitch) accounted for this preference, whereas no effect was found for the amplitude (loudness) or duration (speech rhythm) of the speech patterns of motherese. It is assumed that pitch is related to emotional activation and positive affect (Fernald & Kuhl, 1987).
Social environment. As the sections on speech production, gesture, and motherese have shown, language is not a concept that can be scrutinized in isolation, but in fact is an essential part of social interaction. Therefore, mutual effects of language and social interaction shall be outlined in this section.
When the underlying mechanisms of speech perception were discussed above, exposure to and experience with language were already pointed out as influential factors (Conboy et al., 2005, 2008; Kuhl & Rivera-Gaxiola, 2008). In addition to exposure and experience, research has also acknowledged the importance of referential cues, such as gaze following and joint attention, for the promotion of language abilities (Brooks & Kempe, 2012; Kuhl, 2004). In particular, joint attention and interaction with others have been identified as emotionally encouraging and as helping infants map words onto objects in their environment (Brooks & Kempe, 2012).
Kuhl, Tsao and Liu (2003) investigated the effects that different kinds of exposure to a foreign language have on language acquisition. One group of 9-month-old American infants was exposed to native Mandarin speakers during 12 laboratory sessions, each lasting 25 minutes, while a control group was exposed to native English speakers for an equal number of sessions (and amount of time). After four weeks, both groups were tested on Mandarin speech perception, with the experimental group performing significantly better than the control group. However, when in a second study the lessons with the native Mandarin speaker were presented via audio- or videotape to another group of infants, this group was not significantly better at discriminating Mandarin contrasts. This design thus again emphasizes the importance of social interaction in language learning, indicating that mere auditory exposure is not sufficient to maintain the ability to discern non-native phonetic contrasts (Kuhl et al., 2003).
Word processing and word production
In contrast to written words, fluent speech does not contain 'spaces' or clear pauses, and therefore the physical signals of different words tend to merge (Jusczyk, Houston, & Newsome, 1999; Kuhl, 2004). Yet, the ability to segment words from fluent speech is essential and influences the acquisition of a native-language vocabulary (Jusczyk et al., 1999). By the age of 6 months, infants already know the boundaries of familiar words (Brooks & Kempe, 2012), and by the age of 7.5 months, infants are even able to segment words from fluent speech (Jusczyk & Aslin, 1995). The question now arising is: how do they actually succeed in this task? Several studies have emphasized the impact of metrical stress (Friederici, 2005, 2006; Kuhl, 2004). Languages differ not only in the phonemes they consist of but also with respect to metrical stress (Friederici, 2005). In English, for instance, the first syllable of a word tends to be stressed, whereas in French it is the opposite (e.g. Friederici, 2005). As soon as infants gain knowledge about the metrical stress of their native language, it helps them to identify the beginnings and ends of words in the continuous speech stream (Friederici, 2005). Furthermore, in an experiment with 4-month-old infants, Friederici, Friedrich and Christophe (2007) demonstrated that ERPs display a language-specific discrimination of stress patterns, with an MMN being elicited when the 'wrong' syllable was stressed, i.e. wrong in the sense of differing from the pattern infants are used to in their native language.
However, at the age of 7.5 months, infants tend to rely on metrical stress alone, which results in their mis-segmenting words that are stressed contrary to the basic rules of their native language (Jusczyk et al., 1999). Yet, by the age of 10.5 months, infants have developed the ability to integrate several informative cues in addition to metrical stress, such as distributional information about the frequency of syllables in certain contexts and the transitional probabilities of adjacent syllables (Brooks & Kempe, 2012; Hay, Pelucchi, Graf Estes, & Saffran, 2011; Jusczyk et al., 1999; Kuhl, 2004); the sketch below illustrates this distributional cue.
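To illustrate what such distributional information looks like computationally, the following sketch estimates transitional probabilities between adjacent syllables in a toy, invented syllable stream; the syllables and 'words' are hypothetical examples, not stimuli from the studies cited above. Syllable pairs belonging to the same recurring word receive high transitional probabilities, whereas pairs spanning a word boundary receive lower ones, which is the kind of statistical cue infants could exploit for segmentation.

```python
from collections import Counter

def transitional_probabilities(syllables):
    """Estimate P(next syllable | current syllable) for adjacent syllables in a stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {(a, b): count / first_counts[a] for (a, b), count in pair_counts.items()}

# Toy stream built from two hypothetical 'words' (pre-tty and ba-by) presented without pauses.
stream = "pre tty ba by pre tty ba by ba by pre tty".split()

for pair, prob in sorted(transitional_probabilities(stream).items()):
    print(pair, round(prob, 2))
# Within-word transitions such as ('pre', 'tty') and ('ba', 'by') come out at 1.0,
# while transitions that span a word boundary, e.g. ('by', 'pre'), are lower,
# marking likely word edges in the continuous stream.
```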
Finally, this subchapter shall also consider some findings concerning the development of word production. As already mentioned, during the first year of life infants usually utter only simple vocalizations. Yet, by the age of 12 months they start producing so-called holophrases, i.e. one-word sentences (Brooks & Kempe, 2012; see also Chapter 1 ## LINK: Chapter 1/Evolution ##). Because infants have a strong desire to communicate with their environment, they have a high incentive to learn more and more words (Brooks & Kempe, 2012). All the same, mapping words onto meanings is cognitively demanding during the first year of life and thus rather difficult for infants. It therefore frequently results in classic word production errors such as under-extension and over-extension (i.e., using the word 'dog' either only for one particular dog, e.g. the family's own dog, or for every living animal; Brooks & Kempe, 2012; Conboy et al., 2008). However, during the second half of their second year of life, infants overcome this obstacle and a vocabulary spurt begins. Between the ages of 17 and 24 months, infants are able to map words onto meanings very quickly ('fast mapping'), and the lexicon grows from about 50 words at the age of 17 months to about 200 words by the age of two years (Kuhl & Rivera-Gaxiola, 2008).
Vocabulary spurt: the switch from a slow increase in vocabulary to a sudden, fast increase.
Early sentence processing
The ability just described, to segment words from speech, is a prerequisite for early sentence processing, as it helps in extracting word meanings and thereby building a lexicon (Kuhl & Rivera-Gaxiola, 2008). Nevertheless, when it comes to syntactic processing, only very few developmental studies are available (Friederici, 2005, 2006), and accordingly hardly any firm conclusions can be drawn. EEG studies with adult participants have shown that violations of syntactic rules elicit either a P600 or an E/LAN ERP component, or a combination of both (Friederici, 2005). The P600 is associated with syntactic revision and is centro-parietally distributed, whereas the E/LAN is a left anterior negativity elicited by violations of local syntactic rules and by morphosyntactic errors (Friederici, 2005, 2006). One study (Silva-Pereyra, Rivera-Gaxiola, & Kuhl, 2005) reports P600-like ERPs for 3- and 4-year-old children, whereas another study (Silva-Pereyra, Klarman, Lin, & Kuhl, 2005) failed to find an E/LAN.
Further studies have used EEG to evaluate infants' abilities to process sentence-level lexical-semantics. Usually, a semantically inappropriate word within or at the end of a sentence (e.g., 'The dog chases the tree') elicits an N400 ERP component, which has a negative peak at around 400 ms and is centro-parietally distributed (Purves et al., 2013). Developmental studies using this paradigm revealed an N400 effect for infants aged 14 to 19 months, which becomes significant later and lasts longer than in adults (Friederici, 2005; Figure 3). It is therefore inferred that lexical-semantic processes in infants are slower than in adults (Friederici, 2005). Furthermore, the fact that infants engage more frontal regions could be related to infants needing more attention in order to distinguish words, or to their processing being more image-based (Friederici, 2005).
FIGURE 3 HERE
From these few results, it might tentatively be inferred that by the age of 3 to 4 years children have already developed, to some extent, higher linguistic processing abilities similar to those of adults, such as the ability to detect semantic and syntactic anomalies (e.g. Friederici, 2005). Thus, these studies have been taken to indicate that the underlying mechanisms might already be present at this young age. Yet, as these components tend to occur later relative to stimulus onset, and because no E/LAN is elicited in children in contrast to adults, it can be assumed that these underlying mechanisms still need to undergo substantial developmental change in order to become as efficient as they are in adults (Conboy et al., 2008). The upcoming section is concerned with these underlying mechanisms, i.e. the neural basis of language.
Neural specialization
The issue of neural specialization can best be addressed by studies relying on fMRI. All the same, as already mentioned, fMRI tends to be less suited for young infants, as its temporal resolution is not as good as that of EEG and because infants tend to move frequently (Kuhl & Rivera-Gaxiola, 2008). In addition, the fMRI scanner is loud and narrow and thus a rather challenging environment for young children, which is why only very few fMRI studies of infants exist (Kuhl & Rivera-Gaxiola, 2008). The studies and results described below should therefore be interpreted with caution.
A study by Dehaene-Lambertz and colleagues (2002) investigated infant brain activity during exposure to normal and reversed speech. To this end, they tested awake and sleeping 3-month-old infants using fMRI. It could be shown that regions in the left hemisphere (i.e., the superior temporal and angular gyri) that correspond to regions activated in adults were already active in infants. Furthermore, awake infants displayed activation in the right prefrontal cortex when presented with normal speech. This is in line with the assumption that normal, i.e. forward, speech entails stronger activation than reversed speech. However, in contrast to adults, no difference between normal and reversed speech was found in the infants' left temporal lobe. Dehaene-Lambertz et al. (2002) therefore assume that the infant brain continues to change beyond the third month with respect to processing natural language. In a later study, Dehaene-Lambertz and colleagues (2006) concluded that lateralization in infants is not as strong as in adults, but that there are also many functional similarities between infants and adults, which supports the assumption of a genetic bias for speech processing in certain brain areas.
To conclude, there seems to be an early functional specialization for certain aspects of language in infants; however, full connectivity between all language areas is only achieved later in development (Brooks & Kempe, 2012).
A critical period for language
As already mentioned in the introduction of this chapter, the concept of a 'critical period' has been an intensely debated topic in the scientific community (Brooks & Kempe, 2012). This concept describes the idea that language learning is only possible within a certain timeframe and cannot be fully compensated for afterwards (Brooks & Kempe, 2012). Thus, this closing section seeks to give a brief overview of some of the evidence that has been taken to support the existence of a critical period for language.
Age of recovery from brain injury
Not only have studies on brain injuries identified brain areas important for language production and perception (Purves et al., 2013), but studies on recovery from brain injuries also indicate a critical age up to which complete recovery (i.e., without any lasting dysfunction) can take place (Brooks & Kempe, 2012).
In 1999, Bates reported a study on adults and infants who had experienced focal brain injuries. The results of this study were interpreted as showing that linguistic knowledge is neither innate nor clearly localizable, but that a high degree of differentiation is already present at birth, in the sense that some brain regions are biased "towards modes of information processing that are particularly useful for language" (Bates, 1999, p. 195). In detail, infants make use of rapid, detailed auditory and visual perception, which helps them analyze speech, and they are able to map sounds onto meanings, which facilitates integration. Furthermore, the same study revealed that injuries affecting the left hemisphere do not result in aphasia if those injuries occurred early in life (i.e., before the age of five).
As the functional neuroimaging studies of Feldman (2005) demonstrate, regions of the right hemisphere that are homologous to left-hemispheric language regions are able to take over language functions in cases of severe injury to the left hemisphere. Thus, it can be inferred that the infant brain is highly plastic, allowing a reorganization of language in the case of left-hemisphere damage (Bates, 1999; Feldman, 2005). Consequently, development within the normal range can be observed in children with early focal brain injuries, albeit at a somewhat slower rate (Bates, 1999; Feldman, 2005). In summary, results on the influence of age on recovery from brain injuries support the idea of a critical period, as they indicate a clear age, about five years, up to which damage can be compensated for by other brain regions.
Age of exposure to a second language
Another approach to studying the presence of a critical period for language relies on investigating whether learning a second language at a young age leads to higher proficiency than learning a second language at an older age. If that were the case, it would support the idea of a critical period.
Following this approach, Johnson and Newport (1989) compared the performance of native Korean and Chinese speakers who had moved to the United States at different ages (i.e., between 3 and 39 years) and who, by the time of the study, had been living there for 3 to 26 years. Testing the individuals' command of English grammar, Johnson and Newport (1989) reported a significant correlation between age of arrival and test performance, with individuals who arrived earlier performing significantly better than late arrivals. Importantly, individuals who had arrived by the age of 7 performed at a level comparable to native speakers. Among individuals who arrived after puberty, however, performance varied increasingly as a function of individual variability rather than age of arrival, a pattern not found in early arrivals.
While this study by Johnson and Newport (1989) supports the concept of the critical period by demonstrating an age-related effect in the acquisition of a second language, researchers have not been able to replicate it (Brooks & Kempe, 2012). Rather, the evidence seems to point towards a general decrease in proficiency as the age of acquisition of the second language increases, which may be due to individual differences in motivation and general cognitive abilities (Brooks & Kempe, 2012). Consequently, a clear-cut critical period for language that can be delimited to a certain age range cannot be derived from the evidence on second language acquisition. Rather, the field currently seems to favor the assumption of a decline in neural plasticity (Brooks & Kempe, 2012) that affects the ease and proficiency with which a second language can be learned.
Feral children
Before the case of Genie, a feral child, is outlined, it should be noted that feral children are in general a very rare phenomenon, and that Genie is a particularly unusual case even among them, since she suffered from social isolation, and consequently from language deprivation, for a very long time, i.e. about 13 years. There are no other reports of children being held captive for such a long time (Fromkin, Krashen, Curtiss, Rigler, & Rigler, 1974).
Before the report on Genie, all feral infants or children who had been investigated showed some form of developmental delay or impairment (Fromkin et al., 1974). When Genie was found, she was 13 years old, malnourished, emotionally disturbed and mute. From the age of 20 months on, her parents had locked her in a dark room and tied her to a potty chair most of the time. Not only did she have to stay in the room all day, isolated from any social contact, but her father also punished her if she made any noise. This circumstance might account for her muteness when she was found by social workers. Yet, a few days after she had been taken to hospital, she started to respond to the speech of others. Moreover, investigators could not find any evidence of brain damage or any other kind of impairment (Purves et al., 2013). Furthermore, tests of Genie's speech comprehension indicated that, although she did not utter any words, she was capable of understanding single words. However, the tests also revealed that she had hardly any understanding of grammatical structures. Nevertheless, she started to acquire some knowledge in that area and made steady progress.
When it comes to speech production, it should be noted that Genie had some physical difficulties owing to her lack of control over her muscles. Genie's vocabulary was larger than that of younger, typically developing children whose speech showed a level of syntactic complexity comparable to hers. Yet, her grammatical competence did not exceed the developmental level of a two- to two-and-a-half-year-old child. Thus, she was not able to produce truly elaborate utterances, but remained at the level of two- or three-word sentences (Purves et al., 2013).
In summary, the case of Genie gives an impression of how important early experience and early exposure to language are. Although some aspects, such as vocabulary, can be learnt to a certain extent, grammatical features remained an obstacle that was barely manageable for a feral child like Genie. Thus, the inability to gain grammatical knowledge provides evidence for the concept of a critical period for proper language acquisition, as do studies on recovery from early brain injuries (Brooks & Kempe, 2012).
Summary
This chapter has provided evidence for prenatal experience in infants with regard to hearing. It was pointed out that this experience is restricted to rhythmic and prosodic cues of language, as the womb acts as a filter for auditory signals. Even so, prenatal exposure to prosodic cues has proven sufficient for newborns to distinguish their native language from a foreign one. By the age of 11 months, abilities in the discrimination of native phonetic contrasts significantly correlate with language abilities in the second and third year, predicting a more extensive vocabulary at an older age. Perception of speech signals appears to be the main accomplishment of the first year of life, whereas production of language lags behind language processing, as the first comprehensible words are hardly ever uttered before the age of one year. Furthermore, research has shown that brain activity during the processing of phonemes or sentences is very similar between infants and adults, and that the underlying neuronal mechanisms thus seem to be alike, with those of infants still having to mature over the course of development. Finally, the long-assumed existence of a critical period for language acquisition is supported by studies on recovery from early-life brain injuries and by a report on language development in a feral child.
Further Readings
Friederici, A. D. (2006). The neural basis of language development and its impairment. Neuron, 52(6), 941–52. doi:10.1016/j.neuron.2006.12.002
Kuhl, P. K. (2004). Early language acquisition: cracking the speech code. Nature Reviews. Neuroscience, 5(11), 831–43. doi:10.1038/nrn1533
References
Bates, E. (1999). Language and the infant brain. Journal of Communication Disorders, 32(4), 195–205.
Brooks, P. J., & Kempe, V. (2012). Language Development. Chichester, UK: BPS Blackwell.
Conboy, B. T., Rivera-Gaxiola, M., Klarman, L., Aksoylu, E., & Kuhl, P. K. (2005). Associations between native and nonnative speech sound discrimination and language development at the end of the first year. In A. Brugos, M. R. Clark-Cotton, & S. Ha (Eds.), Supplement to the Proceedings of the 29th Boston University Conference on Language Development. http://www.bu.edu/linguistics/APPLIED/BUCLD/supp29.html
Conboy, B. T., Rivera-Gaxiola, M., Silva-Pereyra, J., & Kuhl, P. K. (2008). Early language processing at the phoneme, word, and sentence levels. In A. D. Friederici & G. Thierry (Eds.), Early Language Development: Bridging Brain and Behaviour. Amsterdam: John Benjamins Publishing Company.
DeCasper, A. J., & Spence, M. J. (1986). Prenatal maternal speech influences newborns' perception of speech sounds. Infant Behavior and Development, 9, 133–150.
Dehaene-Lambertz, G., Dehaene, S., & Hertz-Pannier, L. (2002). Functional neuroimaging of speech perception in infants. Science, 298(5600), 2013–5. doi:10.1126/science.1077066
Dehaene-Lambertz, G., Hertz-Pannier, L., & Dubois, J. (2006). Nature and nurture in language acquisition: anatomical and functional brain-imaging studies in infants. Trends in Neurosciences, 29(7), 367–73. doi:10.1016/j.tins.2006.05.011
Feldman, H. M. (2005). Language Learning With an Injured Brain. Language Learning and Development, 1(3-4), 265–288. doi:10.1080/15475441.2005.9671949
Fernald, A., & Kuhl, P. (1987). Acoustic determinants of infant preference for motherese speech. Infant Behavior and Development, 10(3), 279–293. doi:10.1016/0163-6383(87)90017-8
Friederici, A. D. (2005). Neurophysiological markers of early language acquisition: from syllables to sentences. Trends in Cognitive Sciences, 9(10), 481–8. doi:10.1016/j.tics.2005.08.008
Friederici, A. D. (2006). The neural basis of language development and its impairment. Neuron, 52(6), 941–52. doi:10.1016/j.neuron.2006.12.002
Friederici, A. D., Friedrich, M., & Christophe, A. (2007). Brain responses in 4-month-old infants are already language specific. Current Biology : CB, 17(14), 1208–11. doi:10.1016/j.cub.2007.06.011
Fromkin, V., Krashen, S., Curtiss, S., Rigler, D., & Rigler, M. (1974). The Development of Language in Genie: a Case of Language Acquisition beyond the “Critical Period.” Brain and Language, 1, 81–107.
Johnson, J. S., & Newport, E. L. (1989). Critical period effects in second language learning: the influence of maturational state on the acquisition of English as a second language. Cognitive Psychology, 21(1), 60–99.
Jusczyk, P. W., & Aslin, R. N. (1995). Infants' detection of the sound patterns of words in fluent speech. Cognitive Psychology, 29, 1–23.
Jusczyk, P. W., Houston, D. M., & Newsome, M. (1999). The beginnings of word segmentation in English-learning infants. Cognitive Psychology, 39(3-4), 159–207. doi:10.1006/cogp.1999.0716
Kuhl, P. K. (2004). Early language acquisition: cracking the speech code. Nature Reviews. Neuroscience, 5(11), 831–43. doi:10.1038/nrn1533
Kuhl, P. K., Conboy, B. T., Coffey-Corina, S., Padden, D., Rivera-Gaxiola, M., & Nelson, T. (2008). Phonetic learning as a pathway to language: new data and native language magnet theory expanded (NLM-e). Philosophical Transactions of the Royal Society of London. Series B, Biological Sciences, 363(1493), 979–1000. doi:10.1098/rstb.2007.2154
Kuhl, P. K., Tsao, F.-M., & Liu, H.-M. (2003). Foreign-language experience in infancy: effects of short-term exposure and social interaction on phonetic learning. Proceedings of the National Academy of Sciences of the United States of America, 100(15), 9096–101. doi:10.1073/pnas.1532872100
Kuhl, P. K., Conboy, B. T., Padden, D., Nelson, T., & Pruitt, J. (2005). Early Speech Perception and Later Language Development: Implications for the “Critical Period.” Language Learning and Development, 1(3-4), 237–264. doi:10.1080/15475441.2005.9671948
Kuhl, P., & Rivera-Gaxiola, M. (2008). Neural substrates of language acquisition. Annual Review of Neuroscience, 31, 511–34. doi:10.1146/annurev.neuro.30.051606.094321
Mampe, B., Friederici, A. D., Christophe, A., & Wermke, K. (2009). Newborns’ cry melody is shaped by their native language. Current Biology : CB, 19(23), 1994–7. doi:10.1016/j.cub.2009.09.064
Mehler, J., Jusczyk, P., Lambertz, G., Halsted, N., Bertoncini, J., & Amiel-Tison, C. (1988). A precursor of language acquisition in young infants. Cognition, 29, 143–178.
Purves, D., Cabeza, R., Huettel, S. A., Labar, K. S., Platt, M. L., & Woldorff, M. G. (2013). Language. In D. Purves, R. Cabeza, S. A. Huettel, K. S. Labar, M. L. Platt, & M. G. Woldorff (Eds.), Principles of Cognitive Neuroscience (2nd Ed.). Sunderland, MA: Sinauer.
Silva-Pereyra, J. F., Klarman, L., Lin, L. J.-F., & Kuhl, P. K. (2005). Sentence processing in 30-month-old children: an event-related potential study. Neuroreport, 16(6), 645–8.
Silva-Pereyra, J., Rivera-Gaxiola, M., & Kuhl, P. K. (2005). An event-related brain potential study of sentence comprehension in preschoolers: semantic and morphosyntactic processing. Brain Research. Cognitive Brain Research, 23(2-3), 247–58. doi:10.1016/j.cogbrainres.2004.10.01