Neurocognition of Language/Sign Language

Introduction

This chapter gives an overview of several aspects of sign language. Besides introducing basic facts about the structure and production of sign language, it provides information about its neurolinguistic bases, i.e., how sign language is processed and represented in the brain. Finally, the acquisition and development of sign language, especially in deaf children, will be explained.

Human language can be expressed in two ways (besides written language): in the auditory-vocal modality of spoken language or in the visual-gestural modality of sign language. Signs are gestures produced by the hands and arms in the space in front of the torso, and they are perceived by the visual system (Corina & Knapp, 2008; Meier, 2012). However, despite the fact that sign language is expressed in such an obviously different way than spoken language, it is necessary to stress that sign languages are natural languages that share many biological, cognitive, and linguistic characteristics with spoken language. Strong support for this comes from the observation that sign language is comparable in grammatical complexity to spoken language (Corina & Knapp, 2008).

Before sign language is explained in more detail, it is important to clarify some common misconceptions. First, deaf people are not automatically mute. Second, sign language is different from other gestural actions such as imitative movements, and the movements that make up sign language are processed in different ways than, for example, pantomime (Corina & Knapp, 2008). Finally, there is not only one sign language but many different sign languages all over the world, which can differ, for example, in their syntactic structure (Brooks & Kempe, 2012).

Structure and processing of sign language

Signed and spoken language are produced by different articulators. Signs are produced by the hands and arms – i.e., by manual articulators – whereas speech production involves the mouth and tongue – i.e., oral articulators. Furthermore, they are perceived by different sensory organs: signs are registered visually, whereas spoken language is perceived entirely by the auditory system. In order to perceive sign language, it is necessary to shift one's attention and gaze to the communication partner, whereas in spoken language it suffices to hear the person who is speaking without seeing them.

The iconicity and arbitrariness of the communicative symbols also differ between signed and spoken language. The manual gestures of sign language are more iconic in nature, which means that it is (more often) possible to infer their meaning from their form, as is nicely demonstrated by the British sign representing the word camera. Many signs, thus, are non-arbitrary, whereas spoken language offers fewer opportunities for imagistic representation. Most spoken word forms are arbitrary, as their meaning is not related to the acoustic form of the spoken word. Nevertheless, arbitrary signs also exist in sign language, as they are needed to express abstract, complex, and non-imageable concepts in communication (Meier, 2012).

There are five elements that serve as meaning-distinguishing features and represent the basic characteristics of sign language, subsumed under the acronym “HOLME”: Hand shape, Orientation, Location, Movement, and (facial) Expression. Hand shape is determined by the configuration of the forearm and the fingers. Orientation refers to the orientation of the palm, while location refers to the place of articulation – i.e., where in front of the torso the sign is produced. These elements help to create different signs and to distinguish them from each other. Orientation and location can thus be treated as phonemes, because they are equivalent in function to the phonemes of spoken language (Brooks & Kempe, 2012). When hand shape and movement are also added, these elements are assumed to constitute the phonology (see also below). The sign language analog of prosody in spoken language is represented by facial expressions that are produced in combination with the location of the manual gestures in front of the body.
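To make the feature-based character of HOLME more concrete, the following minimal sketch models a sign as a bundle of the five features and reports in which features two signs differ. It is purely illustrative: the Sign class and all feature values are hypothetical placeholders, not entries from any real sign language lexicon.

```python
from dataclasses import dataclass

# Hypothetical, simplified feature bundle for a sign, following the HOLME
# scheme described above. Feature values are illustrative placeholders.
@dataclass(frozen=True)
class Sign:
    hand_shape: str   # configuration of the forearm and fingers
    orientation: str  # orientation of the palm
    location: str     # place of articulation in front of the torso
    movement: str     # path or internal movement of the hand(s)
    expression: str   # accompanying facial expression ("prosody")

# Two made-up signs that differ only in location: a minimal pair in which a
# single HOLME feature distinguishes two meanings, analogous to phonemes in
# spoken language (cf. the mother/father example under sign paraphasia below).
sign_a = Sign("flat-hand", "palm-in", "chin", "tap", "neutral")
sign_b = Sign("flat-hand", "palm-in", "forehead", "tap", "neutral")

def contrasting_features(x: Sign, y: Sign) -> list[str]:
    """Return the names of the HOLME features in which two signs differ."""
    return [f for f in x.__dataclass_fields__ if getattr(x, f) != getattr(y, f)]

print(contrasting_features(sign_a, sign_b))  # ['location']
```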

Morphology is the study of the forms of language; in linguistics, it describes the internal structure of words, including, for example, inflection and derivation. The HOLME elements can be described as a combinatorial system that represents the morphology of sign language. Interestingly, such complex combinations allow multiple aspects of sign language to be expressed simultaneously, so that hand shape and orientation together can express a noun, for example. In contrast, the words of spoken language are mainly produced one after another – i.e., sequentially. Communication in sign language often has a topic–comment structure, which means that background information (the topic) is given first and is then followed by the main clause (the comment). For example, the spoken sentence “I like lamb for meat.” would be signed as “Meat, I like lamb.”, as the topic of the sentence (i.e., ‘meat’) is stated first. The word order within the main clause – an aspect of syntax – varies across sign languages. American Sign Language (ASL) uses a subject–verb–object word order (as in the example in the previous sentence), whereas British Sign Language (BSL) uses an object–subject–verb structure (Brooks & Kempe, 2012). In BSL, the previous example would thus be signed as “Meat, lamb I like.”
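As a toy illustration of these word-order differences – and nothing more, since real sign language grammar also involves agreement, classifiers, the use of space, and non-manual markers – the following sketch arranges a topic and a simple comment clause into ASL-style and BSL-style gloss order. The WORD_ORDER table and the gloss function are invented for this example.

```python
# Toy illustration (not a real grammar) of the topic-comment orders described
# above: only the linear gloss order of the comment clause differs between
# ASL (subject-verb-object) and BSL (object-subject-verb).
WORD_ORDER = {
    "ASL": ("subject", "verb", "object"),
    "BSL": ("object", "subject", "verb"),
}

def gloss(topic: str, clause: dict, language: str) -> str:
    """Arrange a topic and a simple comment clause in the given word order."""
    comment = " ".join(clause[role] for role in WORD_ORDER[language])
    return f"{topic}, {comment}."

clause = {"subject": "I", "verb": "like", "object": "lamb"}
print(gloss("Meat", clause, "ASL"))  # Meat, I like lamb.
print(gloss("Meat", clause, "BSL"))  # Meat, lamb I like.
```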


Representation in the brain

In general, neurocognitive research converges to suggest that signed and spoken language use the same brain regions to a large degree and may be seen as functionally equivalent in terms of their neurobiological bases (Campbell, MacSweeney, & Waters, 2007). The hemispheric dominance for language production and processing in deaf people is comparable to that of hearing people (Corina & Spotswood, 2012). At the same time, environmental factors – e.g., the nature of language input during development or the age of acquisition of sign language – have an impact on brain development, which is reflected, for example, in individual differences in brain activation (Campbell, MacSweeney, & Waters, 2007; Homan, Criswell, Wada, & Ross, 1982).

Research on the brain bases of sign language is mainly performed by comparing and contrasting sign language with spoken language. Case studies of brain-lesioned patients have provided information about the location and involvement of the brain regions relevant for the production and comprehension of sign language. However, due to some limitations of these approaches, researchers in the field are cautious, as it may not (yet) be possible to detect all regions that are involved (Campbell, MacSweeney, & Waters, 2007). This section gives an overview of those brain regions that are known to be critical for the production and processing of sign language.


The role of the left hemisphere

The left hemisphere is specialized for the linguistic processing of both signed and spoken language. The organization of this language-dominant part of the brain follows an anterior-posterior dichotomy for language production and comprehension. Analogous to spoken language, the production of sign language is associated with the left inferior frontal cortex, i.e., Broca’s area, whereas comprehension is attributed to the left posterior temporal cortex, i.e., the classical receptive language area of Wernicke. This was demonstrated in a seminal study by Bavelier et al. (1998), who compared the cerebral organization of hearing and deaf people during sentence processing in English and ASL. Thus, the neural organization of sign language can be seen as comparable to the organization of spoken language in the brain.

Stimulation or damage of Broca’s area has a global effect on the motor output, which means that errors involving the motor execution of signs are caused. These errors are characterized, for example, by reduced hand shapes or non-specific movements. This disturbance of sign production – including syntactic processing – is called sign paraphasia (Corina & Spotswood, 2012). Sign paraphasia can also include formational and semantic errors, such as a correct location but a wrong movement of the hand. These deficits, however, can be observed when transcranial magnetic stimulation (TMS) is used to interfere with neural processing in the supramarginal gyrus (SMG); damage in this area results in an incorrect selection of linguistic components (Corina & Spotswood, 2012). Sign aphasia, in turn, describes deficits in comprehension associated with lesions in Wernicke’s area. See also the chapter Acquired Disorders of Speech and Language for a more general treatment of aphasic syndromes.


Sign paraphasia: In sign language users, paraphasias are a typical aspect of acquired language dysfunction after brain damage. Sign language paraphasias involve the substitution of an unintended element in sign production, for example producing a sign at the correct location but with a wrong movement, or producing an otherwise correct sign at an incorrect location, thereby changing its meaning (e.g., from mother to father).


Further areas of the left hemisphere have been identified as being involved in sign language production; they were shown to be more strongly activated during sign production than during spoken language production. The supramarginal gyrus (SMG), whose impairment can cause specific types of sign paraphasia as outlined above, was shown to be involved in phonological processing (Emmorey, Mehta, & Grabowski, 2007). Phonology describes the smallest meaning-distinguishing units of a word or a sign, called phonemes. Accordingly, a sign can be divided into hand shape, orientation of the palm, location, and movement – i.e., the first four elements of HOLME, which are assumed to constitute the phonology of a sign language (see also above). The superior parietal lobule (SPL) is specifically involved in monitoring the motor output. Based on their data, Emmorey, Mehta, and Grabowski (2007) suggest that “[…] proprioceptive monitoring may play a more important role in sign production because visual monitoring of signing [is different to the] auditory monitoring of speech […]” (Emmorey, Mehta, & Grabowski, 2007, p. 206). Speech output can be monitored by listening to one’s own voice, whereas when producing signs, perception needs to focus, for example, on hand posture and movement. Thus the SPL – responsible for visual perception and visual attention – is in general more strongly involved in sign language due to proprioceptive monitoring (Campbell et al., 2007; Corina & Spotswood, 2012; Emmorey, Mehta, & Grabowski, 2007).


The role of the right hemisphere

The right hemisphere is known to be specialized for visual-spatial processing, both during visual perception and when processing spatial relationships in spoken language as well as sign language. Case studies of patients with acquired brain lesions who use sign language showed that damage in the right hemisphere causes no problems in linguistic processing (Campbell et al., 2007; Hickok et al., 1999). Lesions in this hemisphere caused visual-spatial deficits, but no language-specific deficits such as sign language aphasia or paraphasia (Campbell et al., 2007; Corina & Spotswood, 2012). Nevertheless, there is some support for the assumption that in sign language, right brain regions are involved to a greater extent than in spoken language. The right hemisphere is dominant for detecting and interpreting visual movement, which is a principal component of processing signed language. The right hemisphere is also specialized for discriminating shape, size, or position of visual objects. Furthermore, the right hemisphere is dominant in social communication. Superior temporal regions are sensitive to facial information which can play an important role in sign production and comprehension. Neuroimaging studies show that posteriorly located parietal and temporal regions of the right hemisphere play a special role in mediating sign language (Newman, Bavelier, Corina, Jezzard, & Neville, 2002). The right hemisphere also seems to be involved when visual object components and space relations need to be translated into body-centered representations, another cognitive demand that is taxed during the production and processing of signed language.

In sum, neuropsychological and functional brain imaging studies of sign language suggest that the classical language areas, Broca’s and Wernicke’s areas, are involved in ways comparable to spoken language when signers produce or comprehend language. Even though the two hemispheres largely serve different functions, the available activation studies do not allow for a clear-cut separation; for example, syntactic processing can also involve the right hemisphere in sign language (Campbell et al., 2007; Corina & Spotswood, 2012). In addition, a number of brain systems related to visual-spatial processing, particularly in the right hemisphere, are additionally recruited when processing signed language (Bavelier et al., 1998). Recent research suggests that the degree of involvement of these right-hemispheric systems depends on the proficiency attained in sign language use rather than on the age of acquisition of sign language (Campbell et al., 2007; Corina & Spotswood, 2012). “Importantly, activation in right hemisphere temporalparietal regions was specific to BSL and was not observed in hearing non-signers watching audiovisual English translations of the same sentences” (Corina & Spotswood, 2012, p. 749). By comparison, sign language and spoken language show modality-independent neural activity in the left inferior frontal gyrus and left temporal regions (Corina & Spotswood, 2012; Emmorey et al., 2007). As mentioned before, some parts of the visual cortex are more strongly activated in sign language, whereas the auditory cortex in the superior temporal lobe is more activated in spoken language and less active in sign language.

A final important result that shall be mentioned here pertains to the neuroplasticity underlying the acquisition of sign language. Specifically, MacSweeney, Campbell, Woll, Giampietro, David, and McGuire (2004) “[…] found greater activation in Deaf than hearing signers during BSL perception in regions traditionally considered to be responsible for auditory processing” (MacSweeney et al., 2004, p. 1615). This suggests functional plasticity within regions of the auditory system, such that auditory regions can be used to process linguistic input from other (here: visual) modalities if deprived of auditory input (Campbell et al., 2007).

Digression: Iconicity

Examinations of sign-aphasic patients showed that they are often unable to produce iconic signs (e.g., “toothbrush”) but are nevertheless able to produce the action associated with the respective word as a pantomimed gesture (i.e., “how to brush teeth”). These findings are of particular interest because they indicate that, despite their superficial resemblance, there exists a neurobiologically grounded dissociation between sign language and gesture (Campbell et al., 2007). The brains of deaf signers thus differentiate linguistic from non-linguistic actions. It has been suggested that signs may rely more on top-down processing, while non-linguistic gestures might rely more on bottom-up processing (for more on this issue, see Corina & Knapp, 2008).


Acquisition and development of sign language

In general, deaf children of deaf or hearing parents pass through the early stages of language acquisition in ways comparable to hearing children, i.e., the cognitive development of deaf children is comparable to that of hearing children. Deaf babies show manual babbling and also produce errors in sign articulation that are qualitatively comparable to the errors that hearing infants make in speech production (Lederberg, Schick, & Spencer, 2013). Interestingly, a study of sign language acquisition and motor development in 11 infants of deaf parents showed that deaf infants can produce their first signs about 2–3 months earlier than hearing infants produce their first words (Bonvillian, Orlansky, & Novack, 1983). This has led to the assumption that the basic motor prerequisites for sign production are acquired earlier in development than those needed to speak.

The acquisition of sign language depends on the age of exposure. Thus, the age at which children are first exposed to sign language can be relevant for their further cognitive development. For example, the age of first exposure can affect fluency in the production and comprehension of sign language: late learners cannot produce signs as fluently as native or early learners. The effect of late exposure influences all structural levels – such as prosody, morphology, and syntax – in both production and comprehension. As outlined in more detail elsewhere in this Wikibook (Language Development), it is important to acquire a first language thoroughly and early in development so that negative consequences for subsequent stages of language learning can be prevented. This applies to sign language in the same way as to spoken language. Research has shown that – just as for the development of spoken language – sign language has to be available in the environment and must be acquired within a critical period for the involved neural systems, and the plasticity they require, to develop normally. Deficits in the production and comprehension of signs can also have effects in language-related domains, for example causing delays in the development of Theory of Mind (ToM) or literacy (Lederberg, Schick, & Spencer, 2013). Further consequences of late signing have been reported, such as greater impulsivity, less attentional control, reduced working memory capacity, and disturbances in executive functions (Brooks & Kempe, 2012). Deaf people often show less age-appropriate skills than hearing people, and the fact that most deaf children do not have representations of the (spoken) phonology of (printed) words makes reading acquisition more difficult for them (Lederberg, Schick, & Spencer, 2013).


Theory of Mind describes the ability to mentally take someone else’s perspective in order to understand his or her point of view or feelings. This skill is seen as very important for social behavior and interaction with one’s surroundings.

Literacy describes the ability to read and write. See the chapter Reading and Writing for more details.


As mentioned before, a child’s learning environment strongly influences its development. Deaf children with deaf parents typically grow up with sign language in a deaf community; they normally acquire sign language through natural interactive experiences and are supported by their surroundings. Deaf children of hearing parents usually learn sign language through early interventions or in special school classes. In general, efforts are made to offer early opportunities so that deaf children can learn sign language as their first language. Bilingualism is also promoted, which means, for example, that spoken language is acquired as a second language. However, only a small proportion of deaf or hard-of-hearing children succeed in acquiring spoken language. One option to support deaf or hard-of-hearing children in their acquisition and development of language is a cochlear implant, which increases the ability of deaf children to perceive sounds and thus also to acquire spoken language. Cochlear implants are placed in the cochlea and translate auditory signals into electrical ones that are transmitted to the auditory nerve. Children seem to benefit from early implantation, even if the outcome – with respect to hearing quality and speech production – is not comparable to that of hearing children (Brooks & Kempe, 2012; Lederberg, Schick, & Spencer, 2013).

In general, deaf and hearing children do not differ in principle in their (cognitive) development, especially in relation to language production and processing. However, the development of deaf children with hearing parents differs from that of deaf children with deaf parents and can be much more critical. Due to their specific developmental environment, these children are often delayed in language development, as the critical language input is lacking during the sensitive periods for language (Lederberg, Schick, & Spencer, 2013). In addition, the sign language proficiency of hearing parents has a large impact on the development of sign language in their deaf children: a lack of fluency, which is more frequent in hearing parents than in deaf parents of deaf children, can affect acquisition and subsequent development negatively (e.g., Meronen & Ahonen, 2008). As expected given the findings discussed above, deaf children with hearing parents often show delays in other cognitive abilities such as ToM. This delay may be related to the input from their parents, which is usually less multifaceted (Lederberg, Schick, & Spencer, 2013).

Even if there is no linguistic input, deaf infants seem to be motivated to communicate just like any other child. Several studies indicate that children possess the ability to create a communication system even when linguistic input is not available. Deaf children who have no access to an established sign language cannot imitate or reproduce conventional signs, but they do communicate with their environment, e.g., with their parents, through manual gestures. They create a so-called home sign system, i.e., a system of gestures that have specific meanings – for example, pointing may mean to pick something up – or that are associated with a specific object. Across cultures, home sign systems show similarities, such as consistency in the construction of grammatical categories of signs. Thus, the emergence of home sign supports the idea that the structure of sign languages can emerge spontaneously. For such a system to become an accepted and widely used sign language (e.g., in cultures that do not yet have an established sign language), it is necessary that the next generation adopts the new communication system and also modifies it to become more systematic (Brooks & Kempe, 2012).


Summary

In conclusion, communication can occur in two natural ways, with the obvious difference between the two systems being the modality of input and output. In the auditory-vocal modality, spoken language involves the production and perceptual processing of speech sounds, whereas sign language uses manual gestures and visual attention to communicate. Sign languages are – just like spoken languages – natural languages that are used to communicate between persons. Sign languages in many ways have the same quality and linguistic complexity as spoken languages and, beyond this, are also by and large represented in the same brain areas. In sign language, too, the left hemisphere is the dominant neural basis of language. The right hemisphere is more involved in visuo-spatial processing, which is why it plays a comparatively larger role in sign language.

In general, deaf babies are capable of learning sign language in much the same way as hearing infants learn to speak, and deaf children show similar cognitive development when they acquire sign language early in life. However, the environment and the linguistic input from the parents have a huge impact on the development and acquisition of sign language in childhood. This is particularly critical for deaf children of hearing parents, who often cannot immediately provide the necessary language input during the critical periods of language acquisition in early life. Late exposure to sign language or exposure to lower-quality language input can cause substantial delays and lasting deficits in the linguistic and general cognitive development of deaf children.

In sum, a great deal of research is being conducted to learn more about sign language and language processing in deaf people. Nevertheless, the findings are not yet exhaustive, so the neural systems and brain activity of deaf signers, as compared to hearing people, need to be explored further.


Further Readings

Capek, C. M., Grossi, G., Newman, A. J., McBurney, S. L., Corina, D., Roeder, B., & Neville, H. J. (2009). Brain systems mediating semantic and syntactic processing in deaf native signers: Biological invariance and modality specificity. Proceedings of the National Academy of Sciences, 106(21), 8784–8789. doi: 10.1073/pnas.0809609106

Pichler, D.C. (2012). Acquisition. In: Pfau, R., Steinbach, M., & Woll, B. (eds.) Sign language. An international handbook. Berlin, Boston: De Gruyter Mouton, 647–686.

Scholes, R. J., & Fischler, I. (1979). Hemispheric Function and Linguistic Skill in the Deaf. Brain and Language, 7, 336–350.


References

Bavelier, D., Corina, D., Jezzard, P., Clark, V., Karni, A., Lalwani, A., Rauschecker, J. P., Braun, A., Turner, R., & Neville, H. J. (1998). Hemispheric Specialization for English and ASL: Left Invariance – Right Variability. Neuroreport, 9(7), 1537–1542.

Bonvillian, J. D., Orlansky, M. D., & Novack, L. L. (1983). Developmental Milestones: Sign Language Acquisition and Motor Development. Child Development, 54(6), 1435–1445. URL: http://www.jstor.org/stable/1129806

Brooks, P., & Kempe, V. (2012). How do deaf children acquire language? In: Brooks, P., & Kempe, V. (eds.) Language development. Chichester: BPS Blackwell, 240–262.

Campbell, R., MacSweeney, M., & Waters, D. (2007). Sign Language and the Brain: A Review. Journal of Deaf Studies and Deaf Education, 13(1), 3–20. doi: 10.1093/deafed/enm035

Corina, D. P., & Knapp, H. P. (2008). Signed Language and Human Action Processing. Annals of the New York Academy of Sciences, 1145(1), 100–112. doi: 10.1196/annals.1416.023

Corina, D., & Spotswood, N. (2012). Neurolinguistics. In: Pfau, R., Steinbach, M., & Woll, B. (eds.) Sign language. An international handbook. Berlin, Boston: De Gruyter Mouton, 739–762.

Emmorey, K., Mehta, S., & Grabowski, T. J. (2007). The neural correlates of sign versus word production. NeuroImage, 36(1), 202–208. doi: 10.1016/j.neuroimage.2007.02.040

Hickok, G., Wilson, M., Clark, K., Klima, E. S., Kritchevsky, M., & Bellugi, U. (1999). Discourse Deficits Following Right Hemisphere Damage in Deaf Signers. Brain and Language, 66(2), 233–248.

Homan, R. W., Criswell, E., Wada, J. A., & Ross, E. D. (1982). Hemispheric contributions to manual communication (signing and finger-spelling). Neurology, 32(9), 1020. doi: 10.1212/WNL.32.9.1020

Lederberg, A. R., Schick, B., & Spencer, P. E. (2013). Language and literacy development of deaf and hard-of-hearing children: Successes and challenges. Developmental Psychology, 49(1), 15–30. doi: 10.1037/a0029558

MacSweeney, M., Campbell, R., Woll, B., Giampietro, V., David, A. S., McGuire, P. K., et al. (2004). Dissociating linguistic and nonlinguistic gestural communication in the brain. Neuroimage, 22, 1605–1618. doi:10.1016/j.neuroimage.2004.03.015

Meier, R. P. (2012). Language and modality. In: Pfau, R., Steinbach, M., & Woll, B. (eds.) Sign language. An international handbook. Berlin, Boston: De Gruyter Mouton, 574–601.

Meronen, A., & Ahonen, T. (2008). Individual differences in sign language abilities in deaf children. American Annals of the Deaf, 152, 495–504. doi:10.1353/aad.2008.0015

Newman, A. J., Bavelier, D., Corina, D., Jezzard, P., & Neville, H. J. (2002). A critical period for right hemisphere recruitment in American Sign Language processing. Nature Neuroscience, 5(1), 76–80. doi: 10.1038/nn775