# Cognitive Psychology and the Brain

Imagine the following situation: A young man, let’s call him Knut, is sitting at his desk, reading some papers which he needs to complete a psychology assignment. In his right hand he holds a cup of coffee. With his left one he reaches for a bag of sweets without removing the focus of his eyes from the paper. Suddenly he stares up to the ceiling of his room and asks himself: “What is happening here?”

Probably everybody has had experiences like the one described above. Even though at first sight there is nothing exciting happening in this everyday situation, a lot of what is going on here is highly interesting particularly for researchers and students in the field of Cognitive Psychology. They are involved in the study of lots of incredibly fascinating processes which we are not aware of in this situation. Roughly speaking, an analysis of Knut's situation by Cognitive Psychologists would look like this:

Knut has a problem; he really needs to do his assignment. To solve this problem, he has to perform loads of cognition. The light reaching his eyes is transduced into electrical signals traveling through several stations to his visual cortex. Meanwhile, complex nets of neurons filter the information flow and compute contrast, colour, patterns, positions in space, and motion of the objects in Knut's environment. Stains and lines on the paper become words; words get meaning; the meaning is put into context, analyzed for its relevance to Knut's problem, and finally maybe stored in some part of his memory. At the same time an appetite for sweets is creeping up from Knut's hypothalamus, a region of the brain responsible for controlling the needs of an organism. This appetite finally causes Knut to reach out for his sweets.

Now, let us take a look into the past to see how Cognitive Psychology developed its terminology and methods for interpreting ourselves on the basis of brain, behaviour and theory.

## History of Cognitive Psychology

Early thoughts claimed that knowledge was stored in the brain.

### Renaissance and Beyond

Renaissance philosophers of the 17th century generally agreed with Nativists and even tried to depict the structure and functions of the brain graphically. Empiricist philosophers, however, also contributed very important ideas. According to David Hume, internal representations of knowledge are formed according to particular rules, and these creations and transformations take effort and time. This is, in fact, the basis of much current research in Cognitive Psychology. In the 19th century Wilhelm Wundt and Franciscus Cornelis Donders conducted the corresponding experiments, measuring the reaction time required for a response; further interpretation of these experiments gave rise to Cognitive Psychology 55 years later.

### 20th Century and the Cognitive Revolution

During the first half of the 20th century, a radical turn in the investigation of cognition took place. Behaviourists like Burrhus Frederic Skinner claimed that internal mental operations – such as attention, memory, and thinking – are only hypothetical constructs that cannot be observed or proven. Therefore, Behaviourists asserted, mental constructs are not as important and relevant as the study and experimental analysis of behaviour (directly observable data) in response to some stimulus. According to Watson and Skinner, man could be objectively studied only in this way. The popularity of Behaviourist theory in the psychological world led the investigation of mental events and processes to be abandoned for about 50 years.

In the 1950s scientific interest returned to attention, memory, images, language processing, thinking and consciousness. The “failure” of Behaviourism heralded a new period in the investigation of cognition, called the Cognitive Revolution. This was characterized by a revival of already existing theories and the rise of new ideas such as various communication theories. These theories emerged mainly from the previously created information theory, giving rise to experiments in signal detection and attention in order to form a theoretical and practical understanding of communication.

Modern linguists suggested new theories on language and grammar structure, which were correlated with cognitive processes. Chomsky’s Generative Grammar and Universal Grammar theory, proposed language hierarchy, and his critique of Skinner’s “Verbal Behaviour” are all milestones in the history of Cognitive Science. Theories of memory and models of its organization gave rise to models of other cognitive processes. Computer science, especially artificial intelligence, re-examined basic theories of problem solving and the processing and storage of memory, language processing and acquisition.

For clarification, further discussion of the "behaviorist" history follows.

Although the above account reflects the most common version of the rise and fall of behaviorism, it is a misrepresentation. To better understand the founding of cognitive psychology, it must be placed in an accurate historical context. Theoretical disagreements exist in every science, but they should be based on an honest interpretation of the opposing view. There is a general tendency to draw a false equivalence between Skinner and Watson.

It is true that Watson rejected the role that mental or conscious events played in the behavior of humans. In hindsight this was an error. However, if we examine the historical context of Watson's position we can better understand why he went to such extremes. He, like many young psychologists of the time, was growing frustrated with the lack of practical progress in psychological science: the focus on consciousness was yielding inconsistent, unreliable and conflicting data. Excited by the progress coming from Pavlov's work with elicited responses, and looking to the natural sciences for inspiration, Watson rejected the study of unobservable mental events and pushed psychology to study stimulus-response relations as a means to better understand human behavior. This new school of psychology, "behaviorism", became very popular.

Skinner's school of thought, although inspired by Watson, takes a very different approach to the study of unobservable mental events. Skinner proposed that the distinction between "mind" and "body" brought with it irreconcilable philosophical baggage. He proposed that the events going on "within the skin", previously referred to as mental events, be called private events. This would bring the private experiences of thinking, reasoning, feeling and such back into the scientific fold of psychology. However, Skinner proposed that these were things we are doing rather than events going on in a theorized mental place.
For Skinner, the question was not whether a mental world exists; it was whether we need to appeal to the existence of a mental world in order to explain the things going on inside our heads – just as the natural sciences ask whether we need to assume the existence of a creator in order to account for phenomena in the natural world. For Skinner, it was an error for psychologists to point to these private (mental) events as causes of behavior. Instead, he suggested that these too had to be explained through the study of how they evolve as a matter of experience. For example, we could say that a student studies because she "expects" to do better on an exam if she does. To "expect" might sound like an acceptable explanation for the behavior of studying; however, Skinner would ask why she "expects". The answer to this question would yield the true explanation of why the student is studying. To "expect" is to do something, to behave "in our head", and thus must also be explained.

The cognitive psychologist Henry Roediger pointed out that many psychologists erroneously subscribe to the stereotyped version of psychology's history presented in the first paragraph, and in doing so miss the many successful real-world applications that Skinner's analysis has generated. He also pointed to the successful rebuttal of Chomsky's review of Verbal Behavior: the evidence for the utility of Skinner's book can be seen in the abundance of actionable data it has generated, and in therapies unmatched by any modern linguistic account of language. Roediger reminded his readers that, in fact, we all measure behavior; some simply choose to make more assumptions about its origins than others. He recalls how, even as a cognitive psychologist, he has been criticised for not making more assumptions about his data. The law of parsimony tells us that when choosing an explanation for a set of data about observable behavior (the data all psychologists collect), we must be careful not to make assumptions beyond those necessary to explain the data. This is where the main division lies between modern-day behavior analysts and cognitive psychologists: not in the rejection of our private experiences, but in how these experiences are studied. Behavior analysts study them in relation to our learning history and the brain correlates of that history, and use this information to design environments that change our private experience by changing our interaction with the world. After all, it is through our interaction with our world that our private experiences evolve. This is a far cry from the mechanical stimulus-response psychology of John Watson. Academic honesty requires that we make a good-faith effort to understand what we wish to criticize.

Neuroinformatics, inspired by the natural structure of the human nervous system, tries to build neuronal structures out of artificial neurons. In addition, Neuroinformatics serves as a source of evidence for psychological models, for example models of memory. An artificial neural network “learns” words and behaves like “real” neurons in the brain. If the results of the artificial neural network are similar to the results of real memory experiments, this supports the model. In this way psychological models can be “tested”. Furthermore, this helps in building artificial neural networks which possess skills similar to humans', such as face recognition.
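The idea of an artificial neuron can be sketched in a few lines of code. The following is a minimal illustration using the classic perceptron learning rule; it is far simpler than the networks used to model memory or face recognition, and every name in it is our own:

```python
# Minimal artificial neuron (perceptron) learning the logical AND function.
# An illustrative sketch only, not a model of biological memory.

def step(x):
    # All-or-nothing "firing" of the neuron
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]   # connection weights, loosely analogous to synaptic strengths
    b = 0.0          # bias, loosely analogous to a firing threshold
    for _ in range(epochs):
        for (x1, x2), target in samples:
            out = step(w[0] * x1 + w[1] * x2 + b)
            err = target - out
            # Nudge weights in proportion to the error ("learning")
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
print([step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

The weights start at zero and are gradually adjusted until the unit reproduces the training data, a toy version of the "learning" described above.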

If more were understood about the ways humans process information, it would be much simpler to build artificial structures with the same or similar abilities. The field of cognitive development research tries to describe how children develop their cognitive abilities from infancy to adolescence. Theories of knowledge representation were at first strongly concerned with sensory inputs. Current scientists claim to have evidence that our internal representation of reality is not a one-to-one reproduction of the physical world; rather, it is stored in some abstract or neurochemical code. Tolman, Bartlett, Norman and Rumelhart conducted experiments on cognitive mapping. Here, inner knowledge seemed not only to be related to sensory input, but also to be modified by a kind of knowledge network shaped by past experience.

Newer methods, like Electroencephalography (EEG) and functional magnetic resonance imaging (fMRI) have given researchers the possibility to measure brain activity and possibly correlate it to mental states and processes. All these new approaches in the study of human cognition and psychology have defined the field of Cognitive Psychology, a very fascinating field which tries to answer what is quite possibly the most interesting question posed since the dawn of reason. There is still a lot to discover and to answer and to ask again, but first we want to make you more familiar with the concept of Cognitive Psychology.

## What is Cognitive Psychology?

The easiest answer to this question is: “Cognitive Psychology is the study of thinking and the processes underlying mental events.” Of course this creates the new problem of what a mental event actually is. There are many possible answers for this:

Let us look at Knut again to give you some more examples and make things clearer. He needs to focus on reading his paper. So all his attention is directed at the words and sentences which he perceives through his visual pathways. Other stimuli and information that enter his cognitive apparatus - maybe some street noise or the fly crawling along a window - are not that relevant at this moment and therefore receive much less attention. Many higher cognitive abilities are also subject to investigation. Knut’s situation could be explained as a classical example of problem solving: He needs to get from his present state – an unfinished assignment – to a goal state - a completed assignment - and has certain operators to achieve that goal. Both Knut’s short and long term memory are active. He needs his short term memory to integrate what he is reading with the information from earlier passages of the paper. His long term memory helps him remember what he learned in the lectures he took and what he read in other books. And of course Knut’s ability to comprehend language enables him to make sense of the letters printed on the paper and to relate the sentences in a proper way.

This situation can be considered to reflect mental events like perception, comprehension and memory storage. Some scientists think that our emotions cannot be considered separate from cognition, so that hate, love, fear or joy are also sometimes looked at as part of our individual minds. Cognitive psychologists study questions like: How do we receive information about the outside world? How do we store it and process it? How do we solve problems? How is language represented?

Cognitive Psychology is a field of psychology that studies mental processes, including perception, thinking, memory, and judgment. A mainstay of cognitive psychology is the idea that sensation and perception are distinct processes.

## Relations to Neuroscience

### Cognitive Neuropsychology

Of course it would be very convenient if we could understand the nature of cognition without understanding the nature of the brain itself. But unfortunately it is very difficult, if not impossible, to build and prove theories about our thinking in the absence of neurobiological constraints. Neuroscience comprises the study of neuroanatomy, neurophysiology, brain functions and related psychological and computer-based models. For years, investigations on a neuronal level were completely separated from those on a cognitive or psychological level. The thinking process is so vast and complex that there are too many conceivable solutions to the problem of how cognitive operations could be accomplished.

Neurobiological data provide physical evidence for a theoretical approach to the investigation of cognition. This narrows the research area and makes it much more exact. The correlation between brain pathology and behaviour supports scientists in their research. It has been known for a long time that different types of brain damage, traumas, lesions, and tumours affect behaviour and cause changes in some mental functions. The rise of new technologies allows us to see and investigate brain structures and processes never seen before. This provides us with a lot of information and material to build simulation models which help us to understand processes in our mind. As neuroscience is not always able to explain all the observations made in laboratories, neurobiologists turn towards Cognitive Psychology in order to find models of brain and behaviour on an interdisciplinary level – Cognitive Neuropsychology. This “inter-science” serves as a bridge connecting and integrating the two most important domains of research into the human mind and their methods. Research at one level provides constraints, correlations and inspirations for research at another level.

### Neuroanatomy Basics

The basic building blocks of the brain are a special sort of cells called neurons. There are approximately 100 billion neurons involved in information processing in the brain. When we look at the brain superficially, we can't see these neurons, but rather two halves called the hemispheres. The hemispheres themselves may differ in size and function, as we will see later in the book, but principally each of them can be subdivided into four parts called the lobes: the temporal, parietal, occipital and frontal lobe. This division of modern neuroscience is supported by the up- and down-bulging structure of the brain's surface. The bulges are called gyri (singular gyrus), the creases sulci (singular sulcus). They are also involved in information processing. The different tasks performed by different subdivisions of the brain, such as attention, memory and language, cannot be viewed as separate from each other; nevertheless, some parts play a key role in a specific task. For example, the parietal lobe has been shown to be responsible for orientation in space and your relation to it, while the occipital lobe is mainly responsible for visual perception and imagination, etc. Summed up, brain anatomy poses some basic constraints on what is possible for us, and a better understanding will help us find better therapies for cognitive deficits as well as guide research for cognitive psychologists. It is one goal of our book to present the complex interactions between the different levels on which the brain can be described, and their implications for Cognitive Neuropsychology.

### Methods

Newer methods, like EEG and fMRI, allow researchers to correlate the behaviour of a participant in an experiment with the brain activity which is measured simultaneously. It is possible to record neurophysiological responses to certain stimuli or to find out which brain areas are involved in the execution of certain mental tasks. EEG measures the electric potentials along the skull through electrodes that are attached to a cap. While its spatial resolution is not very precise, its temporal resolution lies within the range of milliseconds. The use of fMRI benefits from the fact that increased brain activity goes along with increased blood flow in the active region. The haemoglobin in the blood has magnetic properties that are registered by the fMRI scanner. The spatial resolution of fMRI is very precise in comparison to EEG. On the other hand, its temporal resolution is only in the range of 1–2 seconds.

## Conclusion

Remember the scenario described at the beginning of the chapter. Knut was asking himself “What is happening here?” It should have become clear that this question cannot be simply answered with one or two sentences. We have seen that the field of Cognitive Psychology comprises a lot of processes and phenomena of which every single one is subject to extensive research to understand how cognitive abilities are produced by our brain. In the following chapters of this WikiBook you will see how the different areas of research in Cognitive Psychology are trying to solve the initial question raised by Knut.

# Problem Solving from an Evolutionary Perspective

## Introduction

Same place, different day. Knut is sitting at his desk again, staring at a blank paper in front of him, while nervously playing with a pen in his right hand. Just a few hours left to hand in his essay and he has not written a word. All of a sudden he smashes his fist on the table and cries out: "I need a plan!"

That thing Knut is confronted with is something everyone of us encounters in his daily life. He has got a problem - and he does not really know how to solve it. But what exactly is a problem? Are there strategies to solve problems? These are just a few of the questions we want to answer in this chapter.

We begin our chapter by giving a short description of what psychologists regard as a problem. Afterwards we are going to present different approaches towards problem solving, starting with Gestalt psychologists and ending with modern search strategies connected to artificial intelligence. In addition we will consider how experts solve problems, and finally we will take a closer look at two topics: the neurophysiological background on the one hand, and the role that can be assigned to evolution regarding problem solving on the other.

The most basic definition is “A problem is any given situation that differs from a desired goal”. This definition is very useful for discussing problem solving in terms of evolutionary adaptation, as it allows us to understand every aspect of (human or animal) life as a problem. This includes issues like finding food in harsh winters, remembering where you left your provisions, making decisions about which way to go, learning, repeating and varying all kinds of complex movements, and so on. Though all these problems were of crucial importance during the evolutionary process that created us the way we are, they are by no means solved exclusively by humans. We find a most amazing variety of different solutions to these problems in nature (just consider, e.g., by which means a bat hunts its prey, compared to a spider).

For this essay we will mainly focus on those problems that are not solved by animals or evolution, that is, all kinds of abstract problems (e.g. playing chess). Furthermore, we will not consider those situations as problems that have an obvious solution: Imagine Knut decides to take a sip of coffee from the mug next to his right hand. He does not even have to think about how to do this. This is not because the situation itself is trivial (a robot capable of recognising the mug, deciding whether it is full, then grabbing it and moving it to Knut’s mouth would be a highly complex machine) but because, in the context of all possible situations, it is so trivial that it no longer is a problem our consciousness needs to be bothered with. The problems we will discuss in the following all need some conscious effort, though some seem to be solved without us being able to say how exactly we got to the solution. Still we will find that often the strategies we use to solve these problems are applicable to more basic problems, too.

Non-trivial, abstract problems can be divided into two groups:

### Well-defined Problems

For many abstract problems it is possible to find an algorithmic solution. We call a problem well-defined if it can be properly formalised, which comes along with the following properties:

• The problem has a clearly defined given state. This might be the line-up of a chess game, a given formula you have to solve, or the set-up of the towers of Hanoi game (which we will discuss later).
• There is a finite set of operators, that is, of rules you may apply to the given state. For the chess game, e.g., these would be the rules that tell you which piece you may move to which position.
• Finally, the problem has a clear goal state: The equation is solved for x, all discs are moved to the right stack, or the other player is in checkmate.

Not surprisingly, a problem that fulfils these requirements can be implemented algorithmically (see also convergent thinking). Therefore many well-defined problems, like chess, can be solved very effectively by computers.
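The three properties above map directly onto code. The following is a minimal sketch, using the three-disc Tower of Hanoi (mentioned above) as the example: a given state, a finite set of operators, a clear goal state, and a breadth-first search over the resulting state space. The function names are our own:

```python
from collections import deque

# Breadth-first search over the state space of the Tower of Hanoi
# (3 discs, 3 pegs): a well-defined problem with a given state,
# a finite set of operators and a clear goal state.

start = ((3, 2, 1), (), ())   # given state: all discs on the left peg, largest at bottom
goal = ((), (), (3, 2, 1))    # goal state: all discs on the right peg

def moves(state):
    # Operators: move a top disc onto an empty peg or onto a larger disc.
    for i, src in enumerate(state):
        if not src:
            continue
        for j, dst in enumerate(state):
            if i != j and (not dst or dst[-1] > src[-1]):
                new = list(state)
                new[i] = src[:-1]
                new[j] = dst + (src[-1],)
                yield tuple(new)

def solve(start, goal):
    # Explore states level by level; the first time we reach the goal,
    # the depth equals the length of the shortest solution.
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return depth
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, depth + 1))

print(solve(start, goal))  # 7, i.e. 2**3 - 1 moves
```

Because the problem is fully formalised, the computer needs no insight at all; it simply enumerates states until it hits the goal.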

### Ill-defined Problems

Though many problems can be properly formalised (sometimes only if we accept an enormous complexity), there are still others where this is not the case. Good examples are all kinds of tasks that involve creativity and, generally speaking, all problems for which it is not possible to clearly define a given state and a goal state: Formalising a problem of the kind “Please paint a beautiful picture” may be impossible. Still this is a problem most people would be able to approach in one way or another, even if the result may be totally different from person to person. And while Knut might judge that picture X is gorgeous, you might completely disagree.

Nevertheless, ill-defined problems often involve sub-problems that are perfectly well-defined. On the other hand, many everyday problems that seem completely well-defined involve, when examined in detail, a good deal of creativity and ambiguity.

If we think of Knut's fairly ill-defined task of writing an essay, he will not be able to complete it without first understanding the text he has to write about. This step is the first subgoal Knut has to solve.

## Restructuring - The Gestalt Approach

One dominant approach to Problem Solving originated from Gestalt psychologists in the 1920s. Their understanding of problem solving emphasises behaviour in situations requiring relatively novel means of attaining goals and suggests that problem solving involves a process called restructuring. Since this indicates a perceptual approach, two main questions have to be considered:

• How is a problem represented in a person's mind?
• How does solving this problem involve a reorganisation or restructuring of this representation?

This is what we are going to do in the following part of this section.

### How is a problem represented in the mind?

In current research, internal and external representations are distinguished: the former are regarded as the knowledge and structure of memory, while the latter are defined as the knowledge and structure of the environment, such as physical objects or symbols whose information can be picked up and processed by the perceptual system autonomously. In contrast, the information in internal representations has to be retrieved by cognitive processes.

Generally speaking, problem representations are models of the situation as experienced by the agent. Representing a problem means to analyse it and split it into separate components:

• objects, predicates
• state space
• operators
• selection criteria

Therefore the efficiency of Problem Solving depends on the underlying representations in a person’s mind, which usually also involve personal aspects. Analysing the problem domain according to different dimensions, i.e., changing from one representation to another, leads to a new understanding of the problem. This is basically what is described as restructuring. The following example illustrates this:

Two boys of different ages are playing badminton. The older one is the more skilled player, so the outcome of their usual matches is predictable. After some time and several defeats the younger boy finally loses interest in playing, and the older boy faces a problem, namely that he has no one to play with anymore.
The usual options, according to M. Wertheimer (1945/82), at this point of the story range from 'offering candy' and 'playing another game' to 'not playing to full ability' and 'shaming the younger boy into playing'. All those strategies aim at making the younger boy stay.
And this is what the older boy comes up with: He proposes that they should try to keep the bird in play as long as possible. Thus they change from a game of competition to one of cooperation. They would start with easy shots and make them harder as their success increases, counting the number of consecutive hits. The proposal is happily accepted and the game is on again.

The key to this story is that the older boy restructured the problem, realising that his attitude towards the younger boy made it difficult to keep him playing. With the new type of game the problem is solved: the older boy is not bored, and the younger one is not frustrated.

New representations can make a problem either more difficult or much easier to solve. The latter case seems related to insight – the sudden realisation of a problem’s solution.

### Insight

There are two very different ways of approaching a goal-oriented situation. In one case an organism readily reproduces the response to the given problem from past experience. This is called reproductive thinking.

The second way requires something new and different to achieve the goal; prior learning is of little help here. Such productive thinking is (sometimes) argued to involve insight. Gestalt psychologists even state that insight problems are a separate category of problems in their own right.

Tasks that might involve insight usually share certain features: they require something new and non-obvious to be done, and in most cases they are difficult enough that the initial solution attempt is likely to be unsuccessful. When you solve a problem of this kind you often have a so-called "aha experience" – the solution pops up all of a sudden. At one moment you have no idea of the answer to the problem and do not even feel you are making any progress trying out different ideas, but the next moment the problem is solved.

For all those readers who would like to experience such an effect, here is an example of an insight problem: Knut is given four pieces of a chain, each made up of three links. The task is to join them all into a single closed loop, and he has only 15 cents: opening a link costs 2 cents and closing a link costs 3 cents. What should Knut do?

If you want to know the correct solution, see the accompanying image.

To show that solving insight problems involves restructuring, psychologists created a number of problems that were more difficult to solve for participants with previous experience, since it was harder for them to change the representation of the given situation (see Fixation). Sometimes hints may lead to the insight required to solve the problem, and this is true even of involuntarily given ones. For instance, it might help you in a memory game if someone accidentally drops a card on the floor and you see its other side. Although such help is not an obvious hint, its effect does not differ from that of intended help.

For non-insight problems the opposite is the case. Solving arithmetical problems, for instance, requires schemas, through which one can get to the solution step by step.

### Fixation

Sometimes, previous experience or familiarity can even make problem solving more difficult. This is the case whenever habitual directions get in the way of finding new directions – an effect called fixation.

#### Functional fixedness

Functional fixedness concerns the solution of object-use problems. The basic idea is that when the usual way of using an object is emphasised, it will be far more difficult for a person to use that object in a novel manner. An example of this effect is the candle problem: Imagine you are given a box of matches, some candles and tacks. On the wall of the room there is a cork-board. Your task is to fix a candle to the cork-board in such a way that no wax will drop on the floor when the candle is lit. – Got an idea?

Explanation: The key is the following: when people are confronted with a problem and given certain objects to solve it, it is difficult for them to figure out that they could use these objects in a different (less familiar or obvious) way. In this example the box has to be recognised as a support rather than as a container.

A further example is the two-string problem: Knut is left in a room with a chair and a pair of pliers, and given the task of tying together two strings that are hanging from the ceiling. The problem he faces is that he can never reach both strings at once because they are too far away from each other. What can Knut do?

Solution: Knut has to recognise that he can use the pliers in a novel function – as the weight of a pendulum. He can tie them to one of the strings, push it away, hold the other string and simply wait for the first one to swing towards him. If necessary, Knut can even climb on the chair, but he is not that small, we suppose . . .

#### Mental fixedness

Functional fixedness as involved in the examples above illustrates a mental set - a person’s tendency to respond to a given task in a manner based on past experience. Because Knut maps an object to a particular function, he has difficulty varying the way he uses it (pliers as a pendulum's weight).

One approach to studying fixation used wrong-answer verbal insight problems. It was shown that, when failing to solve a problem, people tend to give an incorrect answer rather than no answer at all.

A typical example: People are told that on a lake the area covered by water lilies doubles every 24 hours and that it takes 60 days to cover the whole lake. Then they are asked how many days it takes to cover half the lake. The typical response is '30 days' (whereas 59 days is correct).
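The doubling structure of the lily problem can be made explicit with a few lines of code. This is an illustrative sketch, not part of the original study; the function name `coverage` is our own.

```python
# Lily-pad coverage doubles every 24 hours and reaches full coverage (1.0)
# on day 60. Working backwards, each earlier day halves the area, so the
# lake is half covered on day 59 - not on day 30, as intuition suggests.

def coverage(day, full_day=60):
    """Fraction of the lake covered on a given day, assuming daily doubling."""
    return 2.0 ** (day - full_day)

print(coverage(60))  # 1.0  (whole lake)
print(coverage(59))  # 0.5  (half the lake, one doubling before day 60)
print(coverage(30))  # a vanishingly small fraction, nowhere near half
```

Running the check makes the fixation visible: the intuitive "half the time" answer of 30 days corresponds to a lake that is almost entirely uncovered.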

These wrong solutions are due to an inaccurate interpretation, and hence representation, of the problem. This can happen through sloppiness (a quick, shallow reading of the problem and/or weak monitoring of the efforts made to come to a solution). In this case error feedback should help people to reconsider the problem features, notice the inadequacy of their first answer, and find the correct solution. If, however, people are truly fixated on their incorrect representation, being told the answer is wrong does not help. In a study by P. Dallob and R.L. Dominowski, these two possibilities were contrasted. In approximately one third of the cases error feedback led to right answers, so only about one third of the wrong answers were due to inadequate monitoring.[1]

Another approach is the study of examples with and without a preceding analogous task. In cases like the water-jug task, analogous thinking indeed leads to a correct solution, but taking a different route might make the task much simpler:

Imagine Knut again. This time he is given three jugs with different capacities and is asked to measure out a required amount of water. Of course he is not allowed to use anything besides the jugs and as much water as he likes. In the first case the sizes are 127 litres, 21 litres and 3 litres, while 100 litres are desired.
In the second case Knut is asked to measure 18 litres from jugs of 39, 15 and 3 litres.

In fact, participants who were faced with the 100 litre task first chose the same complicated strategy to solve the second one. Others, who did not know about the complex task, solved the 18 litre case by just adding 3 litres to 15.
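The arithmetic behind the two jug tasks can be sketched directly. This is our own illustrative check, with hypothetical function names; it shows why the mental set arises: the roundabout formula works for both cases, while the simple one only works for the second.

```python
def complicated(b, a, c):
    """Roundabout route: fill the big jug, pour off the middle jug once
    and the small jug twice (b - a - 2c)."""
    return b - a - 2 * c

def simple(a, c):
    """Direct route: just add the middle and the small jug."""
    return a + c

# Case 1: jugs of 127, 21 and 3 litres, target 100 litres
print(complicated(127, 21, 3))  # 100; the direct route fails here (21 + 3 = 24)

# Case 2: jugs of 39, 15 and 3 litres, target 18 litres
print(complicated(39, 15, 3))   # 18; the trained, complicated route still works
print(simple(15, 3))            # 18; but the direct route is much shorter
```

Because the first task rewards only the complicated formula, participants keep applying it to the second task even though the direct route is available.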

## Problem Solving as a Search Problem

The idea of regarding problem solving as a search problem originated from Alan Newell and Herbert Simon while trying to design computer programs which could solve certain problems. This led them to develop a program called General Problem Solver which was able to solve any well-defined problem by creating heuristics on the basis of the user's input. This input consisted of objects and operations that could be done on them.

As we already know, every problem is composed of an initial state, intermediate states and a goal state (also: desired or final state), while the initial and goal states characterise the situations before and after solving the problem. The intermediate states describe any possible situation between initial and goal state. The set of operators builds up the transitions between the states. A solution is defined as the sequence of operators which leads from the initial state across intermediate states to the goal state.

The simplest method to solve a problem, defined in these terms, is to search for a solution by just trying one possibility after another (also called trial and error).
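The state-space framing above can be sketched as a small breadth-first search. This is an illustrative example of ours, not Newell and Simon's program; the water jugs serve as the state space, and `jug_search` and its operator labels are our own names.

```python
from collections import deque

def jug_search(capacities, target):
    """Trial-and-error search over water-jug states.

    A state is a tuple of current fill levels. The operators are: fill a
    jug, empty a jug, or pour one jug into another until the source is
    empty or the destination is full. Breadth-first search returns a
    shortest operator sequence that leaves `target` litres in some jug.
    """
    start = tuple(0 for _ in capacities)
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        state, path = queue.popleft()
        if target in state:
            return path
        successors = []
        n = len(state)
        for i in range(n):
            successors.append((f"fill jug {i}", state[:i] + (capacities[i],) + state[i+1:]))
            successors.append((f"empty jug {i}", state[:i] + (0,) + state[i+1:]))
            for j in range(n):
                if i != j and state[i] > 0:
                    amount = min(state[i], capacities[j] - state[j])
                    poured = list(state)
                    poured[i] -= amount
                    poured[j] += amount
                    successors.append((f"pour jug {i} into jug {j}", tuple(poured)))
        for op, nxt in successors:
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [op]))
    return None

# Knut's second task: measure 18 litres with jugs of 39, 15 and 3 litres
print(jug_search((39, 15, 3), 18))
```

The search blindly tries one operator after another, which is exactly why it only works for well-defined problems: initial state, goal test and operators must all be formalisable.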

As already mentioned above, an organised search, following a specific strategy, might not be helpful for finding a solution to some ill-defined problem, since it is impossible to formalise such problems in a way that a search algorithm can find a solution.

As an example, take Knut and his essay: he has to find out his own opinion and formulate it, and he has to make sure he understands the source texts. But there are no predefined operators he can use; there is no recipe for how to arrive at an opinion, let alone for how to write it down.

### Means-End Analysis

In Means-End Analysis you try to reduce the difference between the initial state and the goal state by creating subgoals until a subgoal can be reached directly (you probably know several examples of recursion, which works on this basis).

An example of a problem that can be solved by Means-End Analysis is the “Towers of Hanoi”:

Towers of Hanoi – a well-defined problem

The initial state of this problem is described by the different sized discs being stacked in order of size on the first of three pegs (the “start peg”). The goal state is described by these discs being stacked on the third peg (the “end peg”) in exactly the same order.

There are three operators:

• You are allowed to move one single disc from one peg to another
• You may only move a disc that is on top of a stack
• A disc may never be put onto a smaller one.

In order to use Means-End Analysis we have to create subgoals. One possible way of doing this is described in the picture:

1. Moving the discs lying on the biggest one onto the second peg.

2. Shifting the biggest disc to the third peg.

3. Moving the other ones onto the third peg, too

You can apply this strategy again and again in order to reduce the problem to the case where you only have to move a single disc – which is then something you are allowed to do.

Strategies of this kind can easily be formulated for a computer; the respective algorithm for the Towers of Hanoi would look like this:

1. move n-1 discs from A to B

2. move disc #n from A to C

3. move n-1 discs from B to C

where n is the total number of discs, A is the first peg, B the second and C the third. With each recursive call the problem is reduced by one disc.
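The three-step recursive scheme above translates directly into a short program; this is a minimal sketch of that scheme, with the peg names A, B and C taken from the text.

```python
def hanoi(n, source="A", helper="B", target="C"):
    """Move n discs from source to target, printing one legal move per line."""
    if n == 0:
        return
    hanoi(n - 1, source, target, helper)                 # 1. move n-1 discs from A to B
    print(f"move disc #{n} from {source} to {target}")   # 2. move disc #n from A to C
    hanoi(n - 1, helper, source, target)                 # 3. move n-1 discs from B to C

hanoi(3)  # solves the three-disc tower in 2**3 - 1 = 7 moves
```

Each recursive call restates the same problem with one disc fewer, until moving a single disc is a directly applicable operator – exactly the subgoal reduction that Means-End Analysis describes.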

Means-End Analysis is important for solving everyday problems, like finding the right train connection: first of all, you have to figure out where you catch the first train and where you want to arrive. Then you have to look for possible changes, in case you do not get a direct connection. Third, you have to figure out the best times of departure and arrival, on which platforms you leave and arrive, and make it all fit together.

### Analogies

Analogies describe similar structures and interconnect them to clarify and explain certain relations. In a recent study, for example, a song that got stuck in your head is compared to an itching of the brain that can only be scratched by repeating the song over and over again.

### Restructuring by Using Analogies

One special kind of restructuring, the way already mentioned during the discussion of the Gestalt approach, is analogical problem solving. Here, to find a solution to one problem - the so called target problem, an analogous solution to another problem - the source problem, is presented.

An example for this kind of strategy is the radiation problem posed by K. Duncker in 1945:

As a doctor you have to treat a patient with a malignant, inoperable tumour, buried deep inside the body. There exists a special kind of ray which is perfectly harmless at low intensity, but at sufficiently high intensity destroys the tumour – as well as the healthy tissue on its way to it. What can be done to avoid the latter?

When participants in an experiment were asked this question, most of them could not come up with the appropriate answer. Then they were told a story that went something like this:

A General wanted to capture his enemy's fortress. He gathered a large army to launch a full-scale direct attack, but then learned that all the roads leading directly towards the fortress were blocked by mines. These roadblocks were designed in such a way that it was possible for small groups of the fortress-owner's men to pass them safely, but any large group of men would inevitably set them off. Now the General figured out the following plan: he divided his troops into several smaller groups and made each of them march down a different road, timed in such a way that the entire army would reunite exactly when reaching the fortress and could attack at full strength.

Here, the story about the General is the source problem, and the radiation problem is the target problem. The fortress is analogous to the tumour and the big army corresponds to the highly intensive ray. Consequently a small group of soldiers represents a ray at low intensity. The solution to the problem is to split the ray up, as the general did with his army, and send the now harmless rays towards the tumour from different angles in such a way that they all meet when reaching it. No healthy tissue is damaged but the tumour itself gets destroyed by the ray at its full intensity.

M. Gick and K. Holyoak presented Duncker's radiation problem to groups of participants in 1980 and 1983. Only 10 percent of them were able to solve the problem right away; 30 percent could solve it when they had read the story of the General beforehand. After being given an additional hint – to use the story as help – 75 percent of them solved the problem.

With these results, Gick and Holyoak concluded that analogical problem solving depends on three steps:

1. Noticing that an analogical connection exists between the source and the target problem.
2. Mapping corresponding parts of the two problems onto each other (fortress → tumour, army → ray, etc.)
3. Applying the mapping to generate a parallel solution to the target problem (using little groups of soldiers approaching from different directions → sending several weaker rays from different directions)

Next, Gick and Holyoak started looking for factors that could help with the noticing and the mapping steps, for example discovering the basic linking concept behind the source and the target problem.


#### Schema

The concept that links the target problem with the analogy (the “source problem”) is called a problem schema. Gick and Holyoak induced the activation of a schema in their participants by giving them two stories and asking them to compare and summarise them. This activation of problem schemata is called “schema induction”.

The two presented texts were picked from six stories which describe analogical problems and their solution. One of these stories was “The General” (remember the example in Chapter 4.1).

After solving that task the participants were asked to solve the radiation problem (see Chapter 4.2). The experiment showed that reading two stories with analogical problems is more helpful for solving the target problem than reading only one: after reading two stories, 52% of the participants were able to solve the radiation problem (as noted in Chapter 4.2, only 30% were able to solve it after reading a single story, namely “The General”).

Gick and Holyoak found that the quality of the schema a participant developed differed. They classified the schemata into three groups:

• Good schemata: The participant recognised that the same underlying concept was used to solve both problems (21% of the participants created a good schema, and 91% of them were able to solve the radiation problem).
• Intermediate schemata: The participant figured out the common root of the matter (here: many small forces together solved the problem) (20% created one; 40% of them found the right solution).
• Poor schemata: These were hardly related to the target problem. In many poor schemata the participant only detected that the hero of the story was rewarded for his efforts (59% created one; 30% of them found the right solution).

The process of applying a schema or analogy to a novel situation is called transfer: one can use a familiar strategy to solve problems of a new kind.

Creating a good schema and finally arriving at a solution is a problem-solving skill that requires practice and some background knowledge.

## How do Experts Solve Problems?

With the term expert we describe someone who devotes large amounts of his or her time and energy to one specific field of interest, in which he subsequently reaches a certain level of mastery. It should come as no surprise that experts tend to be better at solving problems in their field than novices (people who are beginners or not as well trained in the field) are. They are faster at coming up with solutions and have a higher success rate of right solutions. But what is the difference between how experts and non-experts solve problems? Research on the nature of expertise has come to the following conclusions:

• Experts know more about their field,
• their knowledge is organised differently, and
• they spend more time analysing the problem.

When it comes to problems that are situated outside the experts' field, their performance often does not differ from that of novices.

Knowledge: An experiment by Chase and Simon (1973a, b) dealt with the question of how well experts and novices can reproduce positions of chess pieces on a chessboard after seeing them only briefly. The results showed that experts were far better at reproducing actual game positions, but that their performance was comparable to that of novices when the chess pieces were arranged randomly on the board. Chase and Simon concluded that the superior performance on actual game positions was due to the ability to recognise familiar patterns: a chess expert has up to 50,000 patterns stored in his memory. In comparison, a good player might know about 1,000 patterns by heart, and a novice only a few, if any. This very detailed knowledge is of crucial help when an expert is confronted with a new problem in his field. Still, it is not sheer size of knowledge that makes an expert more successful; experts also organise their knowledge quite differently from novices.

Organisation: In 1982, M. Chi and her co-workers took a set of 24 physics problems and presented them to a group of physics professors as well as to a group of students with only one semester of physics. The task was to group the problems based on their similarities. As it turned out, the students tended to group the problems by their surface structure (similarities of the objects used in the problem, e.g. the sketches illustrating it), whereas the professors used the deep structure (the general physical principles underlying the problems) as their criterion. By recognising the actual structure of a problem, experts are able to connect the given task to relevant knowledge they already have (e.g. another problem they solved earlier which required the same strategy).

Analysis: Experts often spend more time analysing a problem before actually trying to solve it. This way of approaching a problem may often look like a slow start, but in the long run the strategy is much more effective. A novice, on the other hand, might start working on the problem right away, but often has to realise that he runs into dead ends because he chose a wrong path at the very beginning.

## Creative Cognition

We have already introduced many ways to solve a problem, mainly strategies that can be used to find the “correct” answer. But there are also problems which do not require a “right answer” to be given – it is time for creative productivity!

Imagine you are given three objects – your task is to invent a completely new object that is related to nothing you know. Then try to describe its function and how it could additionally be used. Difficult? Well, you are free to think creatively and will not be at risk to give an incorrect answer. For example think of what can be constructed from a half-sphere, wire and a handle. The result is amazing: a lawn lounger, global earrings, a sled, a water weigher, a portable agitator, ... [2]

### Divergent Thinking

The term divergent thinking describes a way of thinking that does not lead to one goal, but is open-ended. Problems that are solved this way can have a large number of potential 'solutions' of which none is exactly 'right' or 'wrong', though some might be more suitable than others.

Solving a problem like this involves indirect and productive thinking and is most helpful when somebody faces an ill-defined problem, i.e. when either the initial state or the goal state cannot be stated clearly and the operators are either insufficient or not given at all.

The process of divergent thinking is often associated with creativity, and it undoubtedly leads to many creative ideas. Nevertheless, research has shown that there is only a modest correlation between performance on divergent thinking tasks and other measures of creativity. Additionally, it was found that processes resulting in original and practical inventions also heavily involve searching for solutions, being aware of structures and looking for analogies.

Thus, divergent thinking alone is not a sufficient tool for making an invention. You also need to analyse the problem in order to make the suggested solution (the invention) appropriate.


### Convergent Thinking

Convergent thinking is a problem-solving technique that brings together different ideas or fields to find a single solution. The focus of this mindset is on speed, logic and accuracy: identifying facts, reapplying existing techniques and gathering information. Its most important characteristic is that there is only one correct answer, so an answer is either right or wrong. This type of thinking is associated with science and standard procedures; it underlies logical reasoning, memorising patterns and solving well-defined problems, and most school subjects sharpen it.

Research shows that the creative process involves both types of thinking, but experts recommend not mixing the two in one session. For example, a team might spend 30 minutes brainstorming new ideas (divergent thinking), during which all ideas are only recorded, not judged; only afterwards are the ideas analysed and decisions made (convergent thinking). Research also suggests that the two modes affect mood differently: convergent thinking tends to create a negative mood, whereas divergent thinking creates a positive one. A study by J.A. Horne in 1988 indicated that lack of sleep strongly impairs performance on divergent thinking tasks, whereas convergent thinking is more likely to be unaffected. Practise both types of thinking so that you can use them in balance at the right times.

## Neurophysiological Background

Presenting neurophysiology in its entirety would be enough to fill several books. Fortunately, we do not have to concern ourselves with most of these facts; let us just focus on the aspects that are really relevant to problem solving. Nevertheless, this topic is quite complex, and problem solving cannot be attributed to one single brain area. Rather, systems of several brain areas work together to perform a specific task. This is best shown by an example:

In 1994 Paolo Nichelli and coworkers used the method of PET (Positron Emission Tomography), to localise certain brain areas, which are involved in solving various chess problems. In the following table you can see which brain area was active during a specific task:
| Task | Active brain area |
|------|-------------------|
| Identifying chess pieces | Pathway from occipital to temporal lobe (also called the "what"-pathway of visual processing) |
| Determining the location of pieces | Pathway from occipital to parietal lobe (also called the "where"-pathway of visual processing) |
| Thinking about making a move | Premotor area |
| Remembering a piece's move | Hippocampus (forming new memories) |
| Planning and executing strategies | Prefrontal cortex |

Lobes of the Brain

One of the key tasks, namely planning and executing strategies, is performed by a brain area which also plays an important role for several other tasks correlated with problem solving - the prefrontal cortex (PFC). This can be made clear if you take a look at several examples of damages to the PFC and their effects on the ability to solve problems.
Patients with a lesion in this brain area have difficulty switching from one behavioural pattern to another. A well-known example is the Wisconsin Card Sorting Task: a patient with a PFC lesion who is told to separate all blue cards from a deck will continue sorting out the blue ones even after the experimenter tells him to sort out all brown cards. Transferred to a more complex problem, this person would most likely fail, because he is not flexible enough to change his strategy after running into a dead end.
Another example is that of a young homemaker who had a tumour in the frontal lobe. Even though she was able to cook individual dishes, preparing a whole family meal was an infeasible task for her.

As the examples above illustrate, the structure of our brain seems to be of great importance regarding problem solving, i.e. cognitive life. But how was our cognitive apparatus designed? How did perception-action integration as a central species specific property come about?

## The Evolutionary Perspective

Charles Darwin developed the evolutionary theory which was primarily meant to explain why there are so many different kinds of species. This theory is also important for psychology because it explains how species were designed by evolutionary forces and what their goals are. By knowing the goals of species it is possible to explain and predict their behaviour.

The process of evolution involves several components, for instance natural selection – a feedback process that 'chooses' among 'alternative designs' on the basis of how well each variation performs. The result of natural selection is adaptation, a process that constantly tests the variations among individuals against the environment. If adaptations are useful, they get passed on; if not, they remain an unimportant variation.

Another component of the evolutionary process is sexual selection, i.e. the enhancement of certain sex characteristics which give individuals an advantage in competing with members of the same sex, or an increased ability to attract members of the opposite sex.

Altruism is a further component of the evolutionary process, which will be explained in more detail in the following chapter Evolutionary Perspective on Social Cognitions.

## Summary and Conclusion

After Knut read this WikiChapter he was relieved that he had not wasted his time on the essay – quite the opposite! He now has a new view on problem solving and recognises his problem as a well-defined one:

His initial state was a blank sheet of paper without any philosophical sentences on it. The goal state was right in front of his mind's eye: him – grinning broadly – handing in the essay with some carefully developed arguments.

He decides to use the technique of Means-End Analysis and creates several subgoals:

1. Summarise parts of the text
2. Develop an argumentative structure
3. Write the essay
4. Look for typos

Right after he hands in his essay Knut will go on reading this WikiBook. He now looks forward to turning the page over and to discovering the next chapter...

## References

1. Dominowski, R.L. & Dallob, P. (1995). Insight and Problem Solving. In R.J. Sternberg & J.E. Davidson (Eds.), The Nature of Insight (pp. 33–62). Cambridge, MA: MIT Press.
2. Goldstein, E.B. (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Belmont: Thomson Wadsworth.

# Evolutionary Perspective on Social Cognitions

## Introduction

Why do we live in cities? Why do we often choose to work together? Why do we enjoy sharing our spare time with others? These are questions of Social Cognition and its evolutionary development.

The term Social Cognition describes all abilities necessary to act adequately in a social system. Basically, it is the study of how we process social information, especially its storage, retrieval and application to social situations. Social Cognition is a common skill among various species.

In the following, the focus will be on Social Cognition as a human skill. Important concepts and the development during childhood will be explained. Having built up a conceptional basis for the term, we will then take a look at this skill from an evolutionary perspective and present the common theories on the origin of Social Cognition.

The publication of Michael Tomasello et al. in the journal Behavioral and Brain Sciences (2005) [1] will serve as a basis for this chapter.

## Social Cognition

### The human faculty of Social Cognition

Playing football as a complex social activity

Humans are by far the most talented species in reading the minds of others. That means we are able to successfully predict what other humans perceive, intend, believe, know or desire. Among these abilities, understanding the intention of others is crucial. It allows us to resolve possible ambiguities of physical actions. For example, if you were to see someone breaking a car window, you would probably assume he was trying to steal a stranger’s car. He would need to be judged differently if he had lost his car keys and it was his own car that he was trying to break into. Humans also collaborate and interact culturally. We perform complex collaborative activities, like building a house together or playing football as a team. Over time this led to powerful concepts of organizational levels like societies and states. The reason for this intense development can be traced back to the concept of Shared Intentionality.

### Shared Intentionality

An intentional action is an organism’s intelligent behavioural interaction with its environment towards a certain goal state. This is the concept of Problem Solving, which was already described in the previous chapter.

The social interaction of agents who understand each other as acting intentionally in an environment causes the emergence of Shared Intentionality. This means that the agents work together towards a shared goal in collaborative interaction, in coordinated action roles and with mutual knowledge about one another. The nature of the activity or its complexity is not important, as long as the action is carried out in the described fashion. It is important to mention that the notion of a shared goal means that the internal goals of each agent include the intentions of the others. This can easily be misinterpreted. Take, for example, a group of apes on a hunt. They appear to be acting in a collaborative way; however, it is reasonable to assume that they do not have coordinated action roles or a shared goal – they could just be acting towards the same individual goal. Summing up, the important characteristics of the behaviour in question are that the agents are mutually responsive, have the goal of achieving something together, and coordinate their actions with distributed roles and action plans.

The strictly human faculty to participate in collaborative actions that involve shared goals and socially coordinated action plans is also called Joint Intention. This requires an understanding of the goals and perceptions of the other agents involved, as well as sharing and communicating these, which again seems to be a strictly human behaviour. Due to our special motivation to share psychological states, we also need certain complex cognitive representations. These representations are called dialogic cognitive representations, because their content is mostly social engagement. This is especially important for the concept of joint intentions, since we need a representation not only of our own action plan, but also of our partner's plan. Joint Intentions are an essential part of Shared Intentionality.

Dialogic cognitive representations are closely related with the communication and use of linguistic symbols. They allow in some sense a form of collective intentionality, which is important to construct social norms, conceptualize beliefs and, most importantly, share them. In complex social groups the repeated sharing of intentions in a particular interactive context leads to the creation of habitual social practices and beliefs. That may form normative or structural aspects of a society, like government, money, marriage, etc. Society might hence be seen as a product and an indicator of Social Cognition.

The social interaction that builds ground for activities involving Shared Intentionality is proposed to be divided into three groups:

• Dyadic engagement: The simple sharing of emotions and behaviour, by means of interaction and direct mutual response between agents. Dyadic interaction between human infants and adults is called protoconversation: turn-taking sequences of touching, facial expressions and vocalisations. The exchange of emotions is the most important outcome of this interaction.
• Triadic engagement: Two agents act together towards a shared goal, while monitoring the perception and goal-direction of the other agent. They focus on the same problem and coordinate their actions respectively, which makes it possible to predict following events.
• Collaborative engagement: The combination of Joint Intentions and attention. At this point, the agents share a goal and act in complementary roles with a complex action plan and mutual knowledge about the selective attention and the intentions of one another. The latter aspect allows the agents to assist each other and reverse or take over roles.

These different levels of social engagement require the understanding of different aspects of intentional action, as introduced above, and presuppose the motivation to share psychological states with each other.

### Development of Social Cognition during childhood

Children making social experiences

A crucial point for Social Cognition is the comprehension of intentional action. Children's understanding of intentional action can basically be divided into three groups, each representing a more complex level of grasp.

1. The first one to be mentioned is the identification of animate action. This means that after a couple of months, babies can differentiate between motion that was caused by some external influence and actions that an organism has performed by itself, as an animate being. At this stage, however, the child does not yet have any understanding of potential goals the observed actor might have, so it is still incapable of predicting the behaviour of others.
2. The next stage of comprehension includes the understanding that the organism acts with persistence towards achieving a goal. Children can now distinguish accidental incidents from intentional actions and failed from successful attempts. This ability develops after about 9 months. With this new perspective, the child also learns that the person it observes has a certain perception – thus a certain amount of behaviour prediction becomes possible. This is an essential difference between the first and the second stage.
3. After around 14 months of age, children fully comprehend intentional action and the basics of rational decision making. They realise that an actor pursuing a goal may have a variety of action plans to achieve it, and is choosing between them. Furthermore, a certain sense for the selective attention of an agent develops. This allows a broad variety of predictions of behaviour in a certain environment. In addition, children acquire the skill of cultural learning: when they observe how an individual successfully reaches a goal, they memorise the procedure and can use the same methods to reach their own goals. This is called imitative learning, which turns out to be an extremely powerful tool. By applying this technique, children also learn how things are conventionally done in their culture.

## Evolutionary perspective on Social Cognition

So far we have discussed what Social Cognition is about. But how could this behaviour develop during evolution? At first glance, Darwin's theory of the survival of the fittest does not support the development of social behaviour: caring for others, and not just for oneself, seems to decrease fitness. Nevertheless, various theories have been formulated which try to explain Social Cognition from an evolutionary perspective. We will present three influential theories as described by Steven Gaulin and Donald McBurney.[2]

### Group Selection

Moai at Rano Raraku

Vero Wynne-Edwards first proposed this theory in the 1960s. From an evolutionary perspective, a group is a number of individuals who affect each other's fitness. Group Selection means that if an individual benefits its group, the group is more likely to survive and pass on its predispositions to the next generation, which in turn improves the individual's chance of spreading its genetic material. In this theory, a social organism is therefore more likely to spread its genes than a selfish organism. The distinction from the classical theory of evolution is that not only the fittest individuals but also the fittest groups are likely to survive.

An example would be the history of the Rapa Nui, the natives of Easter Island, who handled their resources extremely wastefully in order to build giant heads made of stone. After a while, every tree on the island had been felled, because the trunks were needed to transport the stones. The resulting lack of food led to the breakdown of their civilization.

A society that handled its resources more moderately and providently would not have met such a fate. However, if both societies had lived on one island, the second group would not have been able to survive, because it would not have been able to hold on to its resources.

This indicates the problem with Group Selection: it requires particular circumstances to describe things properly. Additionally, any theory about groups should account for migration. In this simple form, the theory cannot handle selfish behaviour of some agents in altruistic groups: altruistic groups that include selfish members would turn into purely selfish ones over time, because altruistic agents would work for selfish agents, increasing the cheaters' fitness while decreasing their own. Thus, Group Selection may not be a sufficient explanation for the development of Social Cognition.

### Kin Selection

Since altruistic populations are vulnerable to cheaters, there must exist a mechanism that allows altruism to be maintained by natural selection. The Kin Selection approach provides an explanation of how altruistic genes can spread without being eliminated by selfish behaviour. The theory was developed by William D. Hamilton and John Maynard Smith in 1964.[3] The basic principle of Kin Selection is to benefit somebody who is genetically related, for example by sharing food. For the altruistic individual, this means a reduction of its own fitness in favour of the fitness of its relative. However, the closer the recipient is related to the altruist, the more likely the recipient shares the altruistic genes. The loss of fitness can be compensated, since the genes of the altruistically behaving agent then have the chance to be spread indirectly through the recipient: the relative might be able to reproduce and pass the altruistic genes on to the next generation.

In principle, the disadvantage for the giver should always be less than the increased fitness of the recipient. This relation between costs and benefit is expressed by Hamilton's rule, which additionally takes the relatedness of altruist and recipient into account:

${\displaystyle r\cdot b>c}$

Ant colonies provide evidence for Kin Selection

where

r is the genetic relatedness between altruist and recipient (a coefficient between zero and one),
b is the reproductive benefit, i.e. the increased fitness of the recipient, and
c is the altruist's reproductive cost, i.e. the reduction of his fitness caused by the performed action.

If the product of relatedness and benefit outweighs the costs for the giver, the altruistic action should be performed. The closer the recipient is genetically related, the higher the costs that are acceptable.

Examples for kin-selected altruism can be found in populations of social insects like ants, termites or bees. An ant colony, for instance, consists of one fertile queen and several hundreds or more of sterile female workers. While the queen is the only one reproducing, the workers are among other things responsible for brood care. The workers are genetically closer related to the sisters they raise (75%) than they would be to their own offspring (50%). Therefore, they are passing on more of their genes than if they bred on their own.
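Hamilton's rule is simple enough to sketch in a few lines of code. In the sketch below, the function name and the benefit/cost figures are illustrative assumptions, not from the text; only the relatedness values (0.75 for a worker's sister, 0.5 for her own offspring) come from the ant example above.

```python
# Minimal sketch of Hamilton's rule: an altruistic act is favoured when
# r * b > c. Function name and the benefit/cost numbers are illustrative.

def altruism_favoured(r: float, b: float, c: float) -> bool:
    """r: genetic relatedness (0..1), b: recipient's benefit, c: altruist's cost."""
    return r * b > c

# Ant-colony example from the text: a worker raising sisters (r = 0.75)
# can accept higher costs than she could for her own offspring (r = 0.5).
benefit, cost = 10.0, 6.0
print(altruism_favoured(0.75, benefit, cost))  # True:  0.75 * 10 = 7.5 > 6
print(altruism_favoured(0.50, benefit, cost))  # False: 0.50 * 10 = 5.0 < 6
```

The same act can thus be favoured towards a sister but not towards one's own offspring, which is exactly the asymmetry the ant example exploits.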

According to Hamilton's rule, altruism is only favoured if directed towards relatives, that is ${\displaystyle r>0}$ . Therefore, Kin Selection theory accounts only for genetic relatives. Altruism, however, also occurs among unrelated individuals. This issue is addressed by the theory of Reciprocal Altruism.

### Reciprocal Altruism

The theory of Reciprocal Altruism describes beneficial behaviour in expectation of future reciprocity. This form of altruism is not a selfless concern for the welfare of others; it denotes mutual cooperation of repeatedly interacting individuals in order to maximise their individual utility. In social life an individual can benefit from mutual cooperation, but each one can also do even better by exploiting the cooperative efforts of others. Game Theory allows a formalisation of the strategic possibilities in such situations. It can be shown that altruistic behaviour can be more successful (in terms of utility) than purely self-interested strategies and will therefore lead to better fitness and survivability.

In many cases social interactions can be modelled by the Prisoner's Dilemma, which provides the basis of our analysis. The classical prisoner's dilemma is as follows: Knut and his friend are arrested by the police. The police have insufficient evidence for a conviction and, having separated both prisoners, visit each of them to offer the same deal: if one testifies for the prosecution against the other and the other remains silent, the betrayer goes free and the silent accomplice receives the full ten-year sentence. If both stay silent, the police can sentence both prisoners to only six months in jail for a minor charge. If each betrays the other, each will receive a two-year sentence.

Possible outcomes of the Prisoner's Dilemma:

| Prisoner 1 \ Prisoner 2 | Cooperate | Defect |
|---|---|---|
| Cooperate | 6 months each | 10 years / free |
| Defect | free / 10 years | 2 years each |

Each prisoner has two strategies to choose from, to remain silent (cooperate) or to testify (defect). Assume Knut wants to minimize his individual durance. If Knut’s friend cooperates, it is better to defect and go free than to cooperate and spend six months in jail. If Knut’s friend defects, then Knut should defect too, because two years in jail are better than ten. The same holds for the other prisoner. So defection is the dominant strategy in the prisoner’s dilemma, even though both would do better, if they cooperated. In a one-shot game a rational player would always defect, but what happens if the game is played repeatedly?

One of the most effective strategies in the iterated prisoner's dilemma is the strategy called Tit for Tat: always cooperate in the first game, then do whatever your opponent did in the previous game. Playing Tit for Tat means maintaining cooperation as long as the opponent does. If the opponent defects, he gets punished in succeeding games by defection in return, until cooperation is restored. With this strategy rational players can sustain the cooperative outcome at least for indefinitely long games (like life).[4] Clearly Tit for Tat can only be expected to evolve in the presence of a mechanism to identify and punish cheaters.
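A minimal simulation can make the contrast between the one-shot and the iterated game concrete. The sketch below is illustrative (the function names and the 10-round length are made-up choices); it plays the sentence payoffs from the table above as years in jail, lower being better, and pits Tit for Tat against itself and against an unconditional defector.

```python
# Illustrative iterated Prisoner's Dilemma with the sentences from the text.
# 'C' = cooperate (stay silent), 'D' = defect (testify).

YEARS = {  # (my move, opponent's move) -> my sentence in years
    ('C', 'C'): 0.5, ('C', 'D'): 10.0,
    ('D', 'C'): 0.0, ('D', 'D'): 2.0,
}

def tit_for_tat(history):
    """Cooperate first, then copy the opponent's previous move."""
    return 'C' if not history else history[-1]

def always_defect(history):
    return 'D'

def play(strategy_a, strategy_b, rounds=10):
    """Return the total sentence (years) accumulated by each player."""
    hist_a, hist_b = [], []          # moves each player has seen the other make
    total_a = total_b = 0.0
    for _ in range(rounds):
        move_a = strategy_a(hist_a)  # hist_a holds B's past moves
        move_b = strategy_b(hist_b)
        total_a += YEARS[(move_a, move_b)]
        total_b += YEARS[(move_b, move_a)]
        hist_a.append(move_b)
        hist_b.append(move_a)
    return total_a, total_b

# Mutual Tit for Tat sustains cooperation (0.5 years per round each), while
# Tit for Tat against a pure defector loses only the first round badly.
print(play(tit_for_tat, tit_for_tat))    # (5.0, 5.0)
print(play(tit_for_tat, always_defect))  # (28.0, 18.0)
```

Over ten rounds, two Tit for Tat players each serve only 5 years in total, far better than the 20 years two mutual defectors would accumulate, which is the sense in which cooperation "does better" in the repeated game.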

Assuming species are not able to choose between different strategies, but rather that their strategical behaviour is hard-wired, we can finally come back to the evolutionary perspective. In The Evolution of Cooperation Robert Axelrod formalised Darwin’s emphasis on individual advantage in terms of game theory.[5] Based on the concept of an evolutionary stable strategy in the context of the prisoner’s dilemma game he showed how cooperation can get started in an asocial world and can resist invasion once fully established.

## Conclusion

Summing up, Social Cognition is a very complex skill and can be seen as the fundament of our current society. On account of the concept of Shared Intentionality, humans show by far the most sophisticated form of social cooperation. Although it may not seem obvious, Social Cognition can actually be compatible with the theory of evolution and various reasonable approaches can be formulated. These theories are all based on a rather selfish drive to pass on our genetic material - so it may be questionable, if deep-rooted altruism and completely selfless behaviour truly exists.

## References

1. Tomasello, M. et al (2005). Understanding and sharing intentions: The origins of cultural cognition. Behavioral and Brain Sciences, 28(5), 675–735.
2. Gaulin, S. J. C, & McBurney, D. H. (2003). Evolutionary Psychology. New Jersey: Prentice-Hall.
3. Hamilton, W. D. (1964). The genetical evolution of social behaviour I and II. Journal of Theoretical Biology, 7, 17-52.
4. Aumann, R. J. (1959). Acceptable Points in General Cooperative n-Person Games. Contributions to the Theory of Games IV, Annals of Mathematics Study, 40, 287-324.
5. Axelrod, R. (1984). The Evolution of Cooperation. New York: Basic Books.

# Behavioural and Neuroscience Methods

## Introduction

Ct-Scan

human brain lobes

Behavioural and neuroscientific methods are used to gain insight into how the brain influences the way individuals think, feel, and act. There is an array of methods that can be used to analyze the brain and its relationship to behavior. Well-known techniques include EEG (electroencephalography), which records the brain's electrical activity, and fMRI (functional magnetic resonance imaging), which produces detailed images of brain structure and/or activity. Other methods, such as the lesion method, are less well known, but still influential in today's neuroscience research.

Methods can be organized into the following categories: anatomical, physiological, and functional. Other techniques include modulating brain activity, analyzing behavior or computational modeling.

## Lesion method

In the lesion method, patients with brain damage are examined to determine which brain structures are damaged and how this influences the patient's behavior. Researchers attempt to correlate a specific brain area to an observed behavior by using reported experiences and research observations. Researchers may conclude that the loss of functionality in a brain region causes behavioral changes or deficits in task performance. For example, a patient with a lesion in the parietal-temporal-occipital association area will exhibit agraphia, a condition in which he/she is not able to write, despite having no deficits in motor ability. If damage to a particular brain region (structure X) is shown to correlate with a specific change in behavior (behavior Y), researchers may deduce that structure X has a relation to behavior Y.

In humans, lesions are most often caused by tumors or strokes. Through current brain imaging technologies, it is possible to determine which area was damaged during a stroke. Loss of function in the stroke victim may then be correlated with that damaged brain area. While lesion studies in humans have provided key insights into brain organization and function, lesion studies in animals offer many advantages.

First, animals used in research are reared in controlled environmental conditions that limit variability between subjects. Second, researchers are able to measure task performance in the same animal before and after a lesion, which allows for within-subject comparison. Third, control groups can be included that either did not undergo surgery or had surgery in another brain area. These benefits increase the accuracy of hypothesis testing, which is more difficult in human research, where before-after comparisons and control experiments are usually not available.

Visualization of iron rod passing through brain of Phineas Gage

To strengthen conclusions regarding a brain area and task performance, researchers may perform a double dissociation. The goal of this method is to show that two dissociations are independent: by comparing two patients with differing brain damage and complementary disease patterns, researchers may localize a different behavior to each brain area. Broca's area is a region of the brain that is responsible for language processing, comprehension and speech production. Patients with a lesion in Broca's area exhibit Broca's aphasia, or non-fluent aphasia. These patients are unable to speak fluently; a sentence produced by a patient with damage to Broca's area may sound like: "I ... er ... wanted ... ah ... well ... I ... wanted to ... er ... go surfing ... and ..er ... well...". On the other hand, Wernicke's area is responsible for speech comprehension. A patient with a lesion in this area has Wernicke's aphasia. They may be able to produce language, but lack the ability to produce meaningful sentences. Patients may produce 'word salad': "I then did this chingo for some hours after my dazi went through meek and been sharko". Patients with Wernicke's aphasia are often unaware of their speech deficits and may believe that they are speaking properly.

Certainly one of the most famous "lesion" cases was that of Phineas Gage. On 13 September 1848 Gage, a railroad construction foreman, was using an iron rod to tamp an explosive charge into a body of rock when premature explosion of the charge blew the rod through his left jaw and out the top of his head. Miraculously, Gage survived, but reportedly underwent a dramatic personality change as a result of destruction of one or both of his frontal lobes. The uniqueness of Gage's case (and the ethical impossibility of repeating the treatment in other patients) makes it difficult to draw generalizations from it, but it does illustrate the core idea behind the lesion method. Further problems stem from the persistent distortions in published accounts of Gage—see the Wikipedia article Phineas Gage.

## Techniques for Assessing Brain Anatomy / Physiological Function

### CAT

X-ray picture.

CAT scanning was invented in 1972 by the British engineer Godfrey N. Hounsfield and the South African (later American) physicist Allan Cormack.

CAT (Computed Axial Tomography) is an x-ray procedure which combines many x-ray images with the aid of a computer to generate cross-sectional views and, when needed, 3D images of the internal organs and structures of the human body. A large donut-shaped x-ray machine takes x-ray images at many different angles around the body. These images are processed by a computer to produce cross-sectional pictures of the body. In each of these pictures the body is seen as an x-ray 'slice', which is recorded on film. This recorded image is called a tomogram.

CAT scans are performed to analyze, for example, the head, where traumatic injuries (such as blood clots or skull fractures), tumors, and infections can be identified. In the spine, the bony structure of the vertebrae can be accurately defined, as can the anatomy of the spinal cord. CAT scans are also extremely helpful in defining body organ anatomy, including visualizing the liver, gallbladder, pancreas, spleen, aorta, kidneys, uterus, and ovaries. The amount of radiation a person receives during a CAT scan is minimal; in men and non-pregnant women it has not been shown to produce any adverse effects. However, a CAT scan does carry some risks. If the patient is pregnant, another type of examination may be recommended to reduce the possible risk of exposing her fetus to radiation. In cases of asthma or allergies it is also recommended to avoid this type of scanning: when the CAT scan requires a contrast medium, there is a slight risk of an allergic reaction to it. Certain medical conditions – diabetes, asthma, heart disease, kidney problems or thyroid conditions – likewise increase the risk of a reaction to the contrast medium.

### MRI

Although CAT scanning was a breakthrough, in many cases it was substituted by magnetic resonance imaging (MRI), a method of looking inside the body without using x-rays, harmful dyes or surgery. Instead, radio waves and a strong magnetic field are used in order to provide remarkably clear and detailed pictures of internal organs and tissues.

History and Development of MRI

MRI is based on a physics phenomenon called nuclear magnetic resonance (NMR), which was demonstrated in the 1940s by Felix Bloch (working at Stanford University) and Edward Purcell (at Harvard University). In this resonance, a magnetic field and radio waves cause atoms to give off tiny radio signals. In 1970, Raymond Damadian, a medical doctor and research scientist, discovered the basis for using magnetic resonance imaging as a tool for medical diagnosis. Four years later a patent was granted, the world's first issued in the field of MRI. In 1977, Dr. Damadian completed the construction of the first "whole-body" MRI scanner, which he called the "Indomitable". The medical use of magnetic resonance imaging has developed rapidly. The first MRI equipment in healthcare was available at the beginning of the 1980s. In 2002, approximately 22,000 MRI scanners were in use worldwide, and more than 60 million MRI examinations were performed.

A full size MRI-Scanner.

Common Uses of the MRI Procedure

Because of its detailed and clear pictures, MRI is widely used to diagnose sports-related injuries, especially those affecting the knee, elbow, shoulder, hip and wrist. Furthermore, MRI of the heart, aorta and blood vessels is a fast, non-invasive tool for diagnosing artery disease and heart problems. The doctors can even examine the size of the heart-chambers and determine the extent of damage caused by a heart disease or a heart attack. Organs like lungs, liver or spleen can also be examined in high detail with MRI. Because no radiation exposure is involved, MRI is often the preferred diagnostic tool for examination of the male and female reproductive systems, pelvis and hips and the bladder.

Risks

An undetected metal implant may be affected by the strong magnetic field. MRI is generally avoided in the first 12 weeks of pregnancy. Scientists usually use other methods of imaging, such as ultrasound, on pregnant women unless there is a strong medical reason to use MRI.

### DT-MRI

Reconstruction of nerve fibers

There has been some further development of MRI: DT-MRI (diffusion tensor magnetic resonance imaging) enables the measurement of the restricted diffusion of water in tissue and gives a three-dimensional image of it. History: the principle of using a magnetic field to measure diffusion was already described in 1965 by the chemists Edward O. Stejskal and John E. Tanner. After the development of MRI, Michael Moseley introduced the principle into MR imaging in 1984, and further fundamental work was done by Denis Le Bihan in 1985. In 1994 the engineer Peter J. Basser published optimized mathematical models of an older diffusion-tensor model.[1] This model is commonly used today and supported by all new MRI devices.

The DT-MRI technique takes advantage of the fact that the mobility of water molecules in brain tissue is restricted by obstacles like cell membranes. In nerve fibers, mobility is only possible along the axons, so measuring the diffusion reveals the course of the main nerve fibers. The data of one diffusion tensor are too much to process in a single image, so there are different techniques for visualizing different aspects of these data:

• Cross-section images
• Tractography (reconstruction of the main nerve fibers)
• Tensor glyphs (complete illustration of the diffusion-tensor information)

Diffusion changes in a characteristic way in patients with specific diseases of the central nervous system, so these diseases can be detected with the diffusion-tensor technique. The main applications are the diagnosis of apoplectic strokes and medical research into diseases involving changes of the white matter, like Alzheimer's disease or multiple sclerosis. Disadvantages of DT-MRI are that it is far more time-consuming than ordinary MRI and produces large amounts of data, which first have to be visualized by the different methods before they can be interpreted.

### fMRI

fMRI (Functional Magnetic Resonance Imaging) is based on nuclear magnetic resonance (NMR). The method works as follows: all atomic nuclei with an odd number of protons have a nuclear spin. A strong magnetic field is applied around the tested object, which aligns all spins parallel or antiparallel to it. The spins resonate with an oscillating magnetic field at a specific frequency, which can be computed for each atom type (the nuclei's usual spin is disturbed, inducing a voltage s(t), after which they return to the equilibrium state). At this level different tissues can be identified, but there is no information about their location. Therefore the magnetic field's strength is varied gradually across space, so that there is a correspondence between frequency and location, and with the help of Fourier analysis one-dimensional location information can be obtained. Combining several such measurements makes it possible to compute a 3D image.

fMRI picture

The central idea of fMRI is to look at areas with increased blood flow. Hemoglobin disturbs the magnetic imaging, so areas with an increased blood-oxygen-level-dependent (BOLD) signal can be identified; higher BOLD signal intensities arise from decreases in the concentration of deoxygenated haemoglobin. An fMRI experiment usually lasts 1–2 hours. The subject lies in the magnet, a particular form of stimulation is set up, and MRI images of the subject's brain are taken. In the first step a high-resolution single scan is taken; this is used later as a background for highlighting the brain areas activated by the stimulus. In the next step a series of low-resolution scans is taken over time – for example, 150 scans, one every 5 seconds. For some of these scans the stimulus is presented, and for some it is absent. The low-resolution brain images in the two cases can be compared to see which parts of the brain were activated by the stimulus. The rest of the analysis is done using a series of tools which correct distortions in the images, remove the effect of the subject moving their head during the experiment, and compare the low-resolution images taken when the stimulus was off with those taken when it was on. The final statistical image shows up bright in those parts of the brain which were activated by the experiment. These activated areas are then shown as coloured blobs on top of the original high-resolution scan, and the image can also be rendered in 3D.
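The stimulus-on vs stimulus-off comparison described above amounts, in its simplest form, to a statistical test per voxel. The following sketch is a deliberately simplified illustration, not an actual fMRI pipeline: it fabricates signal values for a single voxel and flags it as activated when a two-sample t statistic exceeds a threshold.

```python
# Illustrative on/off comparison for one voxel. All numbers are made up;
# a real analysis runs such a test for every voxel and corrects for
# multiple comparisons before drawing the coloured activation blobs.
import random
import statistics

random.seed(0)
baseline = [100 + random.gauss(0, 1) for _ in range(75)]   # stimulus off
activated = [102 + random.gauss(0, 1) for _ in range(75)]  # stimulus on

def t_statistic(on, off):
    """Two-sample t statistic (equal-variance form)."""
    n1, n2 = len(on), len(off)
    v1, v2 = statistics.variance(on), statistics.variance(off)
    pooled = ((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)
    return (statistics.mean(on) - statistics.mean(off)) / (pooled * (1 / n1 + 1 / n2)) ** 0.5

# A large t value marks this voxel as "activated".
print(t_statistic(activated, baseline) > 3.0)
```

Thresholding the resulting statistical map is what produces the bright regions overlaid on the high-resolution anatomical scan.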

fMRI has moderately good spatial resolution but poor temporal resolution, since one fMRI frame takes about 2 seconds. Moreover, the temporal response of the blood supply, which is the basis of fMRI, is slow relative to the electrical signals that define neuronal communication. Therefore, some research groups work around this issue by combining fMRI with techniques such as electroencephalography (EEG) or magnetoencephalography (MEG), which have much higher temporal resolution but rather poorer spatial resolution.

### PET

Positron emission tomography, also called PET imaging or a PET scan, is a diagnostic examination that involves the acquisition of physiologic images based on the detection of radiation from the emission of positrons. It is currently one of the most effective ways to check for cancer recurrences. Positrons are tiny particles emitted from a radioactive substance administered to the patient. This radiopharmaceutical is injected into the patient and its emissions are measured by a PET scanner, which consists of an array of detectors that surround the patient. Using the gamma-ray signals given off by the injected radionuclide, PET measures the amount of metabolic activity at a site in the body, and a computer reassembles the signals into images. PET's ability to measure metabolism is very useful in diagnosing Alzheimer's disease, Parkinson's disease, epilepsy and other neurological conditions, because it can precisely illustrate areas where brain activity differs from the norm. It is also one of the most accurate methods available to localize areas of the brain causing epileptic seizures and to determine whether surgery is a treatment option. PET is often used in conjunction with an MRI or CT scan through "fusion" to give a full three-dimensional view of an organ.

## Electromagnetic Recording Methods

The methods mentioned up to now examine the metabolic activity of the brain. In other cases, however, one wants to measure the electrical activity of the brain, or the magnetic fields that this activity produces. The methods discussed so far do a good job of identifying where activity occurs in the brain, but they do not measure brain activity on a millisecond-by-millisecond basis. Such measurements can be made with electromagnetic recording methods, for example single-cell recording or electroencephalography (EEG). These methods record brain activity very quickly and over longer periods of time, and therefore provide very good temporal resolution.

### Single cell

When using the single-cell method, an electrode is placed into the brain cell on which we want to focus our attention. The experimenter can then record the electrical output of the cell contacted by the exposed electrode tip. This is useful for studying the underlying ion currents that are responsible for the cell's resting potential. The researchers' goal is then to determine, for example, whether the cell responds to sensory information from only specific details of the world or from many stimuli – that is, whether the cell is sensitive to input in only one sensory modality or is multimodal in sensitivity. One can also find out which properties of a stimulus make cells in those regions fire, and whether the animal's attention towards a certain stimulus influences the cell's response.

Single-cell studies are of limited use for studying the human brain, since the method is too invasive to be common; hence it is most often used in animals. There are only a few cases in which single-cell recording is applied in humans: people with epilepsy sometimes have the epileptic tissue surgically removed. A week before surgery, electrodes are implanted into the brain, or they are placed on the surface of the brain during the surgery, to better isolate the source of seizure activity. Using this method decreases the possibility that useful tissue will be removed. Because of the limitations of this method in humans, there are other methods for measuring electrical activity, which we discuss next.

### EEG

Placement of electrodes

EEG record during sleep

One of the most famous techniques for studying brain activity is probably electroencephalography (EEG). Most people might know it as a technique used clinically to detect aberrant activity, such as that occurring in epilepsy and other disorders.

An electroencephalogram (EEG) is obtained by placing electrodes on the scalp, which pick up and amplify the weak electrical signals produced by the human brain. EEG measures voltage fluctuations generated by ionic currents within the neurons of the brain. It is used in the diagnosis of brain-related diseases, but because it is susceptible to interference, it is usually combined with other methods.

EEG is most commonly used to diagnose epilepsy, because epilepsy causes abnormal EEG readings. It is also used to diagnose sleep disorders, coma, cerebrovascular disease and brain death. EEG used to be a first-line method for diagnosing tumors, strokes and other focal brain diseases, but this use has declined with the advent of high-resolution anatomical imaging techniques such as magnetic resonance imaging (MRI) and computed tomography (CT). Unlike CT and MRI, EEG has a high temporal resolution. Therefore, although its spatial resolution is limited, it remains a valuable tool for research and diagnostics, especially for studies that require temporal resolution in the millisecond range.

| Wave | Frequency | Associated state |
|------|-----------|------------------|
| Delta (δ) | 0.1-3 Hz | Deep sleep without dreams |
| Theta (θ) | 4-7 Hz | Adults under stress, especially disappointment or frustration |
| Alpha (α) | 8-12 Hz | Relaxed, calm, eyes closed but awake |
| Beta (β), low range | 12.5-16 Hz | Relaxed but concentrating |
| Beta (β), middle range | 16.5-20 Hz | Thinking, processing incoming external information (hearing or thinking) |
| Beta (β), high range | 20.5-28 Hz | Excitement, anxiety |
| Gamma (γ) | 25-100 Hz (typically 40 Hz) | Heightened awareness, happiness, stress reduction, meditation |
| Lambda (λ) | According to the power generated | Evoked about 100 ms after the eye is stimulated by light (also known as P100) |
| P300 | According to the power generated | Evoked about 300 ms after seeing or hearing something, or imagining it |
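As a minimal illustration of how these conventional bands partition the frequency axis, the boundaries above can be encoded as a simple lookup. The function and the exact non-overlapping cut-offs below are our own hypothetical choices; band definitions vary between sources.

```python
# Conventional EEG band names and approximate frequency ranges in Hz,
# taken from the table above. Gamma is often quoted as 25-100 Hz, which
# overlaps high beta; here we start it at 28.5 Hz to keep the bands disjoint.
EEG_BANDS = [
    ("delta", 0.1, 3.0),
    ("theta", 4.0, 7.0),
    ("alpha", 8.0, 12.0),
    ("beta (low)", 12.5, 16.0),
    ("beta (middle)", 16.5, 20.0),
    ("beta (high)", 20.5, 28.0),
    ("gamma", 28.5, 100.0),  # typically around 40 Hz
]

def classify_frequency(hz):
    """Return the band name for a frequency in Hz, or None if it falls in a gap."""
    for name, lo, hi in EEG_BANDS:
        if lo <= hz <= hi:
            return name
    return None

# A relaxed, awake EEG with eyes closed is dominated by ~10 Hz activity:
print(classify_frequency(10))
```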

In experimental settings this technique is used to show the brain activity in certain psychological states, such as alertness or drowsiness. To measure the brain activity, metal electrodes are placed on the scalp. Each electrode, also known as a lead, makes a recording of its own. Next, a reference electrode is needed to provide a baseline against which the value of each recording electrode is compared. This electrode must not be placed over muscles, because muscle contractions are driven by electrical signals that would contaminate the recording. Usually it is placed on the mastoid bone, which is located behind the ear.

During an EEG, electrodes are placed according to a standard scheme. Electrodes over the right hemisphere are labelled with even numbers, those over the left hemisphere with odd numbers, and those on the midline with a z. The capital letters stand for the location of the electrode (C = central, F = frontal, Fp = frontal pole, O = occipital, P = parietal and T = temporal).

After placing each electrode at the right position, the electrical potential can be measured. This potential has a particular voltage and a particular frequency, and the frequency and form of the EEG signal differ depending on a person's state. If a person is awake, beta activity can be recognized, meaning that the frequency is relatively fast. Just before someone falls asleep one can observe alpha activity, which has a slower frequency. The slowest frequencies, called delta activity, occur during sleep. Patients who suffer from epilepsy show an increase in the amplitude of firing that can be observed on the EEG record. In addition, EEG can be used to help answer experimental questions. In the study of emotion, for example, one can see that in depression there is greater alpha suppression over the right frontal areas than over the left ones. One can conclude from this that depression is accompanied by greater activation of right frontal regions than of left frontal regions.

The disadvantage of EEG is that the electric conductivity, and therefore the measured electrical potentials, vary widely from person to person and also over time. This is because the different tissues (brain matter, blood, bones etc.) have different conductivities for electrical signals. That is why it is sometimes not clear from which exact brain region an electrical signal comes.

### ERP

Whereas EEG recordings provide a continuous measure of brain activity, event-related potentials (ERPs) are recordings linked to the occurrence of an event, such as the presentation of a stimulus. When a stimulus is presented, the electrodes placed on a person's scalp record changes in the brain generated by the thousands of neurons under the electrodes. By measuring the brain's response to an event we can learn how different types of information are processed. Presenting the word eat or bake, for example, causes a positive potential at about 200 ms, from which one can conclude that our brain processes these words 200 ms after their presentation. This positive potential is followed by a negative one at about 400 ms, also called the N400 (where N stands for negative and 400 for the time). In general, a letter P or N denotes whether the deflection of the electrical signal is positive or negative, and a number represents, on average, how many hundreds of milliseconds after stimulus presentation the component appears. Event-related potentials are of special interest to researchers because different components of the response indicate different aspects of cognitive processing. For example, when the sentences "The cats won't eat" and "The cat won't bake" are presented, the N400 response for the word "eat" is smaller than for the word "bake". From this one can draw the conclusion that our brain needs 400 ms to register information about a word's meaning. Furthermore, one can figure out where this activity occurs in the brain by looking at the position on the scalp of the electrodes that pick up the largest response.
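The core idea behind extracting an ERP from noisy EEG is averaging many stimulus-locked trials, so that the random background activity cancels out while the evoked response survives. The sketch below illustrates this on purely synthetic data; the component shape, amplitudes and trial counts are invented for the demonstration, not taken from any real recording.

```python
import math
import random

random.seed(0)

# Simulate 200 stimulus-locked EEG epochs of 600 samples (1 ms resolution).
# Each epoch contains the same small evoked response, a negative deflection
# peaking near 400 ms (loosely modelled on an N400), buried in large noise.
n_trials, n_samples = 200, 600

def evoked(t_ms):
    # Hypothetical component shape: a Gaussian negativity centred at 400 ms.
    return -3.0 * math.exp(-((t_ms - 400) ** 2) / (2 * 40 ** 2))

epochs = [
    [evoked(t) + random.gauss(0, 5) for t in range(n_samples)]
    for _ in range(n_trials)
]

# Averaging across trials: the random noise (mean 0) shrinks by a factor of
# roughly sqrt(n_trials), while the stimulus-locked response is preserved.
erp = [sum(ep[t] for ep in epochs) / n_trials for t in range(n_samples)]

peak_latency = min(range(n_samples), key=lambda t: erp[t])
print(f"largest negative deflection at about {peak_latency} ms")
```

A single epoch here has noise more than twice as large as the signal; only after averaging does the 400 ms negativity become visible, which is why ERP experiments need many repeated trials.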

### MEG

Magnetoencephalography (MEG) is related to electroencephalography (EEG). However, instead of recording electrical potentials on the scalp, it uses the magnetic fields near the scalp to index brain activity. The magnetic field can be used to locate a current dipole in the brain, because the field pattern reflects the dipole's position and strength. These magnetic fields are recorded with devices called SQUIDs (superconducting quantum interference devices).

MEG is mainly used to localize the source of epileptic activity and to locate primary sensory cortices. This is helpful because by locating them they can be avoided during neurological intervention. Furthermore, MEG can be used to understand more about the neurophysiology underlying psychiatric disorders such as schizophrenia. In addition, MEG can also be used to examine a variety of cognitive processes, such as language, object recognition and spatial processing among others, in people who are neurologically intact.

MEG has some advantages over EEG. First, magnetic fields are less distorted than electrical currents by conduction through brain tissue, cerebrospinal fluid, the skull and the scalp. Second, the strength of the magnetic field tells us something about how deep within the brain the source is located. However, MEG also has some disadvantages. The magnetic fields of the brain are about 100 million times smaller than that of the earth. Because of this, shielded rooms, made out of aluminum, are required, which makes MEG more expensive. Another disadvantage is that MEG cannot detect the activity of cells with certain orientations within the brain: for example, magnetic fields created by cells whose long axes are radial to the surface will be invisible.

## Techniques for Modulating Brain Activity

### TMS

History: Transcranial magnetic stimulation (TMS) is an important technique for modulating brain activity. The first modern TMS device was developed by Anthony Barker in Sheffield in 1985, after 8 years of research. The field has developed rapidly since then, with many researchers using TMS to study a variety of brain functions. Today, researchers also try to develop clinical applications of TMS; because it has long-lasting effects on brain activity, it has been considered a possible alternative to antidepressant medication.

Method: TMS applies the principle of electromagnetic induction to an isolated brain region. A wire-coil electromagnet is held over the fixed head of the subject. It induces small, localized and reversible changes in the living brain tissue; the regions lying directly under the coil, especially parts of the motor cortex, can be affected. By altering the firing patterns of the neurons, the influenced brain area is temporarily disabled. Repetitive TMS (rTMS) describes, as the name reveals, the application of many short electrical stimulations at a high frequency, and is more common than single-pulse TMS. The effects of this procedure last up to weeks, and the method is in most cases used in combination with measuring methods, for example to study the effects in detail.

Application: The TMS method gives more evidence about the functionality of certain brain areas than measuring methods on their own, and it was very helpful in mapping the motor cortex. For example, while rTMS is applied to the prefrontal cortex, the patient is not able to build up short-term memory; this shows that the prefrontal cortex is directly involved in the process of short-term memory. By contrast, measuring methods on their own can only establish a correlation between the processes. Since early researchers were already aware that TMS could cause suppression of visual perception, speech arrest and paraesthesias, TMS has been used to map specific brain functions in areas other than the motor cortex. Several groups have applied TMS to the study of visual information processing, language production, memory, attention, reaction time and even more subtle brain functions such as mood and emotion. Yet the long-term effects of TMS on the brain have not been investigated properly; therefore experiments on humans are not yet performed in deeper brain regions like the hypothalamus or the hippocampus. Although the potential utility of TMS as a treatment tool in various neuropsychiatric disorders is rapidly increasing, its use in depression is the most extensively studied clinical application to date. For instance, in 1994 George and Wassermann hypothesized that intermittent stimulation of important prefrontal cortical brain regions might also cause downstream changes in neuronal function that would result in an antidepressant response. Here again, the method's effects are not understood well enough to use it in clinical treatment today. Although it is too early at this point to tell whether TMS has long-lasting therapeutic effects, this tool has clearly opened up new hopes for clinical exploration and treatment of various psychiatric conditions. Further work in understanding normal mental phenomena and how TMS affects these areas appears to be crucial for advancement.
A critically important area that will ultimately guide clinical parameters is to combine TMS with functional imaging to directly monitor TMS effects on the brain. Since it appears that TMS at different frequencies has divergent effects on brain activity, TMS with functional brain imaging will be helpful to better delineate not only the behavioral neuropsychology of various psychiatric syndromes, but also some of the pathophysiologic circuits in the brain.

### tDCS

Transcranial direct current stimulation (tDCS): The principle of tDCS is similar to that of TMS. Like TMS, it is a non-invasive and painless method of stimulation. The excitability of brain regions is modulated by the application of a weak electrical current.

History and development: It was first observed that electrical current applied to the skull led to an alleviation of pain. Scribonius Largus, the court physician to the Roman emperor Claudius, found that the current released by the electric ray has positive effects on headaches. In the Middle Ages the same property of another fish, the electric catfish, was used to treat epilepsy. Around 1800, so-called galvanism (concerned with effects of what is today electrophysiology) came up. Scientists like Giovanni Aldini experimented with electrical effects on the brain; a medical application of his findings was the treatment of melancholy. During the twentieth century, electrical stimulation was a controversial but nevertheless widespread method among neurologists and psychiatrists for the treatment of several kinds of mental disorders (e.g. electroconvulsive therapy by Ugo Cerletti).

Mechanism: tDCS works by fixing two electrodes on the skull. About 50 percent of the direct current applied to the skull reaches the brain. The current, supplied by a direct-current battery, is usually around 1 to 2 mA. The modulation of activity of the brain regions depends on the strength of the current, on the duration of stimulation and on the direction of current flow. While the former two mainly affect the strength of the modulation and its persistence beyond the actual stimulation, the latter determines the kind of modulation. The direction of the current (anodal or cathodal) is defined by the polarity and position of the electrodes. Within tDCS two distinct ways of stimulation exist: with anodal stimulation the anode is put near the brain region to be stimulated, and analogously, with cathodal stimulation the cathode is placed near the target region. The effect of anodal stimulation is that the positive charge leads to depolarization of the membrane potential in the underlying brain regions, whereas hyperpolarisation occurs in the case of cathodal stimulation due to the negative charge applied. The brain activity is thereby modulated: anodal stimulation leads to generally higher activity in the stimulated brain region. This result can also be verified with MRI scans, where an increased blood flow in the target region indicates a successful anodal stimulation.

Applications: From the description of the method it should be obvious that there are various fields of application, ranging from identifying brain regions involved in cognitive functions to the treatment of mental disorders. Compared to TMS, an advantage of tDCS is that it can not only decrease brain activity but also increase the activity of a target brain region. The method could therefore provide an even more suitable treatment of mental disorders such as depression. tDCS has also already proven helpful for stroke patients by improving their motor skills.

## Behavioural Methods

Besides using methods to measure the brain's physiology and anatomy, it is also important to have techniques for analyzing behaviour in order to get better insight into cognition. Compared to the neuroscientific methods, which concentrate on the neuronal activity of brain regions, behavioural methods focus on the overt behaviour of a test person. This can be realized by well-defined behavioural methods (e.g. eye tracking), test batteries (e.g. IQ tests) or measurements designed to answer specific questions about human behaviour. Furthermore, behavioural methods are often used in combination with all kinds of neuroscientific methods mentioned above. Whenever there is an overt reaction to a stimulus (e.g. a picture), these behavioural methods can be useful. Another goal of a behavioural test is to examine how damage to the central nervous system influences cognitive abilities.

### A Concept of a behavioural test

Such tests are performed to answer certain questions about human behaviour. In order to find an answer, a test strategy has to be developed. First it has to be carefully considered how to design the test so that the measurement results provide an accurate answer to the initial question: how can the test be conducted so that confounding variables are minimal and the focus really is on the problem? When an appropriate test arrangement is found, defining the test variables is the next step. The test is then conducted and, if necessary, repeated until a sufficient amount of data has been collected. The next step is the evaluation of the resulting data with suitable statistical methods. If the test reveals a significant result, further questions may arise about the neuronal activity underlying the behaviour; then neuroscientific methods are useful to investigate the correlating brain activities. Methods which have proved to provide good evidence on a certain recurrent question about the cognitive abilities of subjects can be brought together in a test battery.

Example: Question: Does a noisy surrounding affect the ability to solve a certain problem?

Possible test design: Expose half of the subjects to a silent environment while they solve the same task as the other half in a noisy environment. In this example, confounding variables might be the different cognitive abilities of the participants. Test variables could be the time needed to solve the problem, the loudness of the noise, etc. If the statistical evaluation shows significance, a probable further question is: how does noise affect brain activity on a neuronal level?
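One common choice for the statistical evaluation step of such a two-group design is an independent-samples t-test. The sketch below computes Welch's t statistic from scratch; the solution times are invented purely for illustration, and a real analysis would also look up the p-value for the statistic.

```python
import math

# Hypothetical solution times in seconds for the problem-solving task,
# one group per environment (invented data for illustration only).
silent = [42, 38, 45, 40, 37, 43, 39, 41]
noisy  = [51, 48, 55, 47, 52, 49, 54, 50]

def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):
    # Unbiased sample variance (divide by n - 1).
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def welch_t(a, b):
    """Welch's t statistic for two independent samples with unequal variances."""
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    return (mean(a) - mean(b)) / se

t = welch_t(silent, noisy)
print(f"t = {t:.2f}")  # a large |t| suggests the group means differ
```

With these invented numbers the silent group is clearly faster, so the statistic comes out strongly negative; whether that counts as "significant" would then be judged against the t distribution at the chosen alpha level.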

If you are interested in doing a behavioural test on your own, visit the socialpsychology.org website.[2]

### Test batteries

A neuropsychological assessment utilizes test batteries that give an overview of a person's cognitive strengths and weaknesses by analyzing various cognitive abilities. A neuropsychological test battery is used by a neuropsychologist to assess brain dysfunctions that can arise from developmental, neurological or psychiatric issues. Such batteries can appraise various mental functions and the overall intelligence of a person.

Firstly, there are test batteries designed to assess whether a person suffers from brain damage or not. They generally work well in discriminating those with brain damage from neurologically intact individuals, but worse when it comes to discriminating them from those with psychiatric disorders. The most popular test, the Halstead-Reitan battery, assesses abilities ranging from basic sensory processing to complex reasoning. Furthermore, the Halstead-Reitan battery provides information on the cause of the damage, the brain areas that were harmed, and the stage the damage has reached; such information is valuable in developing a rehabilitation program. Another test battery, the Luria-Nebraska battery, is twice as fast to administer as the Halstead-Reitan. Its subtests are ordered according to twelve content scales (e.g. motor functions, reading, memory, etc.). These two test batteries do not focus only on the absolute level of performance, but look at the qualitative manner of performance as well. This allows a more comprehensive understanding of the cognitive impairment.

Another type of test battery, the so-called IQ test, aims to measure the overall cognitive performance of an individual. The most commonly used tests for estimating intelligence are the Wechsler family of intelligence tests. Age-appropriate test versions exist for small children from the age of 2 years and 6 months, for school-aged children, and for adults. For example, the Wechsler Intelligence Scale for Children, fifth edition (WISC-V), measures various cognitive abilities in children between 6 and 16 years of age. The test consists of multiple subtests that form five main indexes of cognitive performance: verbal reasoning skills, inductive reasoning skills, visuo-spatial processing, processing speed and working memory. Performance is analyzed both against a normative sample of similarly aged peers and within the test subject, assessing personal strengths and weaknesses.

### The Eye Tracking Procedure

Another important procedure for analyzing behavior and cognition is eye tracking. This is a procedure for measuring either where we are looking (the point of gaze) or the motion of an eye relative to the head. There are different techniques for measuring the movement of the eyes, and the instrument that does the tracking is called an eye tracker. The first non-intrusive tracker was invented by Guy Thomas Buswell.

Eye tracking has a long history, starting back in the 1800s. In 1879 Louis Émile Javal noticed that reading does not involve a smooth sweeping of the eyes along the text, but rather a series of short stops, called fixations. This observation was one of the first attempts to examine the eye's directions of interest. The book Alfred L. Yarbus published in 1967, after important eye tracking research, is one of the most quoted eye tracking publications ever. The eye tracking procedure itself is not that complicated. Video-based eye trackers are frequently used: a camera focuses on one or both eyes and records the movements while the viewer looks at some stimulus. Most modern eye trackers use contrast to locate the center of the pupil, and use infrared or near-infrared non-collimated light to create corneal reflections.

Eye tracking has a wide range of applications: it is used to study a variety of cognitive processes, mostly visual perception and language processing, as well as in human-computer interaction and in marketing and medical research. In recent years eye tracking has generated a great deal of interest in the commercial sector. Commercial eye tracking studies present a target stimulus to consumers while a tracker records the movements of their eyes. Some of the latest applications are in the field of automotive design: eye tracking can analyze a driver's level of attentiveness while driving and help prevent drowsiness from causing accidents.

## Modeling Brain-Behaviour

Another major method used in cognitive neuroscience is the use of neural networks (computer modelling techniques) to simulate the action of the brain and its processes. These models help researchers to test theories of neuropsychological functioning and to derive principles concerning brain-behaviour relationships.

A basic neural network.

In order to simulate mental functions in humans, a variety of computational models can be used. The basic component of most such models is a “unit”, which one can imagine as showing neuron-like behaviour. These units receive input from other units, which are summed to produce a net input. The net input to a unit is then transformed into that unit’s output, mostly utilizing a sigmoid function. These units are connected together forming layers. Most models consist of an input layer, an output layer and a “hidden” layer as you can see on the right side. The input layer simulates the taking up of information from the outside world, the output layer simulates the response of the system and the “hidden” layer is responsible for the transformations, which are necessary to perform the computation under investigation. The units of different layers are connected via connection weights, which show the degree of influence that a unit in one level has on the unit in another one.

The most interesting and important property of these models is that they are able to "learn" without being provided specific rules. This ability to "learn" can be compared to the human ability to learn, for example, one's native language, for which nobody provides explicit rules. The computational models learn by extracting the regularities of relationships through repeated exposure. This exposure occurs via "training", in which input patterns are provided over and over again. The adjustment of the connection weights between units, as already mentioned above, is responsible for learning within the system. Learning occurs because of changes in the interrelationships between units, a process thought to be similar to what happens in the nervous system.
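A minimal version of such a network can be sketched in a few lines: one hidden layer of sigmoid units whose connection weights are adjusted by repeated exposure to input patterns. Everything concrete below, the 2-3-1 layer sizes, the XOR-style training task and the gradient-descent learning rule, is our own illustrative choice, not something specified in the text.

```python
import math
import random

random.seed(1)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training patterns: the output unit should be active when exactly one input is.
patterns = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 0)]

# A tiny 2-3-1 network: input layer, one hidden layer, one output unit.
# Each weight row carries an extra entry for the unit's bias.
n_in, n_hid = 2, 3
w_hid = [[random.uniform(-1, 1) for _ in range(n_in + 1)] for _ in range(n_hid)]
w_out = [random.uniform(-1, 1) for _ in range(n_hid + 1)]
lr = 0.5  # learning rate

def forward(x):
    # Net input to each unit = weighted sum of its inputs plus a bias,
    # transformed by the sigmoid function into the unit's output.
    hid = [sigmoid(sum(w[j] * xj for j, xj in enumerate(x)) + w[-1]) for w in w_hid]
    out = sigmoid(sum(w_out[i] * h for i, h in enumerate(hid)) + w_out[-1])
    return hid, out

def mse():
    return sum((forward(x)[1] - t) ** 2 for x, t in patterns) / len(patterns)

err_before = mse()

# "Training": repeated exposure to the input patterns; the connection weights
# are nudged to reduce the output error (stochastic gradient descent).
for _ in range(20000):
    x, target = random.choice(patterns)
    hid, out = forward(x)
    d_out = (out - target) * out * (1 - out)
    for i, h in enumerate(hid):
        d_hid = d_out * w_out[i] * h * (1 - h)
        for j, xj in enumerate(x):
            w_hid[i][j] -= lr * d_hid * xj
        w_hid[i][-1] -= lr * d_hid
        w_out[i] -= lr * d_out * h
    w_out[-1] -= lr * d_out

err_after = mse()
print(f"mean squared error: {err_before:.3f} before training, {err_after:.3f} after")
```

Note that nothing tells the network "the rule" of the task; the regularity is extracted purely from repeated presentation of the patterns, which is exactly the point made above.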

## References

1. Filler, AG: The history, development, and impact of computed imaging in neurological diagnosis and neurosurgery: CT, MRI, DTI. Nature Precedings. DOI: 10.1038/npre.2009.3267.4
• Ward, Jamie (2006). The Student's Guide to Cognitive Neuroscience. New York: Psychology Press
• Banich, Marie T. (2004). Cognitive Neuroscience and Neuropsychology. Houghton Mifflin Company. ISBN 0618122109
• Gazzaniga, Michael S. (2000). Cognitive Neuroscience. Blackwell Publishers. ISBN 0631216596
• Sparknotes.com (retrieved 27 June 2007)
• (1) Maeda, Fumiko; Pascual-Leone, Alvaro: Transcranial magnetic stimulation: studying motor neurophysiology of psychiatric disorders. Springer-Verlag 2003 (received 4 April 2001 / accepted 12 July 2002 / published 26 June 2003)
• (2) A report by Drs Risto J. Ilmoniemi and Jari Karhu, Director, BioMag Laboratory, Helsinki University Central Hospital, and Managing Director, Nexstim Ltd
• (3) Jorge, Ricardo E.; Robinson, Robert G.; Tateno, Amane; Narushima, Kenji; Acion, Laura; Moser, David; Arndt, Stephan; Chemerinski, Eran: Repetitive Transcranial Magnetic Stimulation as Treatment of Poststroke Depression: A Preliminary Study
• Moates, Danny R.: An Introduction to Cognitive Psychology. B:HRH 4229-724 0

# Motivation and Emotion

## Introduction

Happiness, sadness, anger, surprise, disgust and fear. All these words describe some kind of abstract inner states in humans, in some cases difficult to control. We usually call them feelings or emotions. But what is the reason that we are able to "feel"? Where do emotions come from and how are they caused? And are emotions and feelings the same thing? Or are we supposed to differentiate?

These are all questions that cognitive psychology deals with in emotion research. Emotion research in cognitive science is not much older than twenty years. The reason for this lies perhaps in the fact that much of the cognitive psychology tradition was based on computer-inspired information-processing models of cognition.

This chapter gives an overview of the topic for a better understanding of motivation and emotion. It provides information about theories concerning the origin of motivation and emotions in the human brain, their processes, their role in the human body, and the connection between the two topics. We will try to show the current state of research, some examples of psychological experiments, and different points of view on the issue of emotions. At the end we will briefly outline some disorders to emphasize the importance of emotions for social interaction.

## Motivation

Motivation is a broad notion which refers to the starting, controlling and upholding of corporal and mental activities. It is explained by inner processes and variables which are used to account for behavioural changes. Motivations are commonly separated into two types:

1. Drives: describe acts of motivation, like thirst or hunger, that have primarily biological purposes.

2. Motives: are driven by primarily social and psychological mechanisms.

Motivation is an interceding variable, which means that it is a variable that is not directly observable. Therefore, in order to study motivation, one must approach it through variables which are measurable and observable:

- Observable terms of variation (independent variables [1])

- Indicators of behavior (dependent variables[2]) e.g.: rate of learning, level of activity, ...

There are two major methodologies used to manipulate drives and motives in experiments:

Stimulation: Initiating motives by aversive stimuli like shocks, loud noise, heat or cold. On the other hand, attractive stimuli can activate drives which lead to positive affective states, e.g. sexual drives.

Deprivation: prohibiting access to elementary aspects of biological or psychological health, like nutrition or social contacts. As a result, it leads to motives or drives which are not common for the species under normal conditions.

A theory of motivation was conceived by Abraham Maslow in 1970 (Maslow's hierarchy of needs). He considered two kinds of motivation:

1. Deficiency motivation: moves humans to restore their physical and psychological balance.

2. Growth motivation: moves people to go beyond past events and states of their personal development.

Maslow argues that everyone has a hierarchy of needs (see picture).

According to this, our innate needs can be ordered in a hierarchy, starting with the "basic" ones and heading towards higher developed aspects of humanity. The hypothesis is that a human is ruled by the lower needs as long as they are unsatisfied. Once they are satisfied in an adequate manner, the human then deals with the higher needs. (Compare to the chapter on attention.)

Hierarchy of needs, Maslow (1970)

Nevertheless, all throughout history you can find examples of people who willingly practiced deprivation through isolation, celibacy, or by hunger strike. These people may be the exceptions to this hypothesis, but they may also have some other, more pressing motives or drives which induce them to behave in this way.

It seems that individuals are able to resist certain motives via personal cognitive states. The ability of cognitive reasoning and willing is a typical feature of being human, and can be the reason for many psychological diseases, which indicates that humans are not always capable of handling all rising mental states. Humans are able to manipulate their motives without knowing their real emotional and psychological causes. This introduces the problem that the relationship between consciousness, unconsciousness and whatever else could be taken into account is essentially unknown. Neuroscience cannot yet provide a concrete explanation of the neurological substructures of motives, but there has been considerable progress in understanding the neurological processes underlying drives.

### The Neurological Regulation of Drives

#### The Role of the Hypothalamus

The purpose of drives is to correct disturbances of homeostasis which is controlled by the hypothalamus. Deviations from the optimal range of a regulated parameter like temperature are detected by neurons concentrated in the periventricular zone of the hypothalamus. These neurons then produce an integrated response to bring the parameter back to its optimal value. This response generally consists of

1. Humoral response

2. Visceromotor response

3. Somatic motor response

When you are dehydrated, freezing, or exhausted, the appropriate humoral and visceromotor responses are activated automatically,[3] e.g. body fat reserves are mobilized, urine production is inhibited, you shiver, blood is shunted away from the body surface, etc. But it is much faster and more effective to correct these disturbances by eating, drinking water, or actively seeking or generating warmth by moving. These are examples of drives generated by the somatic motor system, and they are driven by the activity of the lateral hypothalamus.

For illustration we will make a brief overview on the neural basis of the regulation of feeding behavior, which is divided into the long-term and the short-term regulation of feeding behavior.

The long-term regulation of feeding behavior prevents energy shortfalls and concerns the regulation of body fat and feeding. In the 1940s the "dual center" model was popular, which divided the hypothalamus into a "hunger center" (lateral hypothalamus) and a "satiety center" (ventromedial hypothalamus). This theory developed from the observations that bilateral lesions of the lateral hypothalamus cause anorexia, a severely diminished appetite for food (lateral hypothalamic syndrome), while bilateral lesions of the ventromedial hypothalamus cause overeating and obesity (ventromedial hypothalamic syndrome). However, it has been shown that this "dual model" is overly simplistic. The reason why hypothalamic lesions affect body fat and feeding behavior has in fact much to do with leptin signaling. Adipocytes (fat cells) release the hormone leptin, which regulates body mass by acting directly on neurons of the arcuate nucleus[4] of the hypothalamus that decrease appetite and increase energy expenditure. A fall in leptin levels stimulates another type of arcuate nucleus neurons[5] and neurons in the lateral hypothalamus,[6] which activate the parasympathetic division of the ANS and stimulate feeding behavior. The short-term regulation of feeding behavior deals with appetite and satiety. Until 1999 scientists believed that hunger was merely the absence of satiety. This changed with the discovery of a peptide called ghrelin, which is highly concentrated in the stomach and is released into the bloodstream when the stomach is empty. In the arcuate nucleus it activates neurons[7] that strongly stimulate appetite and food consumption. The meal finally ends through the concerted actions of several satiety signals, like gastric distension and the release of insulin.[8] But it seems that animals do not eat only because they want food to satisfy their hunger; they also eat because they like food in a merely hedonistic sense.
Research on humans and animals suggests that “liking” and “wanting” are mediated by separate circuits in the brain.

#### The Role of Dopamine in Motivation

In the early 1950s, Peter Milner and James Olds conducted an experiment in which a rat had an electrode implanted in its brain, so the brain could be locally stimulated at any time. The rat was placed in a box which contained a lever for food and water and a lever that would deliver a brief stimulus to the brain when stepped on. At the beginning the rat wandered about the box and stepped on the levers by accident, but before long it was pressing the lever for the brief stimulus repeatedly. This behavior is called electrical self-stimulation. Sometimes the rats would become so involved in pressing the lever that they would forget about food and water, stopping only after collapsing from exhaustion. Electrical self-stimulation apparently provided a reward that reinforced the habit of pressing the lever. Researchers were able to identify the most effective sites for self-stimulation across the different regions of the brain: the mesocorticolimbic dopamine system. Drugs that block dopamine receptors reduced the rats' self-stimulation behavior. In the same way, these drugs greatly reduced the pressing of a lever for food even when the rat was hungry. These experiments suggested a mechanism by which natural rewards (food, water, sex) reinforce particular behaviors. Dopamine also plays an important role in addiction to drugs like heroin, nicotine and cocaine: these drugs either stimulate dopamine release (heroin, nicotine) or enhance dopamine actions (cocaine) in the nucleus accumbens. Chronic stimulation of this pathway causes a down-regulation of the dopamine “reward” system. This adaptation leads to the phenomenon of drug tolerance. Indeed, drug discontinuation in addicted animals is accompanied by a marked decrease in dopamine release and function in the nucleus accumbens, leading to the symptom of craving for the discontinued drug. The exact role of dopamine in motivating behavior continues to be debated.
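The reinforcement mechanism described above can be sketched as a toy trial-and-error learner. This is a hypothetical illustration, not a model from the experimental literature: each lever press nudges the learned value of that lever toward the reward it produced, and scaling the reward signal down mimics a dopamine-receptor blocker attenuating the reward signal.

```python
import random

def run_trials(n_trials, reward_gain=1.0, lr=0.1, seed=0):
    """Simulate a rat choosing between a food lever and a self-stimulation
    lever. The learned value of each lever is nudged toward the reward it
    delivers (a simple Rescorla-Wagner-style update). reward_gain < 1
    mimics a dopamine-receptor blocker attenuating the reward signal.
    All reward magnitudes here are illustrative assumptions."""
    rng = random.Random(seed)
    values = {"food": 0.0, "stim": 0.0}
    rewards = {"food": 0.5, "stim": 1.0}  # stimulation as the stronger reward
    for _ in range(n_trials):
        if rng.random() < 0.1:                 # occasional exploration
            lever = rng.choice(list(values))
        else:                                  # otherwise press the preferred lever
            lever = max(values, key=values.get)
        reward = rewards[lever] * reward_gain
        values[lever] += lr * (reward - values[lever])
    return values

normal = run_trials(500)                    # intact reward signal
blocked = run_trials(500, reward_gain=0.2)  # attenuated "dopamine" signal
```

With the attenuated reward signal the learned values collapse toward zero, mirroring the reduced self-stimulation observed under dopamine blockade.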
However, much evidence suggests that animals are motivated to perform behaviors that stimulate dopamine release in the nucleus accumbens and related structures.

## Emotions

### Basics

In contrast to previous research, modern brain-based neuroscience has taken a more serviceable approach to the field of emotions: emotions are clearly brain-related processes which deserve scientific study, whatever their purpose may be.

One interpretation regards emotions as “action schemes” which lead to behaviour that is essential for survival. It is important to distinguish between conscious aspects of emotion, like subjective (often bodily) feelings, and unconscious aspects, like the detection of a threat. This will be discussed later on in conjunction with awareness of emotion. It is also important to differentiate between a mood and an emotion: a mood refers to a state in which an emotion occurs frequently or continuously. For example, fear is an emotion; anxiety is a mood.

The first question which arises is how to categorise emotions. They could be treated as a single entity, but it may make more sense to distinguish between them. This raises the questions of whether some emotions, like happiness or anger, are more basic than others, like jealousy or love, and whether emotions depend on culture and/or language.

One of the most influential ethnographic studies, by Ekman and Friesen, which is based on the comparison of facial expressions of emotions in different cultures, concluded that there are six basic types of emotions expressed in faces (sadness, happiness, disgust, surprise, anger and fear), independent of culture and language. An alternative approach is to differentiate between emotions not by categorising them but by measuring their intensity along different dimensions, e.g. their valence and their arousal. If this theory were true, one might expect to find different brain regions which selectively process positive or negative emotions.

Six basic types of emotions expressed in faces

Complex emotions like jealousy, love and pride differ from basic emotions in that they involve awareness of oneself in relation to other people and of one's attitude towards other people. Hence they come along with a more complex attributional process, which is required to appreciate the thoughts and beliefs of other people. Complex emotions are more likely to be dependent on cultural influences than basic emotions. If you think of Knut feeling embarrassment, you have to consider what kind of action he committed, in which situation, and how this action raised the disapproval of other people.

### Awareness and Emotion

Awareness is closely connected with changes in the environment or in the psycho-physiological state. Why recognise changes rather than stable states? An answer could be that changes are an important indicator of our situation: they show that our situation is unstable, and paying attention to them might increase the chance to survive. A change bears more information than repetitive events and therefore appears more exciting; repetition reduces excitement. Once we think we have extracted the most important information from a situation or an event, we become unaware of that event or of certain facts about it.

Current research in this field suggests that changes are needed for emotions to emerge, so emotion is strongly attention-dependent: the event has to draw our attention. No recognition, no emotion. But do we always make an emotional evaluation when we are aware of certain events? How relevant does a change have to be for us to recognise it? Changes that elicit emotion are of high personal significance, meaning that they need a relation to our personal self.

Significance presupposes order and relations. Relations are to meaning as colours are to vision: a necessary condition, but not its whole content. One determines the significance and the scope of a change by, for example, the event's impact (its strength), its reality, its relevance and factors related to the background circumstances of the subject. We feel no emotion in response to changes which we perceive as unimportant or unrelated. Roughly, one can say that emotions express our attitude toward unstable significant objects which are somehow related to us.

This is also connected with the fact that we respond more strongly to novel experiences, to things that are unexpected or not yet seen. When children get new toys they are very excited at first, but after a while, as one can observe (or simply remember from one's own childhood), they show less interest in the toy. This shows that emotional response declines over time, an effect called the process of adaptation: if the stimulus level is constant, the threshold of awareness keeps rising, and awareness therefore decreases. The organism withdraws its consciousness from more and more events; the person has had enough. The opposite effect is also possible. It is known as the process of facilitation; in this case the threshold of awareness diminishes.

Consciousness then focuses on an increasing number of events; this happens when new stimuli are encountered. The process of adaptation might protect us from endlessly repetitive actions: without it, a human would not be able to learn something new and would be caught in an infinite loop. The emotional environment contains not only what is, and what will be, experienced but also all that could be, or that one desires to be, experienced; for the emotional system, all such possibilities are posited as simultaneously there and are compared with each other.

Whereas intellectual thinking expresses a detached and objective manner of comparison, the emotional comparison is done from a personal and interested perspective; intellectual thinking may be characterised as an attempt to overcome the personal emotional perspective. It is quite difficult to give an external description of something that is related to an intrinsic, personal perspective, but it is possible. In the following, the most popular theories will be presented, along with a rough overview of the neural substrates of emotions.

### The Neural Correlate of Emotion

#### Papez Circuit

James W. Papez was the originator of the Papez circuit theory (1937). He was the first to try to explain emotions in a neurofunctional way: Papez discovered the circuit after injecting the rabies virus into a cat's hippocampus and observing its effects on the brain. The Papez circuit is chiefly involved in the cortical control of emotion, with the corpus mamillare (part of the hypothalamus) playing a central role. The circuit involves several regions of the brain, with the following course:

● The hippocampus projects via the fornix to the corpus mamillare;

● from there neurons project via the fasciculus mamillothalamicus to the nucleus anterior of the thalamus and then to the gyrus cinguli;

● the connection from the gyrus cinguli back to the hippocampus closes the circuit.

In 1949 Paul MacLean extended this theory by hypothesizing that regions like the amygdala and the orbitofrontal cortex work together with the circuit and form an emotional brain. However, the theory of the Papez circuit could not be maintained: for one, some regions of the circuit can no longer be related to the functions originally ascribed to them, and secondly, the current state of research suggests that each basic emotion has its own circuit. Furthermore, the assumption that the limbic system alone is responsible for these functions is outdated. Other cortical and non-cortical structures of the brain have an enormous bearing on the limbic system, so the emergence of emotion is always an interaction of many parts of the brain.

#### Amygdala and Fear

The amygdala (from the Latin for almond; anatomically the corpus amygdaloideum) is located in the left and right temporal lobes. It belongs to the limbic system and is essentially involved in the emergence of fear. In addition, the amygdala plays a decisive role in the emotional evaluation and recognition of situations as well as in the analysis of potential threats. It handles external stimuli and induces vegetative reactions, which may help prepare the body for fight or flight by increasing heart and breathing rate. This small mass of grey matter is also responsible for learning on the basis of reward or punishment. If both parts of the amygdala are destroyed, the person loses their sensation of fear and anger. Experiments with patients whose amygdala is damaged show the following: the participants were impaired to a lesser degree in recognizing facial anger and disgust, and they could not match pictures of the same person when the expressions were different. Beyond this, Winston, O'Doherty and Dolan report that amygdala activation was independent of whether subjects engaged in incidental viewing or explicit emotion judgements, whereas other regions (including the ventromedial frontal lobes) were activated only when making explicit judgements about the emotion. This was interpreted as a reinstatement of the “feeling” of the emotion. Further studies show that there is a slow route to the amygdala via the primary visual cortex and a fast subcortical route from the thalamus to the amygdala. The amygdala is activated by unconscious fearful expressions in healthy participants and also in “blindsight” patients with damage to the primary visual cortex. The fast route is imprecise and induces fast unconscious reactions towards a threat before you consciously notice it and can react properly via the slow route. This was shown by experiments with persons who have a snake phobia (ophidiophobics) or a spider phobia (arachnophobics).
When shown a snake, the ophidiophobics showed a bodily reaction before they reported seeing the snake; no such reaction was observable in the arachnophobics. In experiments with spiders, the results were reversed.

#### Recognition of Other Emotional Categories

Another basic emotional category, largely independent of the others, is disgust. It literally means “bad taste” and is evolutionarily related to contamination through ingestion. Patients with Huntington's disease have problems recognizing disgust. The insula, a small region of cortex buried beneath the temporal lobes, plays an important role in processing facial expressions of disgust. Furthermore, half of the patients with a damaged amygdala have problems with facial expressions of sadness. Damage to the ventral regions of the basal ganglia causes a deficit in the selective perception of anger, so this brain area could be responsible for the perception of aggression. Happiness cannot be selectively impaired because it relies on a more distributed network.

### Functional Theories

In order to explain human emotions, that is, to discover how they arise and how they are represented in the brain, researchers have worked out several theories. In the following, the most important views will be discussed.

#### James-Lange Theory

The James-Lange theory of emotion states that the self-perception of bodily changes produces emotional experience. For example, you are happy because you are laughing, or you feel sad because you are crying. Alternatively, when a person sees a spider he or she might experience fear. One problem with this theory is that it is not clear what kind of processing leads to the changes in the bodily state and whether this process can be seen as part of the emotion itself. Moreover, people paralyzed from the neck down, who have little awareness of sensory input, are still able to experience emotions. Also, research by Schachter and Singer has shown that changes in bodily state are not enough to produce emotions. Because of this, an extension of the theory was necessary.

#### Two Factor Theory

The two-factor theory views emotion as a compound of two factors: physiological arousal and cognition. Schachter and Singer (1962) did well-known studies in this field of research. They injected participants with adrenaline (called epinephrine in the USA), a drug that causes a number of effects such as increased blood flow to the muscles and increased heart rate. The result was that the presence of the drug in the body alone did not lead to experiences of emotion; only in the presence of a cognitive setting, like an angry man in the room, did participants self-report an emotion. Contrary to the James-Lange theory, this study suggests that bodily changes can only support conscious emotional experiences but do not create emotions. The interpretation of a certain emotion therefore depends on the physiological state in conjunction with the subject's circumstances.

#### Somatic Marker Hypothesis

This current theory of emotions (from A. Damasio) emphasizes the role of bodily states and implies that “somatic marker” signals influence behaviour, particularly reasoning and decision-making. Somatic markers are the connections between previous situations, which are stored in the cortex, and the bodily feeling of such situations (e.g. stored in the amygdala). It follows that somatic markers are very useful during the decision process, because on the grounds of previously acquired knowledge they can give an immediate signal as to whether one option “feels” better than another. People who cheat and murder without feeling anything lack the somatic markers which would prevent them from doing so.

To investigate this hypothesis, a gambling task was used. Four decks of cards (A, B, C, D) lay on the table, and the participants repeatedly drew one card at a time. On the reverse of each card was either a monetary penalty or a gain. The players were told to play so as to win the most. Playing from decks A and B leads to a net loss of money, whereas choosing from decks C and D leads to a net gain. Persons without a brain lesion learned to avoid decks A and B, but players with such damage did not.
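The logic of the task can be illustrated with a small simulation. The payoff numbers and the learning rule below are illustrative assumptions, not the actual schedules used in the experiments: a player who accumulates a "felt" value for each deck learns to avoid A and B, while a player insensitive to accumulated outcomes keeps chasing the large per-card gains.

```python
import random

# Illustrative payoff schedules: the "bad" decks (A, B) pay more per card
# but carry larger penalties, so their expected value per draw is -25;
# the "good" decks (C, D) pay less per card but average +25 per draw.
DECKS = {
    "A": (100, -250, 0.5),   # (gain, penalty, penalty probability)
    "B": (100, -1250, 0.1),
    "C": (50, -50, 0.5),
    "D": (50, -250, 0.1),
}

def draw(deck, rng):
    gain, penalty, p = DECKS[deck]
    return gain + (penalty if rng.random() < p else 0)

def play(n_trials=1000, use_markers=True, lr=0.2, seed=1):
    """A player with "somatic markers" tracks a felt value per deck and
    mostly picks the deck that currently feels best; without markers the
    player is lured by the big per-card gains of decks A and B."""
    rng = random.Random(seed)
    felt = {d: 0.0 for d in DECKS}
    total = 0
    for _ in range(n_trials):
        if use_markers:
            if rng.random() < 0.1:           # occasional exploration
                deck = rng.choice(list(DECKS))
            else:
                deck = max(felt, key=felt.get)
        else:
            deck = rng.choice(["A", "B"])    # chases the big gains
        outcome = draw(deck, rng)
        felt[deck] += lr * (outcome - felt[deck])
        total += outcome
    return total

with_markers = play(use_markers=True)
without_markers = play(use_markers=False)
```

Over many draws the marker-guided player drifts to the good decks and ends up far ahead of the player who ignores accumulated outcomes, echoing the difference between intact and lesioned participants.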

Empathy is the ability to appreciate others' emotions and their point of view. Simulation theory states that the same neural and cognitive resources are used when perceiving the emotional expressions of others as when producing those actions and expressions oneself. If you are watching a movie in which one person touches another, the same neural mechanism (in the somatosensory cortex) is activated as if you were physically touched. Further studies investigated empathy for pain: when you see someone experiencing pain, the brain regions involved in expecting another person's pain overlap with the regions involved in experiencing that pain oneself.

### Mood and Memory

When we store a memory, we not only record the sensory data, we also store our mood and emotional state. Our current mood thus affects the memories that are most effortlessly available to us, such that when we are in a good mood we recollect good memories (and vice versa). Because the nature of memory is associative, we also tend to store happy memories in a linked set. There are two different ways we remember past events:

#### Mood-congruence

Mood-congruent memory occurs when the current mood helps the recall of mood-congruent material (e.g. characters in stories who feel the way the reader feels while reading), regardless of our mood at the time the material was stored. Thus when we are happy, we are more likely to remember happy events; remembering all of the negative events of our past when depressed is another example of mood congruence. This means that in a happy mood you can more easily remember a funeral at which you happened to be happy, while in a sad mood you can more easily remember a party at which you were sad, even though a funeral is sad and a party is happy.

#### Mood-dependency

Mood-dependent memory occurs when the congruence of the current mood with the mood at the time of memory storage helps the recall of that memory. When we are happy, we are more likely to remember other times when we were happy. So, if you want to remember something, get into the mood you were in when you experienced it. You can easily try this yourself: bring yourself into a certain mood by listening to the saddest or happiest music you know, then learn a list of words. Afterwards, try to recall the list in the other mood or in the same mood. You will see that you remember the list better when you are in the same mood as you were while learning it.
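The effect can be sketched as a toy model (the retrieval probabilities below are purely illustrative assumptions, not measured values): each word is stored together with the mood at study time, and items whose stored mood matches the mood at recall are retrieved more often.

```python
import random

def study(words, mood):
    """Encode each word together with the mood at study time."""
    return [(word, mood) for word in words]

def recall(memory, current_mood, p_match=0.8, p_mismatch=0.4, seed=2):
    """Items whose stored mood matches the current mood are more likely
    to be retrieved; the two probabilities are illustrative only."""
    rng = random.Random(seed)
    recalled = []
    for word, stored_mood in memory:
        p = p_match if stored_mood == current_mood else p_mismatch
        if rng.random() < p:
            recalled.append(word)
    return recalled

words = ["word%d" % i for i in range(200)]
memory = study(words, mood="happy")
same_mood = recall(memory, "happy")   # recall in the study mood
other_mood = recall(memory, "sad")    # recall in a different mood
```

Because the same random draws are compared against a higher threshold in the matched condition, everything recalled in the mismatched mood is also recalled in the matched mood, and the matched-mood list comes out longer.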

## Disorders

Without balanced emotions, one's ability to interact in a social network will be affected in some manner (e.g. reading minds). In this part of the chapter some grave disorders will be presented: depression, autism and antisocial behaviour disorders such as psychopathy and sociopathy. It is important to mention that these disorders will mainly be considered with regard to their impact on social competence. To get a full account of the characteristics of each of the disorders, we recommend reading the particular articles provided by Wikipedia.

### Autism

Autism is thought to be an innate condition with individual forms distributed on a broad spectrum. This means that symptoms can range from minor behavioral problems to major mental deficits, but there is always some impairment of social competence. The American Psychiatric Association characterizes autism as "the presence of markedly abnormal or impaired development in social interaction and communication and a markedly restricted repertoire of activities and interests" (1994, diagnostic and statistical manual; DSM-IV). The deficits in social competence are sometimes divided into the so-called "triad of impairments", including:

(1) Social interaction: this includes difficulties with social relationships, for example appearing distanced and indifferent to other people.

(2) Social communication: autists have problems with verbal and non-verbal communication; for example, they do not fully understand the meaning of common gestures, facial expressions or tone of voice. They often show reduced or even no eye contact, avoid body contact like shaking hands, and have difficulties understanding metaphors and "reading between the lines".

(3) Social imagination: autists lack social imagination, manifesting in difficulties in the development of interpersonal play and imagination, for example having a limited range of imaginative activities, possibly copied and pursued rigidly and repetitively.

All forms of autism can already be recognized during childhood and therefore disturb the proper socialization of the afflicted child. Autistic children are often less interested in playing with other children but, for example, love to arrange their toys with utmost care. Unable to interpret emotional expressions and social rules, autists are prone to show inappropriate behaviour towards the people surrounding them. Because autists may not be obviously impaired, other people may misunderstand their actions as provocation.

There are still other features of autism: autists often show stereotyped behaviour and feel quite uncomfortable when things change in the routines and environment they are used to. Very rarely, a person with autism may have a remarkable talent, such as memorizing a whole city panorama including, for example, the exact number of windows in each of the buildings.

There are several theories trying to explain autism or features of autism. In an experiment conducted by Baron-Cohen and colleagues (1995), cartoons were presented to normal and autistic children showing a smiley in the centre of each picture and four different sweets in the corners (see picture below). The smiley, named Charlie, was gazing at one of the sweets. The children were asked questions such as: "Which chocolate does Charlie want?"

Autistic children were able to detect where the smiley was looking but unable to infer its 'desires'. (Adapted from Ward, J. (2006). The Student's Guide to Cognitive Neuroscience. Hove: Psychology Press, p. 316.)

Normal children could easily infer Charlie's desires from Charlie's gaze direction, whereas autistic children could only guess at the answer.

Additional evidence from other experiments suggests that autists are unable to use eye-gaze information to interpret people's desires and predict their behaviour, which would be crucial for social interaction. Another proposal to explain autistic characteristics suggests that autists lack representations of other people's mental states ("mindblindness", proposed by Baron-Cohen, 1995b).

### Depression

Depression is a disorder that leads to emotional dysfunction characterized by a state of intense sadness, melancholia and despair. The disorder affects social and everyday life. There are many different forms of depression that differ in strength and duration. People affected by depression suffer from anxiety, distorted thinking, dramatic mood changes and many other symptoms. They feel sad, and everything seems bleak to them. This leads to an extremely negative view of themselves and of their current and future situation. These factors can lead to the loss of a normal social life, which may affect the depressed person even further; suffering from depression and losing one's social network can thereby become a vicious circle.

### Psychopathy and Sociopathy

Psychopathy and sociopathy are nowadays subsumed under the notion of antisocial behaviour disorders, but experts still disagree about whether they are really separate disturbances or rather forms of other personality disorders, e.g. autism. Psychopaths and sociopaths often get into conflict with their social environment because they repeatedly violate social and moral rules. Acquired sociopathy manifests in the inability to form lasting relationships, irresponsible behaviour, getting angry quickly, and exceptionally strong egocentric thinking. While acquired sociopathy is characterised by impulsive antisocial behaviour that often brings no personal advantage, developmental psychopathy manifests in goal-directed and self-initiated aggression. Acquired sociopathy is caused by brain injury, especially to the orbitofrontal cortex (frontal lobe), and is thought to involve a failure to use emotional cues and the loss of social knowledge; sociopaths are therefore unable to control and plan their behaviour in a socially adequate manner. In contrast to sociopaths, psychopaths do not get angry over minor matters; rather, they act aggressively without any understandable reason at all, which might be due to their inability to understand and distinguish between moral rules (concerning the welfare of others) and conventions (consensus rules of society). It even happens that they feel no guilt or empathy for their victims. Psychopathy is probably caused by a failure to process the distress cues of others, meaning that psychopaths are unable to understand sad and fearful expressions and consequently fail to suppress their aggression (Blair, 1995). It is important to mention that they are nevertheless able to detect stimuli that are threatening to themselves.

## Summary

We hope that this chapter gave you an overview and answered the questions we posed at the beginning. As one can see, this young field of cognitive psychology is wide and not yet completely researched. Many different theories have been proposed to explain emotions and motivation, like the James-Lange theory, which claims that bodily changes lead to emotional experiences. This theory led to the two-factor theory, which in contrast says that bodily changes only support emotional experiences, whereas the newest theory (the somatic marker hypothesis) states that somatic markers support decision-making. When analyzing emotions, one has to distinguish between conscious aspects, like a feeling, and unconscious aspects, like the detection of a threat. Presently, researchers distinguish six basic emotions that are independent of cultural aspects. In contrast to these basic emotions, complex emotions also involve social awareness. So emotions are important not only for our survival but for our social life, too: reading faces helps us to communicate and to interpret the behaviour of other people. Many disorders impair this ability, leaving the afflicted person unable to integrate into the social community. Another important part of understanding emotions is awareness; we mainly pay attention to new things, which keeps us from taking in unimportant information. Moods also affect our memory: we can remember things better if we are in the same mood as in the original situation, and if the things we want to remember are connoted in the same way as our current mood. We also outlined the topic of motivation, which is crucial to initiate and uphold our mental and corporal activities. Motivation consists of two parts: drives (biological needs) and motives (primarily social and psychological mechanisms). One important theory is Maslow's hierarchy of needs; it states that higher motivations are only aspired to if lower needs are satisfied.
As this chapter has only touched on mood and memory, the next chapter deals with memory and language.

## References

1. Independent variables are the circumstances of major interest in an experiment. The participant only reacts to them but cannot actively change them; they are independent of his behaviour.
2. The measured behaviour is called the dependent variable.
3. In the humoral response, hypothalamic neurons stimulate or inhibit the release of pituitary hormones into the bloodstream; in the visceromotor response, neurons in the hypothalamus adjust the balance of sympathetic and parasympathetic outputs of the autonomic nervous system (ANS).
4. αMSH neurons and CART neurons of the arcuate nucleus. αMSH (alpha-melanocyte-stimulating hormone) and CART (cocaine- and amphetamine-regulated transcript) are anorectic peptides, which stimulate the release of the pituitary hormones TSH (thyroid-stimulating hormone) and ACTH (adrenocorticotropic hormone), which have the effect of raising the metabolic rate of cells throughout the body.
5. NPY neurons and AgRP neurons. NPY (neuropeptide Y) and AgRP (agouti-related peptide) are orexigenic peptides, which inhibit the secretion of TSH and ACTH.
6. MCH (melanin-concentrating hormone) neurons, which have extremely widespread connections in the brain, including direct monosynaptic innervation of most of the cerebral cortex, which is involved in organizing and initiating goal-directed behaviors, such as raiding the refrigerator.
7. The NPY- and AgRP neurons.
8. The pancreatic hormone insulin, released by the β cells of the pancreas, acts directly on the arcuate and ventromedial nuclei of the hypothalamus. It appears to operate in much the same way as leptin to regulate feeding behavior, with the difference that its primary stimulus for release is an increased blood glucose level.

### Books

• Zimbardo, Philip G. (1995, 12th edition). Psychology and Life. Scott, Foresman and Company, Glenview, Illinois. ISBN 020541799X
• Banich, Marie T. (2004). Cognitive Neuroscience and Neuropsychology. Houghton Mifflin Company. ISBN 0618122109
• Wilson, Robert A. & Keil, Frank C. (2001). The MIT Encyclopedia of the Cognitive Sciences (MITECS). Bradford Books. ISBN 0262731444
• Damasio, Antonio R. (1994; reprinted 2005). Descartes' Error: Emotion, Reason and the Human Brain. Penguin Books. ISBN 014303622X
• Damasio, Antonio R. (1999). The Feeling of What Happens: Body and Emotion in the Making of Consciousness. Harcourt Brace & Company. ISBN 0099288761
• Ben-Ze'ev, Aaron (2001). The Subtlety of Emotions. MIT Press. ISBN 0262523191
• Ward, J. (2006). The Student's Guide to Cognitive Neuroscience. Hove: Psychology Press. ISBN 1841695351

### Journals

• Dalgleish, Tim. The emotional brain.
• (1) Leonard, C.M., Rolls, E.T., Wilson, F.A.W. & Baylis, C.G. Neurons in the amygdala of the monkey with responses selective for faces. Behav. Brain Res. 15, 159-176 (1985)
• (2) Adolphs, R., Tranel, D., Damasio, H. & Damasio, A. Impaired recognition of emotion in facial expressions following bilateral damage of the human amygdala. Nature 372, 669-672 (1994)
• (3) Young, A. W. et al. Face processing impairments after amygdalotomy. Brain 118, 15-24 (1995)
• (4) Calder, A. J. et al. Facial emotion recognition after bilateral amygdala damage: Differentially severe impairment of fear. Cognit. Neuropsychol. 13, 699-745 (1996)
• (5) Scott, S. K. et al. Impaired auditory recognition of fear and anger following bilateral amygdala lesions. Nature 385, 254-257 (1997)
• (6) Cahill, L., Babinsky, R., Markowitsch, H. J. & McGaugh, J. L. The amygdala and emotional memory. Nature 377, 295-296 (1995)
• (7) Wood, Jacqueline N. & Grafman, Jordan. Human prefrontal cortex. Nature Reviews Neuroscience (02/2003)
• (8) Brothers, L., Ring, B. & Kling, A. Response of neurons in the macaque amygdala to complex social stimuli. Behav. Brain Res. 41, 199-213 (1990)
• (9) Bear, M.F., Connors, B.W. & Paradiso, M.A. (2006, 3rd edition). Neuroscience: Exploring the Brain. Lippincott Williams & Wilkins. ISBN 0-7817-6003-8

# Introduction

Imagine our friend Knut, who we have already introduced in earlier chapters of this book, hastily walking through his apartment looking everywhere for a gold medal that he won many years ago at a swimming contest. The medal is very important to him, since it was his recently deceased mother who had insisted on him participating. The medal reminds him of the happy times in his life. But now he does not know where it is. He is sure that he last saw it two days ago but, searching through his recent experiences, he is not able to recall where he has put it.
So what exactly enables Knut to remember the swimming contest and why does the medal trigger the remembrance of the happy times in his life? Also, why is he not able to recall where he has put the medal, even though he is capable of scanning through most of his experiences of the last 48 hours?
Memory, with all of its different forms and features, is the key to answering these questions. When people talk about memories, they are subconsciously talking about "the capacity of the nervous system to acquire and retain usable skills and knowledge, which allows living organisms to benefit from experience".[1] Yet how does this so-called memory function? In the process of answering this question, many different models of memory have evolved. Distinctions are drawn between Sensory Memory, Short Term Memory, and Long Term Memory based on the period of time information is accessible after it is first encountered. Sensory Memory, which can further be divided into Echoic and Iconic Memory, has the smallest time span for accessibility of information. With Short Term and Working Memory, information is accessible seconds to minutes after it is first encountered, while Long Term Memory has an accessibility period ranging from minutes to years or even decades. This chapter discusses these different types of memory and further gives an insight into memory phenomena like False Memory and Forgetting. Finally, we will consider biological foundations of memory in human beings and the biological changes that occur when learning takes place and information is stored.

# Types of Memory

In the following section we will discuss the three different types of memory and their respective characteristics: Sensory Memory, Short Term Memory (STM) or Working Memory (WM), and Long Term Memory (LTM).

## Sensory Memory

This type of memory has the shortest retention time, only milliseconds to five seconds. Roughly, Sensory Memory can be subdivided into two main kinds:

Sensory Memory
• Iconic Memory (visual input)
• Echoic Memory (auditory input)

While Iconic and Echoic Memory have been well researched, there are other types of Sensory Memory, like haptic, olfactory, etc., for which no sophisticated theories exist so far.
It should be noted, though, that according to Atkinson and Shiffrin (1968)[2] Sensory Memory was considered to be the same thing as Iconic Memory; Echoic Memory was added to the concept of Sensory Memory as a result of research done by Darwin and others (1972).[3] Let us consider the following intuitive example for Iconic Memory: Probably we all know the phenomenon that it seems possible to draw lines, figures or names with lighted sparklers by moving the sparkler fast enough in a dark environment. Physically, however, there are no such things as lines of light. So why can we nevertheless see such figures? This is due to Iconic Memory. Roughly speaking, we can think of this subtype of memory as a kind of photographic memory, but one which only lasts for a very short time (milliseconds, up to a second). The image of the light of a sparkler remains in our memory (persistence of vision) and thus makes it seem to us as if the light leaves lines in the dark. The term "Echoic Memory", as the name already suggests, refers to auditory input. Here the persistence time is a little longer than with Iconic Memory (up to five seconds).
At the level of Sensory Memory no manipulation of the incoming information occurs; from here it is transferred to the Working Memory. 'Transfer' here means that the amount of information is reduced, because the capacity of the Working Memory is not large enough to cope with all the input coming from our sense organs. The next paragraph will deal with the different theories of selection when transferring information from Sensory Memory to Working Memory.
One of the first experiments researching the phenomenon of Attention was the Shadowing Task (Cherry, 1953).[4] This experiment deals with the filtering of auditory information. The subject wears earphones and is presented with a different story in each ear. He or she has to listen to and repeat aloud the message in one ear (shadowing). When asked afterwards about the content of both stories, participants can only repeat the story from the shadowed side; they know nothing about the content of the other ear's story. From these results Broadbent derived his Filter Theory (1958).[5] This theory proposes that the filtering of information is based on specific physical properties of stimuli. For every frequency there exists a distinct nerve pathway; attention control selects which pathway is active and can thereby control which information is passed to the Working Memory. This way it is possible to follow the utterance of one person with a certain voice frequency even though there are many other sounds in the surroundings. But imagine a situation in which the so-called cocktail party effect applies: having a conversation in a loud crowd at a party and listening to your interlocutor, you will immediately switch to another conversation if its content is semantically relevant to you, e.g. if your name is mentioned.
So filtering also happens semantically. The above-mentioned Shadowing Task was modified so that the semantic content of a sentence was split up between the ears; the subject, although shadowing, was able to repeat the whole sentence because he or she was unconsciously following the semantic content.
Reacting to the effect of semantic filtering, new theories were developed. Two important ones are the Attenuation Theory (Treisman, 1964)[6] and the Late Selection Theory (Deutsch & Deutsch, 1963).[7] The former proposes that we attenuate less relevant information but do not filter it out completely; semantic information on ignored frequencies can thus still be analyzed, though not as efficiently as on the attended frequencies. The Late Selection Theory presumes that all information is analyzed first and the decision about its importance is made afterwards. Treisman and Geffen conducted an experiment, a revision of the Shadowing Task, to find out which of the two theories holds. Again the subjects had to shadow one ear, but in addition they had to pay attention to a certain sound which could appear in either ear and react to it in a certain way (for example, by knocking on the table). The result: subjects identified the target sound in 87% of the cases on the shadowed ear but in only 8% of the cases on the ignored ear. This shows that the information on the ignored side must be attenuated, since the rate of identification is lower. If the Late Selection Theory held, the subjects would analyze all information and identify the same amount on the ignored side as on the shadowed side. Since this is not the case, Treisman's Attenuation Theory explains the empirical results more accurately.

Illustration of the Attention Control Model by a) Treisman - Attenuation Theory and b) Deutsch & Deutsch – Late Selection Theory.

## Short Term Memory

Short Term Memory (STM) was initially discussed by Atkinson and Shiffrin (1968).[8] The Short Term Memory is the link between Sensory Memory and Long Term Memory (LTM). Later Baddeley proposed a more sophisticated approach and called the interface Working Memory (WM). We will first look at the classical Short Term Memory model and then go on to the concept of Working Memory.

As the name suggests, information is retained in the Short Term Memory for a rather short period of time (15–30 seconds).

Short Term Memory

If we look up a phone number in the phone book and hold it in mind long enough to dial the number, it is stored in Short Term Memory. This is an example of a piece of information which can be remembered for a short period of time. According to George Miller (1956)[9] the capacity of Short Term Memory is five to nine pieces of information ("the magical number seven, plus or minus two"). The term "piece of information" or, as it is also called, chunk might strike one as a little vague. All of the following are considered chunks: single digits or letters, whole words or even sentences and the like. Experiments, also done by Miller, have shown that chunking (the process of bundling information) is a useful method to memorize more than just single items. Gobet et al. defined a chunk as "a collection of elements that are strongly associated with one another but are weakly associated with other chunks" (Goldstein, 2005).[10] A very intuitive example of chunking information is the following:
Try to remember the following digits:

• 0 3 1 2 1 9 8 2

But you could also try another strategy to remember these digits:

• 03. 12. 1982.

With this strategy you bundled eight pieces of information (eight digits) into three chunks by treating them as a date schema.
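The date strategy above can be sketched in code; a minimal illustration (the day/month/year grouping is simply the schema chosen here, not a general rule):

```python
def chunk_as_date(digits):
    """Group an 8-digit string into three date-like chunks (day, month, year)."""
    assert len(digits) == 8, "expects exactly eight digits"
    return [digits[0:2], digits[2:4], digits[4:8]]

# Eight separate items become three chunks, comfortably within
# Miller's "seven, plus or minus two" capacity estimate.
print(chunk_as_date("03121982"))  # ['03', '12', '1982']
```

The point is not the code itself but the reduction: the load on Short Term Memory drops from eight items to three.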
A famous experiment concerned with chunking was conducted by Chase and Simon (1973)[11] with novices and experts in chess. When asked to remember certain arrangements of chess pieces on the board, the experts performed significantly better than the novices. However, if the pieces were arranged arbitrarily, i.e. not corresponding to possible game situations, both the experts and the novices performed equally poorly. The experienced chess players do not try to remember the single positions of the pieces in a realistic game situation, but whole constellations of pieces as already seen in earlier games. In unrealistic game situations this strategy cannot work, which shows that chunking (as done by experienced chess players) enhances performance only in specific memory tasks.

From Short Term Memory to Baddeley’s Working Memory Model

## Working Memory

According to Baddeley, Working Memory is limited in its capacity (the same limitations hold as for Short Term Memory) and the Working Memory is not only capable of storage, but also of the manipulation of incoming information. Working Memory consists of three parts:

Working Memory
• Phonological Loop
• Visuospatial Sketchpad
• Central Executive

We will consider each module in turn:
The Phonological Loop is responsible for auditory and verbal information, such as phone numbers, people’s names or general understanding of what other people are talking about. We could roughly say that it is a system specialized for language. This system can again be subdivided into an active and a passive part. The storage of information belongs to the passive part and fades after two seconds if the information is not rehearsed explicitly. Rehearsal, on the other hand, is regarded as the active part of the Phonological Loop. The repetition of information deepens the memory. There are three well-known phenomena that support the idea that the Phonological Loop is specialized for language: The phonological similarity effect, the word-length effect and articulatory suppression. When words that sound similar are confused, we speak of the phonological similarity effect. The word-length effect refers to the fact that it is more difficult to memorize a list of long words and better results can be achieved if a list of short words is memorized. Let us look at the phenomenon of articulatory suppression in a little more detail. Consider the following experiment:
Participants are asked to memorize a list of words while saying "the, the, the ...“ out loud. What we find is that, with respect to the word-length effect, the difference in performance between lists of long and short words is levelled out. Both lists are memorized equally poorly. The explanation given by Baddeley et al. (1986),[13] who conducted this experiment, is that the constant repetition of the word "the" prevents the rehearsal of the words in the lists, independent of whether the list contains long or short words. The findings become even more drastic if we compare the memory-performance in the following experiment (also conducted by Baddeley and his co-workers in 1986):

• Initiating movement
• Control of conscious attention
Problems which arise with the Working Memory approach

In theory, all information has to pass through Working Memory in order to be stored in Long Term Memory. However, cases have been reported in which patients could form Long Term Memories even though their STM abilities were severely reduced. This clearly poses a problem for the modal model approach. Shallice and Warrington (1970)[15] therefore suggested that there must be another route by which information can enter Long Term Memory besides Working Memory.

## Long Term Memory

As the name already suggests, Long Term Memory is the system in which memories are stored for a long time. "Long" in this sense means anything between a few minutes and several years or even decades, up to a lifetime.

Long Term Memory

Similar to Working Memory, Long Term Memory can again be subdivided into different types. A major distinction is made between Declarative (conscious) and Implicit (unconscious) Memory. These two subtypes are again split into two components each: Episodic and Semantic Memory with respect to Declarative Memory, and Priming Effects and Procedural Memory with respect to Implicit Memory. In contrast to Short Term or Working Memory, the capacity of Long Term Memory is theoretically infinite. Opinions differ as to whether information remains in Long Term Memory forever or whether it can be deleted. The main argument for the latter view is that apparently not all information that was ever stored in LTM can be recalled. Theories that regard Long Term Memories as not being subject to deletion, however, emphasize a useful distinction between the existence of information and the ability to retrieve or recall that information at a given moment. There are several theories about the "forgetting" of information; these will be covered in the section "Forgetting and False Memory".

#### Declarative Memory

Let us now consider the two types of Declarative Memory. As noted above, these are Episodic and Semantic Memory. Episodic Memory refers to memories for particular events that someone has experienced (autobiographical information). Typically, those memories are connected to specific times and places. Semantic Memory, on the other hand, refers to knowledge about the world that is not connected to personal events. Vocabularies, concepts, numbers or facts would be stored in Semantic Memory. Another subtype of memories stored in Semantic Memory is that of so-called Scripts. Scripts are something like blueprints of what happens in a certain situation, for example what usually happens when you visit a restaurant (you get the menu, order your meal, eat it and pay the bill). Semantic and Episodic Memory are usually closely related to one another, i.e. memory of facts might be enhanced by interaction with memory of personal events and vice versa. For example, the factual question of whether people put vinegar on their chips might be answered affirmatively by remembering the last time you saw someone eating fish and chips. The other way around, good Semantic Memory about certain things, such as football, can contribute to more detailed Episodic Memory of a particular personal event, like watching a football match. A person who barely knows the rules of the game will most probably have a less specific memory of the personal event of watching the game than a football expert will.

#### Implicit Memory

We now turn to the two different types of Implicit Memory. As the name suggests, both types are usually active where unconscious memories are concerned. This becomes most evident for Procedural Memory, though it must be said that the distinction between the two types is not as clear-cut as in the case of Declarative Memory, and often both categories are collapsed into the single category of Procedural Memory. But if we want to draw the distinction between Priming Effects and Procedural Memory, the latter category is responsible for highly skilled activities that can be performed without much conscious effort. Examples would be tying shoelaces or driving a car, once those activities have been practiced sufficiently; Procedural Memory can be thought of as a kind of movement plan. As regards the Priming Effect, consider the following experiment conducted by Perfect and Askew (1994):[16]

Final overview of all different types of memory and their interaction

# Forgetting and False Memory

As important as memory is, the process of Forgetting is equally familiar to everybody.
One might therefore wonder:

• Why do we forget at all?
• What do we forget?
• How do we forget?

Why do we forget at all?

One answer might be something you could call "mental hygiene". It is not useful to remember every little detail of your life and your surroundings; on the contrary, it would be a disadvantage, because an overload of facts in your memory might keep you from recalling the important things quickly enough. It is therefore important that unused memories are "cleaned up" so that only relevant information is stored.

What do we forget and how?

There are different theories about how things are forgotten. One theory proposes that the capacity of the Long Term Memory is infinite. This would mean that actually all memories are stored in the LTM but some information cannot be recalled (anymore) due to factors to be mentioned in the following paragraphs:

There are two main theories about the causes of forgetting:

• The Trace Decay Theory states that you need to follow a certain path, or trace, to recall a memory. If this path has not been used for some time, the activity of the information decreases (it fades, or decays), which makes the memory difficult or impossible to recall.
• The Interference Theory proposes that all memories interfere with each other. One distinguishes between two kinds of interferences:
• Proactive Interference:
Earlier memories influence new ones or hinder one from forming them.
• Retroactive Interference:
Old memories are changed by new ones, perhaps so much that the original is completely 'lost'.
• Which of the two theories applies in your opinion?
• Do you agree with a mixture of the two?

In 1885 Hermann Ebbinghaus conducted several self-experiments to research human forgetting. He memorized lists of meaningless syllables, like "WUB" and "ZOF", and over several weeks tried to recall as many as possible after certain intervals of time. He found that forgetting can be described by an almost logarithmic curve, the so-called forgetting curve, which you can see on the left.
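Ebbinghaus's forgetting curve is often summarized by an exponential retention function R(t) = e^(−t/S), where S is a "memory strength" parameter. Both the functional form and the value of S below are illustrative modeling assumptions, not Ebbinghaus's own figures; the sketch only shows the curve's characteristic shape, a steep initial drop that then levels off:

```python
import math

def retention(t_hours, strength=20.0):
    """Fraction of material retained after t_hours.

    Uses the common exponential model R(t) = exp(-t / S);
    the strength S = 20 hours is an illustrative value only.
    """
    return math.exp(-t_hours / strength)

# Retention falls quickly at first, then flattens out:
for t in (0, 1, 24, 24 * 7):
    print(f"after {t:4d} h: {retention(t):.2f}")
```

Rehearsal at spaced intervals is usually modeled as raising S, which flattens the curve; that extension is left out here.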

These theories about forgetting already make clear that memory is not a reliable recorder but a construction based on what actually happened plus additional influences, such as other knowledge, experiences, and expectations. Thus false memories are easily created.

In general there are three types of tendencies towards which people’s memories are changed. These tendencies are called

## Biases in memory

One distinguishes between three major types:

• Egocentric Bias
It makes one see oneself in the best possible light.
• Consistency Bias
Because of it, one perceives one's basic attitudes as remaining consistent over time.
• Positive Change Bias
It makes one perceive things as generally improving.

(For a list of more known memory biases see: List of memory biases)

There are moments in our lives that we are sure we will never forget. It is generally believed that memories of events in which we are emotionally involved are retained longer than others and that we know every little detail of them. These kinds of memories are called Flashbulb Memories.
The accuracy of these memories is an illusion, though. The more time passes, the more these memories change, while our feeling of certainty and accuracy increases. Examples of Flashbulb Memories are one's wedding, the birth of one's child or tragedies like September 11th.

Interesting changes in memory can also occur due to Misleading Postevent Information (MPI). Information given by another person after an event can, so to speak, reshape one's memory of it in a certain respect. This effect was shown in an experiment by Loftus and Palmer (1974):[17] The subjects watched a film containing several car accidents. Afterwards they were divided into three groups that were each questioned differently. The control group was not asked about the speed of the cars at all; in the other groups, questions with a certain key word were posed. One group was asked how fast the cars were going when they "hit" each other, while for the other group the verb "smashed" was used. One week later all participants were asked whether they had seen broken glass in the films. Both the estimate of speed and the number of people claiming to have seen broken glass increased steadily from the control group to the "smashed" group.
Based on this Misinformation Effect the Memory Impairment Hypothesis was proposed.
This hypothesis states that suggestive and more detailed information received after the actual memory was formed can replace the old memory.
Keeping such misleading information in mind, one can imagine how easily eyewitness testimony can be (purposely or accidentally) manipulated. Depending on which questions witnesses are asked, they might later remember seeing, for example, a weapon or not.

These kinds of changes in memory are present in everyone on a daily basis. But there are other cases: people with a brain lesion sometimes suffer from Confabulation. They construct absurd and incomplete memories that can even contradict other memories or what they know. Although such people may even be aware of the absurdity of their memories, they remain firmly convinced of them. (See Helen Phillips' article Mind fiction: Why your brain tells tall tales.)

## Repressed and Recovered Memories

If one cannot remember an event or detail, it does not mean that the memory is completely lost. Instead one says that these memories are repressed, meaning that they cannot easily be remembered. The process of remembering them is called recovery.
Recovering of a repressed memory usually occurs due to a retrieval cue. This might be an object or a scene that reminds one of something which has happened long ago.
Traumatic events, for example from childhood, can be recovered with the help of a therapist. In this way, perpetrators have been brought to trial after decades.
Still, the correctness of the "recovered" memory is not guaranteed: as we know, memory is not reliable, and if the occurrence of an event is merely suggested, one might produce a false memory.
Look at the illustration to the right to be able to relate to these processes.

How did the memory for an event become what it is?

Beyond these everyday distortions, errors in memory and amnesia can also be caused by damage to the brain. The following paragraphs present the most important brain regions enabling memory and the effects of damage to them.

# Some neurobiological facts about memory

In this section we will first consider how information is stored in synapses and then talk about two regions of the brain that are mainly involved in forming new memories, namely the amygdala and the hippocampus. To show what effects memory diseases can have and how they are classified, we will discuss a case study of amnesia and two other common examples of amnesic diseases: Korsakoff's amnesia and Alzheimer's disease.

## Information storage

The idea that physiological changes at synapses happen during learning and memory was first introduced by Donald Hebb.[18] It was indeed shown that activity at a synapse leads to structural changes there and to enhanced firing in the postsynaptic neuron. Since this enhanced firing lasts for several days or weeks, we speak of Long Term Potentiation (LTP). During this process existing synaptic proteins are altered and new proteins are synthesized at the modified synapse. What does all this have to do with memory? It has been discovered that LTP is most easily generated in regions of the brain which are involved in learning and memory, especially the hippocampus, about which we will talk in more detail later. Hebb also proposed that not just a single synapse between two neurons is involved, but that a whole group of neurons becomes more likely to fire together; an experience is then represented by the firing of this group. This is often summarized in the principle: "what fires together wires together".
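Hebb's principle is commonly formalized as a simple weight update, Δw = η·x·y, where x and y are the pre- and postsynaptic activities and η a learning rate. The rule, the learning rate, and the activity values below are textbook conventions and illustrative assumptions, not something derived from the experiments described here; the sketch only shows how repeated co-activation strengthens a connection:

```python
def hebbian_update(w, pre, post, lr=0.1):
    """One Hebbian learning step: strengthen w when pre- and
    postsynaptic neurons are active at the same time."""
    return w + lr * pre * post

w = 0.0
# Repeated co-activation of the two neurons strengthens the synapse,
# a rough analogue of Long Term Potentiation.
for _ in range(5):
    w = hebbian_update(w, pre=1.0, post=1.0)
print(w)  # the weight has grown to about 0.5

# If either neuron is silent, the weight does not change:
assert hebbian_update(w, pre=0.0, post=1.0) == w
```

Note that this bare rule only ever increases weights; real models add decay or normalization terms, which are omitted here for clarity.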

## Amygdala

The amygdala is involved in the modulation of memory consolidation.

Following any learning event, the Long Term Memory for the event is not formed instantaneously. Rather, information regarding the event is slowly assimilated into long term storage over time, a process referred to as memory consolidation, until it reaches a relatively permanent state. During the consolidation period the memory can be modulated. In particular, it appears that emotional arousal following a learning event influences the strength of the subsequent memory for that event: greater emotional arousal enhances a person's retention of the event. Experiments have shown that administering stress hormones to individuals immediately after they learn something enhances their retention when they are tested two weeks later. The amygdala, especially its basolateral nuclei, mediates these effects of emotional arousal on the strength of the memory. James McGaugh and co-workers trained animals on a variety of learning tasks and found that drugs injected into the amygdala after training affect the animals' subsequent retention of the task. These tasks include basic Pavlovian tasks such as inhibitory avoidance, where a rat learns to associate a mild footshock with a particular compartment of an apparatus, and more complex tasks such as the spatial or cued water maze, where a rat learns to swim to a platform to escape the water. When a drug that activated the amygdala was injected, the animals showed better memory for the training; when a drug that inactivated the amygdala was injected, the animals showed impaired memory for the task. Despite the importance of the amygdala in modulating memory consolidation, however, learning can occur without it, although such learning appears to be impaired, as in the fear conditioning impairments that follow amygdala damage.
Evidence from work with humans indicates a similar role of the amygdala: amygdala activity at the time of encoding correlates with retention of the encoded information. This correlation, however, depends on the relative "emotionality" of the information; more emotionally arousing information increases amygdala activity, and that activity correlates with retention.

## Hippocampus

Psychologists and neuroscientists dispute the precise role of the hippocampus, but generally agree that it plays an essential role in the formation of new memories about experienced events (Episodic or Autobiographical Memory).

Some researchers prefer to consider the hippocampus as part of a larger medial temporal lobe memory system responsible for general declarative memory (memories that can be explicitly verbalized; these would include, for example, memory for facts in addition to episodic memory). Some evidence supports the idea that, although these forms of memory often last a lifetime, the hippocampus ceases to play a crucial role in their retention after a period of consolidation. Damage to the hippocampus usually results in profound difficulties in forming new memories (anterograde amnesia) and normally also affects access to memories formed prior to the damage (retrograde amnesia). Although the retrograde effect normally extends some years back from the brain damage, in some cases older memories remain intact; this sparing of older memories has led to the idea that consolidation over time involves the transfer of memories out of the hippocampus to other parts of the brain. However, testing the sparing of older memories is difficult, and in some cases of retrograde amnesia the sparing appears to affect memories formed decades before the damage occurred, so the role of the hippocampus in maintaining these older memories remains controversial.

## Amnesia

Different types of Amnesia

Amnesia can occur after damage to a number of regions in the medial temporal lobe and their surrounding structures. The patient H.M. is probably the best-known amnesia patient. His medial temporal lobes, including the hippocampus, were removed in an attempt to treat his epilepsy. After this surgery H.M. was no longer able to remember things which had happened after his 16th birthday, eleven years before the surgery; given the definitions above, he suffered from retrograde amnesia. Because his hippocampus had been removed, he was also unable to learn new information, so H.M. suffered not only from retrograde but also from anterograde amnesia. His Implicit Memory, however, was still working: in procedural memory tests, for example, he still performed well. When asked to draw a star on a piece of paper that he could see only in a mirror, he performed as badly as every other participant at the beginning, but after some weeks his performance improved, even though he could not remember having done the task many times before. Thus H.M.'s Declarative Memory showed severe deficits, but his Implicit Memory was still intact. Another quite common cause of amnesia is Korsakoff's syndrome, also called Korsakoff's amnesia. It is usually elicited by long-term alcoholism through a prolonged deficiency of vitamin B1 and is associated with pathology of the midline diencephalon, including the dorsomedial thalamus. Alzheimer's disease is probably the best-known type of amnesia because it is the most common in our society. Over 40 percent of people older than 80 are affected by Alzheimer's disease. It is a neurodegenerative disease, and the brain region most affected is the entorhinal cortex. Since this cortex forms the main input and output of the hippocampus, damage here is especially severe.
Knowing that the hippocampus is especially involved in forming new memories, one can already guess that patients have difficulties learning new information. In late stages of Alzheimer's disease, however, retrograde amnesia and impairments of other cognitive abilities, which we are not going to discuss here, may occur as well.

This figure shows the brain structures which are involved in forming new memories
Final checklist of what you should keep in mind

1. Why does memory exist?
2. What is sensory memory?
3. What is the distinction between Short Term memory and Working Memory?
4. What is Long Term Memory and which brain area(s) are involved in forming new memories?
5. Remember the main results of the theories (for example: What does the Filter Theory show?)
6. Don’t forget why we forget!

# References

1. Quotation from www.wwnorton.com.
2. Atkinson, R. C. & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation (Volume 2). New York: Academic Press.
3. Darwin, C. J., Turvey, M. T., & Crowder, R. G. (1972). An auditory analogue of the Sperling partial report procedure: Evidence for brief auditory storage. Cognitive Psychology, 3, 255-267.
4. Cherry, E. C. (1953). Some experiments on the recognition of speech with one and with two ears. Journal of the Acoustical Society of America, 25, 975-979.
5. Broadbent, D. E. (1958). Perception and communication. New York: Pergamon.
6. Treisman, A. M. (1964). Monitoring and storage of irrelevant messages and selective attention. Journal of Verbal Learning and Verbal Behaviour, 3, 449-459.
7. Deutsch, J. A. & Deutsch, D. (1963). Attention: Some theoretical considerations. Psychological Review, 70, 80-90.
8. Atkinson, R. C. & Shiffrin, R. M. (1968). Human memory: A proposed system and its control processes. In K. Spence & J. Spence (Eds.), The psychology of learning and motivation (Volume 2). New York: Academic Press.
9. Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81-97.
10. Goldstein, E. B. (2005). Cognitive Psychology. London: Thomson Learning, page 157.
11. Chase, W. G. & Simon, H. A. (1973). The mind's eye in chess. In W. G. Chase (Ed.), Visual information processing.
12. Baddeley, A. D. & Hitch, G. (1974). Working memory. In G. A. Bower (Ed.), Recent advances in learning and motivation (Vol. 8).
13. Baddeley, A. D. (1986). Working Memory. Oxford: Oxford University Press.
14. Brandimonte, M. A., Hitch, G. J., & Bishop, D. V. M. (1992). Influence of short-term memory codes on visual image processing: Evidence from image transformation tasks. Journal of Experimental Psychology: Learning, Memory, and Cognition, 18, 157-165.
15. Shallice, T., & Warrington, E. K. (1970). Independent functioning of verbal memory stores: A neuropsychological study. Quarterly Journal of Experimental Psychology, 22, 261-273.
16. Perfect, T. J., & Askew, C. (1994). Print adverts: Not remembered but memorable. Applied Cognitive Psychology, 8, 693-703.
17. Loftus, E. F., & Palmer, J. C. (1974). Reconstruction of an automobile destruction: An example of the interaction between language and memory. Journal of Verbal Learning and Verbal Behavior, 13, 585-589.
18. Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley.

# Everyday memory - Eyewitness testimony

## Introduction

Witness psychology is the study of the human being as an observer and reporter of events in life. It is about how detailed and accurate our registration of what is happening is, how well we remember what we observed, what causes us to forget or to remember things incorrectly, and our ability to assess the reliability and credibility of other people’s accounts. It is the study of observation and memory for large and small events in life, from everyday trivialities to the dramatic and traumatic events that shake our lives (Magnussen, 2010).

## Basic concepts

The eyewitness identification literature has developed a number of definitions and concepts that require explanation. Each definition and concept is described below.

A lineup is a procedure in which a criminal suspect (or a picture of the suspect) is placed among other people (or pictures of other people) and shown to an eyewitness to see if the witness will identify the suspect as the culprit in question. The term suspect should not be confused with the term culprit: a suspect might or might not be the culprit, for a suspect is merely suspected of being the culprit (Wells & Olson, 2003).

Fillers are people in the lineup who are not suspects. Fillers, sometimes called foils or distractors, are known-innocent members of the lineup; the identification of a filler would therefore not result in charges being brought against that filler. A culprit-absent lineup is one in which an innocent suspect is embedded among fillers, and a culprit-present lineup is one in which a guilty suspect (the culprit) is embedded among fillers. The primary literature sometimes calls these target-absent and target-present lineups (Wells & Olson, 2003).

A simultaneous lineup is one in which all lineup members are presented to the eyewitness at once and is the most common lineup procedure in use by law enforcement. A sequential lineup, on the other hand, is one in which the witness is shown only one person at a time but with the expectation that there are several lineup members to be shown (Wells & Olson, 2003).

A lineup’s functional size is the number of lineup members who are “viable” choices for the eyewitness. For example, if the eyewitness described the culprit as being a tall male with dark hair and the suspect is the only lineup member who is tall with dark hair, then the lineup’s functional size would be 1.0 even if there were 10 fillers. Today functional size is used generically to mean the number of lineup members who fit the eyewitness’s description of the culprit (Wells & Olson, 2003).

Mock witnesses are people who did not actually witness the crime but are asked to pick a person from the lineup based on the eyewitness’s verbal description of the culprit. They are shown the lineup and asked to indicate who the offender is. Mock witnesses are used to test the functional size of the lineup (Wells & Olson, 2003).
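
As a sketch, the description-matching notion of functional size defined above can be expressed as a simple count. The attribute names and lineup data below are hypothetical, chosen only to mirror the tall, dark-haired example in the text; this is an illustration under those assumptions, not a standard research implementation.

```python
def functional_size(lineup, description):
    """Count the lineup members who fit the eyewitness's verbal
    description of the culprit (the generic sense of functional size)."""
    return sum(
        1 for member in lineup
        if all(member.get(feature) == value
               for feature, value in description.items())
    )

# Hypothetical data mirroring the example in the text: the suspect is the
# only tall, dark-haired member, so functional size is 1 despite 11 members.
description = {"height": "tall", "hair": "dark"}
lineup = [{"height": "tall", "hair": "dark"}]         # the suspect
lineup += [{"height": "short", "hair": "fair"}] * 10  # ten fillers

print(functional_size(lineup, description))  # prints 1
```

A lineup with a functional size far below its nominal size is biased toward the suspect, which is exactly what the mock-witness procedure is meant to detect.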

The diagnosticity of suspect identification is the ratio of the accurate identification rate with a culprit-present lineup to the inaccurate identification rate with a culprit-absent lineup. The diagnosticity of “not there” is the ratio of “not there” response rates with culprit-absent lineups to “not there” response rates with culprit-present lineups. The diagnosticity of filler identifications is the ratio of filler identification rates with culprit-absent lineups to filler identification rates with culprit-present lineups (Wells & Olson, 2003).
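
Since each diagnosticity measure above is simply a ratio of two response rates, the computation can be sketched in a few lines. The numeric rates below are invented for illustration only; they are not data from the cited studies.

```python
def diagnosticity(rate_a, rate_b):
    """Generic diagnosticity ratio: how many times more frequent a given
    response is under one lineup condition than under the other."""
    return rate_a / rate_b

# Hypothetical response rates, for illustration only.
suspect_id = diagnosticity(0.75, 0.25)  # accurate IDs (culprit present)
                                        # over innocent-suspect IDs (absent)
not_there = diagnosticity(0.5, 0.25)    # "not there" (absent) over (present)
filler_id = diagnosticity(0.5, 0.125)   # filler IDs (absent) over (present)

print(suspect_id, not_there, filler_id)  # prints 3.0 2.0 4.0
```

The higher the ratio, the more informative that type of response is about whether the culprit was actually in the lineup.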

Among variables that affect eyewitness identification accuracy, a system variable is one that is, or could be, under the control of the criminal justice system, while an estimator variable is one that is not. Estimator variables include lighting conditions at the time of witnessing and whether the witness and culprit are of the same or of different races. System variables include the instructions given to eyewitnesses prior to viewing a lineup and the functional size of a lineup. The distinction between estimator and system variables has assumed great significance in the eyewitness identification literature since it was introduced in the late 1970s. In large part, the prominence of this distinction attests to the applied nature of the eyewitness identification literature. Whereas the literature on estimator variables permits some degree of postdiction that might be useful for assessing the chances of mistaken identification after the fact, the system variable literature permits specification of how eyewitness identification errors might be prevented in the first place (Wells & Olson, 2003).

## History and Reliability

The criminal justice system relies heavily on eyewitness identification for investigating and prosecuting crimes. Psychology has built the only scientific literature on eyewitness identification and has warned the justice system of problems with eyewitness identification evidence. Recent DNA exoneration cases have corroborated the warnings of eyewitness identification researchers by showing that mistaken eyewitness identification was the largest single factor contributing to the conviction of innocent people (Wells & Olson, 2003).

Psychological researchers who began programs in the 1970s, however, have consistently articulated concerns about the accuracy of eyewitness identification. Using various methodologies, such as filmed events and live staged crimes, eyewitness researchers have noted that mistaken identification rates can be surprisingly high and that eyewitnesses often express certainty when they mistakenly select someone from a lineup. Although their findings were quite compelling to the researchers themselves, it was not until the late 1990s that criminal justice personnel began taking the research seriously. This change in attitude about the psychological literature on eyewitness identification arose primarily from the development of forensic DNA tests in the 1990s (Wells & Olson, 2003). More than 100 people who were convicted prior to the advent of forensic DNA testing have now been exonerated by DNA tests, and more than 75% of them were victims of mistaken eyewitness identification. The apparent prescience of the psychological literature regarding problems with eyewitness identification has given eyewitness identification research a rising prominence in the criminal justice system. Because most crimes do not leave DNA-rich biological traces, reliance on eyewitness identification for solving crimes has not been significantly diminished by the development of forensic DNA tests. The vast criminal justice system itself has never conducted an experiment on eyewitness identification (Wells & Olson, 2003).

## Research

The experimental method has dominated the eyewitness literature, and most of the experiments are lab based. Lab-based experimental methods for studying eyewitness issues have strengths and weaknesses. The primary strength of experimental methods is that they are proficient at establishing cause-effect relations. This is especially important for research on system variables, because one needs to know whether a particular system manipulation can in fact be expected to cause better or worse performance. In the real world, many variables operate at the same time and in interaction with one another (Wells, Memon & Penrod, 2006).

Multicollinearity can be quite a problem in archival/field research, because it can be very difficult to sort out which (correlated) variables are really responsible for observed effects. The control of variables that is possible in experimental research can bring clarity to causal relationships that are obscured in archival research. For example, experiments on stress during witnessing have shown, quite compellingly, that stress interferes with the ability of eyewitnesses to identify a central person in a stressful situation. However, when Yuille and Cutshall (1986) studied multiple witnesses to an actual shooting, they found that those who reported higher stress had better memories for details than did those who reported lower stress. Why the different results? In the experimental setting, stress was manipulated while other factors were held constant; in the actual shooting, those who were closer to the incident reported higher levels of stress (presumably because of their proximity) but also had a better view. Thus, in the actual case, stress and view covaried.

The experimental method is not well suited to postdiction with estimator variables; that is, there may be limits to generalizing from experiments to actual cases. One reason is that levels of estimator variables in experiments are fixed and not necessarily fully representative of the values observed in actual cases. In addition, it is not possible to include all interesting and plausible interactions among variables in any single experiment (or even in a modest number of experiments). Clearly, generalizations to actual cases are best undertaken on the basis of a substantial body of experimental research conducted across a wide variety of conditions and employing a wide variety of variables. Nevertheless, the literature is largely based on experiments due to a clear preference by eyewitness researchers to learn about cause and effect. Furthermore, “ground truth” (the actual facts of the witnessed event) is readily established in experiments, because the witnessed events are creations of the experimenters. This kind of ground truth is difficult, if not impossible, to establish when analyzing actual cases (Wells et al., 2006).

## Memory

The world is complex. Any natural situation or scene contains infinitely more physical and social information than the brain is able to detect, and the brain’s ability to record information is limited. In studies of immediate memory for strings of numbers that have been read once, it turns out that most people begin to go wrong if the number of single digits exceeds five (Nordby, Raanaas & Magnussen, 2002). The limits on what humans are capable of processing lead to an automatic selection of information. This selection is partially controlled by external factors, the factors in our environment that capture our attention (Magnussen, 2010). In witness psychology we often talk about the weapon focus, in which eyewitnesses attend to the weapon, which reduces their memory for other information (Eysenck & Keane, 2010). The selection of information in a situation of cognitive overload is also governed by psychological factors, the characteristics of the person who is observing: the emotional state and the explicit and implicit expectations of what will happen. Psychologists call such expectations cognitive schemas. Cognitive schemas form a sort of hypotheses or maps of the world based on past experiences. These hypotheses or mental maps determine which information the brain selects, how it is interpreted and whether it will be remembered. When information is uncertain or ambiguous, the psychological factors are strong (Magnussen, 2010).

Eyewitness testimony can be distorted via confirmation bias, i.e., event memory is influenced by the observer’s expectations. In a study by Lindholm and Christianson (1998), Swedish and immigrant students saw a videotaped simulated robbery in which the perpetrator seriously wounded a cashier with a knife. After watching the video, participants were shown colour photographs of eight men, four Swedes and four immigrants. Both Swedish and immigrant participants were twice as likely to select an innocent immigrant as an innocent Swede. Immigrants are overrepresented in Swedish crime statistics, and this influenced participants’ expectations concerning the likely ethnicity of the criminal (Eysenck & Keane, 2010).

Bartlett (1932) explained why our memory is influenced by our expectations. He argued that we possess numerous schemas, packets of knowledge stored in long-term memory. These schemas lead us to form certain expectations and can distort our memory by causing us to reconstruct the details of an event based on “what must have been true” (Eysenck & Keane, 2010). What information we select, and how we interpret it, is thus partially controlled by cognitive schemas. Many cognitive schemas are generalized and largely automated and non-conscious, such as the expectation that the world around us is stable and does not change spontaneously. Such generalized expectations are economical and ensure that we do not have to devote much energy to monitoring the routine events of daily life, but they also mean that in certain situations we may overlook important but unexpected information, or supplement our memory with details that are schema-consistent but did not actually occur (Magnussen, 2010).

## Estimator variables

First, estimator variables are central to our understanding of when and why eyewitnesses are most likely to make errors; informing police, prosecutors, judges, and juries about the conditions that can affect the accuracy of an eyewitness account is important. Second, our understanding of the importance of any given system variable is, at least at the extremes, dependent on the levels of the estimator variables. Consider a case in which a victim eyewitness is abducted and held for 48 hours by an unmasked perpetrator; the witness has repeated viewings of the perpetrator, lighting is good, and so on. We have every reason to believe that this witness has a deep and lasting memory of the perpetrator’s face. Then, within hours of being released, the eyewitness views a lineup. Under these conditions, we would not expect system variables to have much impact. For instance, a lineup that is biased against an innocent suspect is not likely to lead this eyewitness to choose the innocent person, because her memory is too strong to be influenced by lineup bias. On the other hand, when an eyewitness’s memory is weaker, system variables have a stronger impact. Psychologists have investigated the effects on identification accuracy of a large number of estimator variables: witness, crime, and perpetrator characteristics. Here we recount findings concerning several variables that have received significant research attention and achieved high levels of consensus among experts (based on items represented in a survey by Kassin, Tubb, Hosch, & Memon, 2001) or have been the subject of interesting recent research (Wells et al., 2006).

## References

Eysenck, M. W., & Keane, M. T. (2010). Cognitive Psychology: A Student’s Handbook (6th edn). New York: Psychology Press.

Magnussen, S., (2010). Vitnepsykologi. Pålitelighet og troverdighet I dagligliv og rettssal. Oslo: Abstrakt forlag as.

Nordby, K., Raanaas, R. K., & Magnussen, S. (2002). The expanding telephone number. I: Keying briefly presented multiple-digit numbers. Behaviour & Information Technology, 21, 27-38.

Wells, G. L., Memon, A., & Penrod, S. D. (2006). Eyewitness evidence: Improving its probative value. Psychological Science in the Public Interest, 7(2), 45-75.

Wells, G. L., & Olson, E. A. (2003). Eyewitness testimony. Annual Review of Psychology, 54, 277-295. doi: 10.1146/annurev.psych.54.101601.145028

# Memory and Language

## Introduction

"You need memory to keep track of the flow of conversation" [1]

Maybe the interaction between memory and language does not seem very obvious at first, but it is necessary for leading a conversation properly. Memory is the component for storing and retrieving information: it lets us remember both things just said and information heard earlier which might be important for the conversation. Language, in turn, serves to follow the conversational partner, to understand what he says and to reply to him in an understandable way.
This is not a simple process which can be learned within days; in childhood everybody learns to communicate, a process lasting for years.
So how does this work? Possible answers to the question of language acquisition are presented in this chapter. The chapter also provides an insight into the topic of malfunctions in the brain. Concerning dysfunctions the following questions arise: How can the system of language and memory be damaged? What causes language impairments? How do the impairments become obvious? These are some of the topics dealt with in this chapter.

Up to now, the full depth of memory and language has not been explored, partly because the present financial resources are insufficient. Moreover, the connection between memory and language mostly becomes obvious only when an impairment arises: certain brain areas can be studied by comparing a healthy brain with an impaired one. It is then possible to find out what function a brain area has and how a dysfunction manifests itself.

## Basics

### Memory

Memory is the ability of the nervous system to receive and keep information. It is divided into three parts: Sensory memory, Short-term memory and Long-term memory. Sensory memory holds information for milliseconds and is separated into two components. The iconic memory is responsible for visual information, whereas auditory information is processed in the echoic memory. Short-term memory keeps information for at most half a minute. Long-term memory, which can store information over decades, consists of the conscious explicit and the unconscious implicit memory. Explicit memory, also known as declarative, can be subdivided into semantic and episodic memory. Procedural memory and priming effects are components of the implicit memory.

Brain regions:

| Brain region | Memory system |
|---|---|
| Frontal lobe, parietal lobe, dorsolateral prefrontal cortex | Short-term Memory / Working Memory |
| Hippocampus | Short-term Memory → Long-term Memory |
| Medial temporal lobe (neocortex) | Declarative Memory |
| Amygdala, Cerebellum | Procedural Memory |

For detailed information see chapter Memory

### Language

Language is an essential system for communication which highly influences our lives. This system uses sounds, symbols and gestures for the purpose of communication. The visual and auditory systems of the human body are the entrance pathways through which language enters the brain; the motor system, responsible for speech and writing production, serves as its exit pathway. The nature of language lies in the brain processes between these sensory and motor systems, especially between visual or auditory input and written or spoken output. Most of our knowledge about the brain mechanisms for language is deduced from studies of language deficits resulting from brain damage. Even though there are about 10,000 different languages and dialects in the world, all of them express the subtleties of human experience and emotion.

For detailed information see chapters Comprehension and Neuroscience of Comprehension

## Acquisition of language

A phenomenon which occurs daily and in everybody’s life is the acquisition of language. However, scientists are not yet able to explain the underlying processes in detail or to define the point at which language acquisition commences, even if they agree that it happens long before the first word is spoken.
Theorists like Catherine Snow and Michael Tomasello think that the acquisition of language skills begins at birth; others claim it already commences in the womb. Newborns are not able to speak, even though babbling activates the brain regions later involved in speech production.
The ability to understand the meaning of words begins before the first birthday, even though the words cannot yet be pronounced. The phonological representation of words in memory changes between the stage of repetitive syllable-babbling and the one-word stage. At first children associate words with concrete objects; this is followed by an extension to the class of objects. After a period of overgeneralisation, the children’s system of concepts approaches that of adults. To prove the assumption that the meaning of words is understood this early, researchers at MIT let children watch two video clips of “Sesame Street”. Simultaneously the children heard the sentences “Cookie Monster is tickling Big Bird” or “Big Bird is tickling Cookie Monster”. The babies consistently looked more at the video corresponding to the sentence, which is evidence for the comprehension of sentences more complex than those they are able to produce during the one-word period.
The different stages of speech production are listed in the table below.

| Age | Stage of Acquisition | Example |
|---|---|---|
| 6th month | Stage of babbling: systematic combining of vowels and consonants | |
| 7th – 10th month | Stage of repetitive syllable-babbling: higher proportion of consonants, paired with a vowel; monosyllabic reduplicated babbling | da, ma, ga |
| 11th – 12th month | Stage of variegated babbling: combination of different consonants and vowels | |
| 12th month | Usage of first words (John Locke, 1995): prephonological, consonant-vowel(-consonant) | car, hat |

Locke’s theory about the usage of the first word is only a general tendency. Other researchers, like the German psychologist Charlotte Bühler (1928), think that the first word is spoken around the tenth month, whereas Elizabeth Bates et al. (1992) proposed a period between eleven and 13 months. The one-word stage described above can last from two to ten months. By the second year of life a vocabulary of about 50 words evolves, four times more than the child actually utters; two thirds of the language produced is still babbling. After this stage the vocabulary increases rapidly: the so-called vocabulary spurt causes an increment of about one word every two hours. From then on children learn to have fluent conversations with a simple grammar containing errors.

As you can see in the following example, the length of the sentences and the grammatical output changes a lot. While raising his son, Knut keeps a tally of his son’s speech production, to see how fast the language develops:

Speech diary of Knut’s son Andy:
(Year; Month)
2;3: Play checkers. Big drum. I got horn. A bunny rabbit walk.
2;4: See marching bear go? Screw part machine. That busy bulldozer truck.
2;5: Now put boots on. Where wrench go? Mommy talking bout lady. What that paper clip doing?
2;6: Write a piece a paper. What that egg doing? I lost a shoe. No, I don't want to sit seat.
2;7: Where piece a paper go? Ursula has a boot on. Going to see kitten. Put the cigarette down. Dropped a rubber band. Shadow has hat just like that. Rintintin don't fly, Mommy.
2;8: Let me get down with the boots on. Don't be afraid a horses. How tiger be so healthy and fly like kite? Joshua throw like a penguin.
2;9: Where Mommy keep her pocket book? Show you something funny. Just like turtle make mud pie.
2;10: Look at that train Ursula brought. I simply don't want put in chair. You don't have paper. Do you want little bit, Cromer? I can't wear it tomorrow.
2;11: That birdie hopping by Missouri in bag? Do want some pie on your face? Why you mixing baby chocolate? I finish drinking all up down my throat. I said why not you coming in? Look at that piece a paper and tell it. We going turn light on so you can't see.
3;0: I going come in fourteen minutes. I going wear that to wedding. I see what happens. I have to save them now. Those are not strong mens. They are going sleep in wintertime. You dress me up like a baby elephant.
3;1: I like to play with something else. You know how to put it back together. I gon' make it like a rocket to blast off with. I put another one on the floor. You went to Boston University? You want to give me some carrots and some beans? Press the button and catch it, sir. I want some other peanuts. Why you put the pacifier in his mouth? Doggies like to climb up.
3;2: So it can't be cleaned? I broke my racing car. Do you know the light wents off? What happened to the bridge? When it's got a flat tire it's need a go to the station. I dream sometimes. I'm going to mail this so the letter can't come off. I want to have some espresso. The sun is not too bright. Can I have some sugar? Can I put my head in the mailbox so the mailman can know where I are and put me in the mailbox? Can I keep the screwdriver just like a carpenter keep the screwdriver? [2]

Obviously children are able to conjugate verbs and to decline nouns using regular rules. Producing irregular forms is more difficult, because these have to be learnt and stored in Long-term memory one by one. For acquiring grammatical skills, the observation of speech is more important than the repetition of words. Around the third birthday the complexity of language increases exponentially and reaches a rate of about 1000 syntactic types.
Another interesting field concerning the correlation between Memory and Language is multilingualism. Thinking about children raised bilingually, the question arises how the two languages are separated or combined in the brain. Scientists assume that especially lexical information is stored independently for each language, while the semantic and syntactic levels may be unified. Experiments have shown that bilinguals have a larger memory span when they listen to words not only in one but in both languages.

## Disorders and Malfunctions

Reading about disorders concerning memory and language, one might think of amnesia or aphasia, both common diseases of the brain regions concerned. But when dealing with the correlation of memory and language we want to introduce only diseases which involve loss of memory as well as loss of language.

### Alzheimer's Disease

Discovered in 1906 by Alois Alzheimer, this disease is the most common type of dementia. Alzheimer’s is characterised by symptoms like loss of memory, loss of language skills and impairments in skilled movements. Additionally, other cognitive functions such as planning or decision-making, which are connected to the frontal and temporal lobe, can be reduced. The correlation between memory and language is very important in this context because the two work together to establish conversations; when both are impaired, communication becomes a difficult task. People with Alzheimer’s have reduced working-memory capability, so they cannot keep in mind all of the information they heard during a conversation. They also forget the words they need to denote items, to express their desires and to understand what they are told. Affected persons also change their behaviour: they become anxious, suspicious or restless, and they may have delusions or hallucinations.

In the early stages of the disorder, sick persons become less energetic or suffer a slight loss of memory, but they are still able to dress themselves, to eat and to communicate. The middle stages of the disease are characterised by problems of navigation and orientation: patients do not find their way home or even forget where they live. In the late stages the patients’ ability to speak, read and write decreases enormously. They are no longer able to denote objects or to talk about their feelings and desires, so their family and the nursing staff have great problems finding out what the patients want to tell them. In the end-state the sick persons do not show any response or reaction; they lie in bed, have to be fed and are totally helpless. Most of them die four to six years after diagnosis, although the disease can endure from three to twenty years. One reason for this range is the difficulty of distinguishing Alzheimer’s from other related disorders: only after death, on seeing the shrinkage of the brain, can one definitely say that the person was affected by Alzheimer’s disease.

A comparison of the two brains (Genetic Science Learning Center, University of Utah, http://learn.genetics.utah.edu/). In the Alzheimer brain:

- The cortex shrivels up, damaging areas involved in thinking, planning and remembering.
- Shrinkage is especially severe in the hippocampus, an area of the cortex that plays a key role in the formation of new memories.
- Ventricles (fluid-filled spaces within the brain) grow larger.

Scientists say that long before the first symptoms appear, nerve cells that store and retrieve information have already begun to degenerate. There are two theories explaining the causes of Alzheimer’s disease. The first describes plaques, protein fragments which damage the connections between nerve cells. They arise when little fragments are released from nerve cell walls and associate with other fragments from outside the cell. These combined fragments, called plaques, attach to the outside of nerve cells and destroy the connections. The nerve cells then start to die because they are no longer provided with nutrients, and as a consequence stimuli are no longer transmitted. The second theory holds that tangles limit the functions of nerve cells. Tangles are twisted fibers of another protein that form inside brain cells and destroy the vital cell transport system, which is made of proteins. Scientists have not yet found out the exact role of plaques and tangles.

Genetic Science Learning Center, University of Utah, http://learn.genetics.utah.edu/:

- Alzheimer tissue has many fewer nerve cells and synapses than a healthy brain.
- Plaques, abnormal clusters of protein fragments, build up between nerve cells.
- Dead and dying nerve cells contain tangles, which are made up of twisted fibers of another protein.

Alzheimer’s progress is separated into three stages. In the early stage (1), tangles and plaques begin to evolve in brain areas where learning, memory, thinking and planning take place; this may begin 20 years before diagnosis. In the middle stage (2), plaques and tangles start to spread to areas of speaking and understanding speech, and the sense of where one’s body is in relation to surrounding objects is reduced; this stage may last from two to ten years. In advanced Alzheimer’s disease (3), most of the cortex is damaged, the brain starts to shrink seriously and cells begin to die. Patients lose their ability to speak and communicate, and they no longer recognise their family or people they know. This stage generally lasts from one to five years.

Today more than 18 million people suffer from Alzheimer’s disease; in Germany alone there are nearly 800,000, and the number of affected persons is increasing enormously. Alzheimer’s is often associated only with old people: five percent of people older than 65 years and fifteen to twenty percent of people older than 80 years suffer from it. But people in their late thirties and forties can also be affected by this heritable disease. The probability of suffering from Alzheimer’s when one’s parents have the typical old-age form of the disease is, however, not very high.

### Autism

Autism is a neurodevelopmental condition that causes disorders in several domains. For the last decade, autism has been studied within the framework of Autistic Spectrum Disorders, which include mild and severe autism as well as Asperger's syndrome. Autistic people, for example, have restricted perception and problems in information processing. The often-assumed intellectual giftedness only holds for a minority of people with autism, whereas the majority possess normal intelligence or lie below the average.
There are different types of autism, among others:

• Asperger’s syndrome – usually arising at the age of three
• infantile autism – arising between nine and eleven months after birth

The latter is important because it shows the correlation between memory and language in the children's behaviour very clearly. Two different types of infantile autism are the low functioning autism (LFA) and the high functioning autism (HFA). The LFA describes children with an IQ lower than 80, the HFA those with an IQ higher than 80. The disorders in both types are similar, but they are more strongly developed in children with LFA.
The disorders are mainly defined by the following aspects:

1. the inability of normal social interaction, e.g. amicable relations to other children
2. the inability of ordinary communication, e.g. disorder of spoken language/idiosyncratic language
3. stereotypical behaviour, e.g. stereotypical and restricted interests with an atypical content

To investigate the inability to manage normal communication and language, the University of Pittsburgh and the ESRC performed experiments to provide possible explanations. Sentences, stories or numbers were presented to children with autism and to typically developing children. The researchers concluded that the disorders in people with HFA and LFA are caused by an impairment of declarative memory. This impairment leads to difficulties in learning and remembering sentences, stories or personal events, whereas the ability to learn numbers remains intact. It has been shown that these children are unable to link the words they hear to their general knowledge, so the words are only partially learnt and carry an idiosyncratic meaning. This explains why children affected by LFA and HFA differ in their way of thinking from typically developing children; it is often difficult for them to understand others and vice versa. Furthermore, scientists believe that the process of language learning depends on an initial vocabulary of fully meaningful words. It is assumed that these children do not possess such a vocabulary, so their language development is impaired. In a few cases the acquisition of language fails completely, and the children are unable to use language at all. This inability to learn and use language can be a consequence of impaired declarative memory. It might also cause a low IQ, because the process of learning is language-mediated. In HFA the IQ is not significantly lower than that of typically developing children, which correlates well with the better understanding of word meanings in this milder form of autism. The experiments also showed that adults do not have problems handling language; a reason might be that they were taught to use it during development, or that they acquired this ability through reading and writing.
The causes of autism have not yet been explored thoroughly enough to offer much guidance on how to help and support people with autism in everyday life. It is still not clear whether the condition is really caused by genetic disorders; other neurological malfunctions such as brain damage or biochemical anomalies may also be responsible. Research into these questions has only just begun.

## References and Resources

1. E. B. Goldstein, "Cognitive Psychology - Connecting Mind, Research, and Everyday Experience", page 137, Thomson Wadsworth, 2005
2. S. Pinker, The Language Instinct, p.269f

Books

Steven Pinker: The Language Instinct; The Penguin Press, 1994, ISBN 0140175296
Gisela Klann-Delius: Spracherwerb; Sammlung Metzler, Bd 325; Verlag J.B.Metzler; Stuttgart, Weimar, 1999; ISBN 3476103218
Arnold Langenmayr: Sprachpsychologie - Ein Lehrbuch; Verlag für Psychologie, Hogrefe, 1997; ISBN 3801710440
Mark F. Bear, Barry W. Connors, Michael A. Paradiso: Neuroscience - Exploring The Brain; Lippincott Williams & Wilkins, 3rd edition, 2006; ISBN 0781760038

# Imagery

Note: Some figures are not included yet because of issues concerning their copyright.

## Introduction & History

Mental imagery was already discussed by the early Greek philosophers. Socrates sketched a relation between perception and imagery by assuming that visual sensory experience creates images in the human mind which are representations of the real world. Later, Aristotle stated that "thought is impossible without an image". At the beginning of the 18th century, Bishop Berkeley proposed another role for mental images, similar to the ideas of Socrates, in his theory of idealism: he assumed that our whole perception of the external world consists only of mental images.

At the end of the 19th century Wilhelm Wundt - the generally acknowledged founder of experimental psychology and cognitive psychology - called imagery, sensations and feelings the basic elements of consciousness. Furthermore, he had the idea that the study of imagery supports the study of cognition, because thinking is often accompanied by images. This remark was taken up by some psychologists and gave rise to the imageless-thought debate, which addressed the same question Aristotle had already asked: Is thought possible without imagery?

In the early 20th century, when Behaviourism became the mainstream of psychology, Watson argued that there is no visible evidence of images in human brains and that the study of imagery is therefore worthless. This general attitude towards the value of research on imagery did not change until the birth of cognitive psychology in the 1950s and 1960s.

Later on, imagery has often been believed to play a very large, even pivotal, role in both memory (Yates, 1966; Paivio, 1986) and motivation (McMahon, 1973). It is also commonly believed to be centrally involved in visuo-spatial reasoning and inventive or creative thought.

## Concept

Imagination is the ability to form images, perceptions and concepts that are not directly perceived through sight, hearing or the other senses. It is the work of the mind and underlies fantasy. Imagination helps to give meaning to experience and understanding to knowledge; it is a basic ability by which people make sense of the world, and it plays a key role in the learning process. A classic way of training the imagination is listening to stories, where the precision of the chosen words is the basic factor that lets the listener "generate a world". Through imagination we combine what we touch, see and hear into a coherent picture. In the professional use of psychology, the term refers to the process of reviving in the mind percepts of objects formerly presented in perception. Because this use contradicts everyday language, some psychologists prefer to describe the process as "imaging" or "imagery", or to speak of "reproductive" as opposed to "productive" or "constructive" imagination; the resulting images are seen with the "mind's eye". One hypothesis about the evolution of human imagination is that it allowed conscious beings to solve problems by means of mental simulation.

Children are often considered especially imaginative, because their ways of thinking are not yet fixed and are less constrained by rules and conventions than those of adults. Children commonly practise their imagination through stories and pretend play. When children fantasise, they act on two levels: on the first level, they use role play to enact what they have created with their imagination; on the second level, they pretend to believe that the make-believe situation is real, treating what they have created as an actual reality that already exists in the myth of the story.

Artists also need a great deal of imagination, since creative work requires breaking existing rules and bringing aesthetics into new patterns; the most famous inventions and entertainment products were created by someone's imagination. In its general use, the term means the process of forming in the mind images which have not previously been experienced, or which arise at least partly from elements of different experiences combined in new ways. Typical examples are fairy tales and fiction: fantasy novels and science fiction invite readers to pretend that the stories are true by appealing to objects which do not exist outside the fictional world. Imagination in this sense, not being limited to the exact knowledge demanded by practical needs, is to some extent free of objective restraints. The ability to imagine oneself in another person's place is also central to social relationships and social understanding. Einstein said: "Imagination ... is more important than knowledge. Knowledge is limited. Imagination encircles the world."

Nevertheless, imagination has limits in every field: an imagination that violates the basic laws of thought, contradicts practical possibility, or departs from what is rationally possible in a given situation is regarded as a mental disorder. The same limits apply to imagination in the realm of scientific hypotheses. Progress in scientific research is largely due to provisional explanations constructed by imagination, but such hypotheses must be framed in relation to previously ascertained facts and in accordance with the principles of the particular science. Imagination is thus an experimental part of thinking that creates function-based theories and ideas: it detaches objects from real perception and uses complex combinations of their features to envision new or altered ideas. These experimental ideas can be safely explored in mental simulation; if an idea proves plausible and its function real, it can then be realised in actuality. Imagination can be divided into involuntary imagination (as in dreams and daydreams) and voluntary imagination (reproductive imagination, creative imagination, and dreams of the future).

## The Imagery Debate

Imagine yourself back on vacation again. You are walking along the beach, projecting images of white benzene molecules onto the horizon. Suddenly you realise that there are two real little white dots beneath your projection. Curiously, you walk towards them until your visual field is filled by two serious-looking but fiercely debating scientists. As they take notice of your presence, they invite you to take a seat and listen to the still unsolved imagery debate.

Today’s imagery debate is mainly influenced by two opposing theories: On the one hand Zenon Pylyshyn’s (left) propositional theory and on the other hand Stephen Kosslyn’s (right) spatial representation theory of imagery processing.

### Theory of propositional representation

The theory of propositional representation was proposed by Zenon Pylyshyn in 1973. He described mental imagery as an epiphenomenon which accompanies the underlying cognitive processes but is not part of them. Mental images do not show us how the mind actually works; they only show us that something is happening, much like the display of a compact disc player: flashing lights indicate that something is going on, and we can infer what is happening, but the display does not show us how the processes inside the player work. Even if the display were broken, the compact disc player would still continue to play music.

#### Representation

The basic idea of propositional representation is that relationships between objects are represented by symbols, not by spatial mental images of the scene. For example, a bottle under a table would be represented by a formula made of symbols like UNDER(BOTTLE,TABLE). The term proposition is borrowed from the domains of logic and linguistics and denotes the smallest possible unit of information. Each proposition can be either true or false.

If there is a sentence like "Debby donated a big amount of money to Greenpeace, an organisation which protects the environment", it can be summarised by the propositions "Debby donated money to Greenpeace", "The amount of money was big" and "Greenpeace protects the environment". The truth value of the whole sentence depends on the truth values of its constituents: if one of the propositions is false, so is the whole sentence.
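The propositional format above can be sketched in code. The following is a minimal, hypothetical illustration (the predicate names and the example sentence come from this section; the data structure itself is an assumption): each proposition becomes a (predicate, arguments) pair, and the truth of the whole sentence is the conjunction of its constituents.

```python
# UNDER(BOTTLE, TABLE) rendered as a (predicate, arguments) pair
proposition = ("UNDER", ("BOTTLE", "TABLE"))

# The Greenpeace sentence decomposed into its three propositions,
# each paired with an assumed truth value
propositions = {
    ("DONATE", ("DEBBY", "MONEY", "GREENPEACE")): True,
    ("BIG", ("MONEY",)): True,
    ("PROTECT", ("GREENPEACE", "ENVIRONMENT")): True,
}

# The whole sentence is true only if every constituent proposition is true
sentence_is_true = all(propositions.values())
print(sentence_is_true)  # True

# Falsifying one constituent makes the sentence as a whole false
propositions[("BIG", ("MONEY",))] = False
print(all(propositions.values()))  # False
```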

#### Propositional networks

This model does not imply that a person remembers the sentence or its individual propositions in their exact literal wording. Rather, it is assumed that the information is stored in memory as a propositional network.

In Figure 1 each circle represents a single proposition. Regarding the fact that some components are connected to more than one proposition, they construct a network of propositions. Propositional networks can also have a hierarchy, if a single component of a proposition is not a single object, but a proposition itself. An example of a hierarchical propositional network describing the sentence "John believes that Anna will pass her exam" is illustrated in Figure 2.

#### Complex objects and schemes

Even complex objects can be generated and described by propositional representation. A complex object like a ship would consist of a structure of nodes which represent the ship's properties and the relationships between these properties.

Almost all humans have concepts of commonly known objects like ships or houses in their mind. These concepts are abstractions of complex propositional networks and are called schemes. For example our concept of a house includes propositions like:

Houses have rooms.
Houses can be made from wood.
Houses have walls.
Houses have windows.
...


Listing all of these propositions does not show the structure of relationships between these propositions. Instead, a concept of something can be arranged in a schema consisting of a list of attributes and values, which describe the properties of the object. Attributes describe possible forms of categorisation, while values represent the actual value for each attribute. The schema-representation of a house looks like this:

House
Category: building
Material: stone, wood
Contains: rooms
Function: shelter for humans
Shape: rectangular
...


The hierarchical structure of schemes is organised in categories. For example, "house" belongs to the category "building" (which has of course its own schema) and contains all attributes and values of the parent schema plus its own specific values and attributes. This way of organising objects in our environment into hierarchical models enables us to recognise objects we have never seen before in our life, because they can possibly be related to categories we already know.
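The inheritance of attributes from a parent category can be illustrated with a short, hypothetical sketch: a "house" schema names "building" as its category, and resolving the schema merges the parent's attributes with the child's own, the child's values taking precedence. The schema contents follow the house example above; the `resolve` helper and the dictionary layout are illustrative assumptions, not part of any psychological model.

```python
# Each schema is a dict of attribute -> value; "category" points to its parent
schemas = {
    "building": {
        "category": "structure",
        "function": "shelter",
    },
    "house": {
        "category": "building",
        "material": ["stone", "wood"],
        "contains": ["rooms"],
        "function": "shelter for humans",  # overrides the parent's value
        "shape": "rectangular",
    },
}

def resolve(name, schemas):
    """Merge a schema with all of its parent schemas, child values winning."""
    schema = schemas[name]
    parent = schema.get("category")
    if parent in schemas:
        merged = resolve(parent, schemas).copy()
        merged.update(schema)  # the child's own attributes take precedence
        return merged
    return dict(schema)

house = resolve("house", schemas)
print(house["function"])  # inherited attribute, overridden by the child
print(house["shape"])     # the child's own specific attribute
```

This mirrors the point made above: a house carries all attributes and values of its parent schema plus its own specific ones.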

#### Experimental support

In an experiment performed by Wiseman and Neisser in 1974, people were shown a picture which at first sight seems to consist of random black and white shapes; after some time, subjects realise that there is a dalmatian dog in it. The results show that people who recognised the dog remembered the picture better than people who did not recognise it. A possible explanation is that the picture is stored in memory not as a picture but as a proposition.

In an experiment by Weisberg in 1969, subjects had to memorise sentences like "Children who are slow eat bread that is cold". The subjects were then asked to name the first word from the sentence that came to mind in response to a cue word given by the experimenter. Almost all subjects associated the word "children" with the cue word "slow", although the word "bread" is positioned closer to "slow" in the sentence than "children" is. An explanation for this is that the sentence is stored in memory using the three propositions "Children are slow", "Children eat bread" and "Bread is cold". The subjects associated "children" with the cue "slow" because both belong to the same proposition, while "bread" and "slow" belong to different ones. Similar evidence was found in another experiment by Ratcliff and McKoon in 1978.

### Theory of spatial representation

Stephen Kosslyn's theory opposing Pylyshyn's propositional approach implies that images are not only represented by propositions. He tried to find evidence for a spatial representation system that constructs mental, analogous, three-dimensional models.

The primary role of this system is to organize spatial information in a general form that can be accessed by either perceptual or linguistic mechanisms. It also provides coordinate frameworks to describe object locations, thus creating a model of a perceived or described environment. The advantage of a coordinate representation is that it is directly analogous to the structure of real space and captures all possible relations between objects encoded in the coordinate space. These frameworks also reflect differences in the salience of objects and locations consistent with the properties of the environment, as well as the ways in which people interact with it. Thus, the representations created are models of physical and functional aspects of the environment.

#### Encoding

What, then, can be said about the primary components of cognitive spatial representation? Certainly, the distinction between the external world and our internal view of it is essential, and it is helpful to explore the relationship between the two further from a process-oriented perspective.

The classical approach assumes a complex internal representation in the mind that is constructed through a series of specific perceived stimuli, and that these stimuli generate specific internal responses. Research dealing specifically with geographic-scale space has worked from the perspective that the macro-scale physical environment is extremely complex and essentially beyond the control of the individual. This research, such as that of Lynch and of Golledge (1987) and his colleagues, has shown that there is a complex of behavioural responses generated from corresponding complex external stimuli, which are themselves interrelated. Moreover, the results of this research offer a view of our geographic knowledge as a highly interrelated external/internal system. Using landmarks encountered within the external landscape as navigational cues is the clearest example of this interrelationship.

The rationale is as follows: We gain information about our external environment from different kinds of perceptual experience; by navigating through and interacting directly with geographic space as well as by reading maps, through language, photographs and other communication media. Within all of these different types of experience, we encounter elements within the external world that act as symbols. These symbols, whether a landmark within the real landscape, a word or phrase, a line on a map or a building in a photograph, trigger our internal knowledge representation and generate appropriate responses. In other words, elements that we encounter within our environment act as external knowledge stores.

Each external symbol has meaning that is acquired through the sum of the individual perceiver's previous experience. That meaning is imparted by both the specific cultural context of that individual and by the specific meaning intended by the generator of that symbol. Of course, there are many elements within the natural environment not "generated" by anyone, but that nevertheless are imparted with very powerful meaning by cultures (e.g. the sun, moon and stars). Man-made elements within the environment, including elements such as buildings, are often specifically designed to act as symbols as at least part of their function. The sheer size of downtown office buildings, the pillars of a bank facade and church spires pointing skyward are designed to evoke an impression of power, stability or holiness, respectively.

These external symbols are themselves interrelated, and specific groupings of symbols may constitute self-contained external models of geographic space. Maps and landscape photographs are certainly clear examples of this. Elements of differing form (e.g., maps and text) can also be interrelated. These various external models of geographic space correspond to external memory. From the perspective just described, the total sum of any individual's knowledge is contained in a multiplicity of internal and external representations that function as a single, interactive whole. The representation as a whole can therefore be characterised as a synergistic, self-organising and highly dynamic network.

#### Experimental support

##### Interaction

Early experiments on imagery were done as early as 1910 by Perky. She tried to find out whether there is any interaction between imagery and perception, using a simple mechanism: subjects were told to project an image of a common object, such as a ship, onto a wall. Without their knowledge there was a back projection which subtly shone through the wall. The subjects then had to describe this picture, or were asked about, for example, the orientation or the colour of the ship. In Perky's experiment, none of the 20 subjects recognised that their description of the picture did not arise from their own mind but was completely influenced by the picture shown to them.

##### Image Scanning

Another seminal line of research in this field was Kosslyn's image-scanning experiments in the 1970s. Referring to the example of the mental representation of a ship, he found a further linearity in the movement of the mental focus from one part of the ship to another: the reaction time of the subjects increased with the distance between the two parts, which indicates that we actually create a mental picture of scenes while trying to solve small cognitive tasks. Interestingly, this visual ability can also be observed in the congenitally blind, as Marmor and Zaback (1976) found. Presuming that the underlying processes are the same as in sighted subjects, it can be concluded that there is a more deeply encoded system that has access to more than the visual input.

Other advocates of the spatial representation theory, Shepard and Metzler, developed the mental rotation task in 1971. Two objects are presented to a participant at different angles, and the task is to decide whether the objects are identical or not. The results show that reaction times increase linearly with the rotation angle between the objects: participants mentally rotate one object in order to match it to the other. Measuring the time such mental processes take is known as "mental chronometry".

Together with Paivio's memory research, this experiment was crucial for establishing the importance of imagery within cognitive psychology, because it showed the similarity of imagery to the processes of perception. For a mental rotation of 40° the subjects needed two seconds on average, whereas for a 140° rotation the reaction time increased to four seconds. It can therefore be concluded that people in general have a mental object rotation rate of about 50° per second.
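The arithmetic behind this conclusion can be reproduced directly from the two data points reported above:

```python
# Shepard and Metzler's reported averages: 40° of rotation took about
# 2 s, 140° took about 4 s
angle_1, time_1 = 40, 2.0   # degrees, seconds
angle_2, time_2 = 140, 4.0

# Slope of the reaction-time line: extra seconds per extra degree
seconds_per_degree = (time_2 - time_1) / (angle_2 - angle_1)

# Its inverse is the mental rotation rate
rotation_rate = 1 / seconds_per_degree
print(rotation_rate)  # 50.0 degrees per second
```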

##### Spatial Frameworks

Although most research on mental models has focussed on text comprehension, researchers generally believe that mental models are perceptually based. Indeed, people have been found to use spatial frameworks like those created for texts to retrieve spatial information about observed scenes (Bryant, 1991). Thus, people create the same sorts of spatial memory representations whether they read about an environment or see it themselves.

##### Size and the visual field

If an object is observed from different distances, it is harder to perceive details when the object is far away, because it then fills only a small part of the visual field. In 1973 Kosslyn performed an experiment to find out whether this also holds for mental images, which would show the similarity between spatial representation and the perception of the real environment. He told participants to imagine objects that were far away and objects that were near, and after questioning them about details he concluded that details can be observed better when the object is near and fills the visual field. He also told the participants to imagine animals of different sizes next to one another, for example an elephant and a rabbit. The elephant filled much more of the visual field than the rabbit, and it turned out that the participants were able to answer questions about the elephant more rapidly than about the rabbit. The participants then had to imagine the small animal next to an even smaller animal, such as a fly. This time the rabbit filled the bigger part of the visual field, and again questions about the bigger animal were answered faster. The result of Kosslyn's experiments is that people can observe more details of an object when it fills a bigger part of their mental visual field. This provides evidence that mental images are represented spatially.

### Discussion

Since the 1970s, many experiments conducted in the course of the two opposing points of view of the imagery debate have enriched our knowledge about imagery and memory to a great extent. The seesaw of assumed support was marked by many clever ideas. The following section is an example of the potential of such controversies.

In 1978, Kosslyn expanded his image-scanning experiment from objects to real distances represented on maps. In the picture you see our island with all the places you have encountered in this chapter. Try to imagine how far away from each other they are. This is exactly the experiment performed by Kosslyn. Again, he successfully predicted a linear dependency between reaction time and spatial distance, supporting his model.

In the same year, Pylyshyn answered with what is called the "tacit-knowledge explanation", because he supposed that participants draw on knowledge about the world without noticing it. The map is decomposed into nodes with edges in between, and the increase in time, he argued, is caused by the number of nodes visited until the goal node is reached.

Only four years later, Finke and Pinker published a counter-model. Picture (1) shows a surface with four dots, which was presented to the subjects. After two seconds it was replaced by picture (2), with an arrow on it. The subjects had to decide whether the arrow pointed at a former dot. The result was that they reacted more slowly when the arrow was farther away from a dot. Finke and Pinker concluded that within two seconds the distances can only be stored in a spatial representation of the surface.

To sum up, it is commonly believed that imagery and perception share certain features but also differ in some respects. For example, perception is a bottom-up process that originates with an image on the retina, whereas imagery is a top-down mechanism that originates when activity is generated in higher visual centres without an actual stimulus. Another distinction is that perception occurs automatically and remains relatively stable, whereas imagery needs effort and is fragile. But as psychological discussions failed to single out one correct theory, the debate has now moved to neuroscience, whose methods have improved considerably over the last three decades.

## Neuropsychological approach

### Investigating the brain - a way to resolve the imagery debate?

Visual imagery was investigated by psychological studies relying solely on behavioural experiments until the late 1980s. By that time, research on the brain through electrophysiological measurements such as the event-related potential (ERP) and brain-imaging techniques (fMRI, PET) became possible. It was therefore hoped that neurological evidence of how the brain responds to visual imagery would help to resolve the imagery debate.

We will see that many results from neuroscience support the theory that imagery and perception are closely connected and share the same physiological mechanisms. Nevertheless, the contradictory phenomenon of double dissociations between imagery and perception shows that the overlap is not perfect. A theory that tries to take all the neuropsychological results into account and gives an explanation for the dissociations is therefore presented at the end of this section.

### Support for shared physiological mechanisms of imagery and perception

Brain imaging experiments in the 1990s confirmed results that previous electrophysiological measurements had already suggested. In these experiments, the brain activity of participants was measured, using either PET or fMRI, both while they were creating visual images and while they were not. The experiments showed that imagery creates activity in the striate cortex, which, being the primary visual receiving area, is also active during visual perception. Figure 8 (not included yet due to copyright issues) shows how activity in the striate cortex increased both when a person perceived an object (“stimulus on”) and when the person created a visual image of it (“imagined stimulus”). Although the striate cortex has not become activated by imagery in all brain-imaging studies, most results indicate that it is activated when participants are asked to create detailed images.

Another approach to understanding imagery has been made through studies of people with brain damage, in order to determine whether imagery and perception are affected in the same way. Often, patients with perceptual problems also have problems in creating images, as in the case of people who have lost both the ability to see colour and the ability to create colours through imagery. Another example is that of a patient with unilateral neglect, which is due to damage to the parietal lobes and causes the patient to ignore objects in one half of his visual field. When the patient was asked to imagine himself standing at a place familiar to him and to describe the things he was seeing, it was found that he neglected not only the left side of his perceptions but also the left side of his mental images, as he could only name objects on the right-hand side of his mental image.

The idea that mental imagery and perception share physiological mechanisms is thus supported by both brain imaging experiments with normal participants and effects of brain damage like in patients with unilateral neglect. However, also contradictory results have been observed, indicating that the underlying mechanisms of perception and imagery cannot be identical.

### Double dissociation between imagery and perception

A double dissociation exists when a single dissociation (one function is present, another is absent) can be demonstrated in one person and the complementary type of single dissociation can be demonstrated in another person. Regarding imagery and perception, a double dissociation has been observed, as there are both patients with normal perception but impaired imagery and patients with impaired perception but normal imagery. Accordingly, one patient with damage to his occipital and parietal lobes was able to recognise objects and draw accurate pictures of objects placed before him, but was unable to draw pictures from memory, which requires imagery. Conversely, another patient suffering from visual agnosia was unable to identify pictures of objects even though he could recognise parts of them: for example, he did not recognise a picture of an asparagus but labelled it a “rose twig with thorns”. On the other hand, he was able to draw very detailed pictures from memory, a task depending on imagery.

As a double dissociation usually suggests that two functions rely on different brain regions or physiological mechanisms, the described examples imply that imagery and perception do not share exactly the same physiological mechanisms. This, of course, conflicts with the evidence from brain imaging measurements and the other cases of patients with brain damage mentioned above, which showed a close connection between imagery and perception.
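The inference pattern behind a double dissociation can be made explicit in a short sketch. The patient records below are illustrative stand-ins modelled on the cases described above, not clinical data:

```python
# Illustrative sketch of the double-dissociation logic: two functions are
# doubly dissociated if one patient shows the first without the second,
# while another patient shows the second without the first.

def double_dissociation(patient_a, patient_b, func1, func2):
    """True if patient_a has func1 but not func2, while patient_b
    has func2 but not func1 (the complementary single dissociations)."""
    return (patient_a[func1] and not patient_a[func2]
            and patient_b[func2] and not patient_b[func1])

# Hypothetical records modelled on the cases in the text:
# occipital/parietal damage: perception intact, imagery impaired.
patient_a = {"perception": True, "imagery": False}
# visual agnosia: perception impaired, imagery intact.
patient_b = {"perception": False, "imagery": True}

print(double_dissociation(patient_a, patient_b, "perception", "imagery"))  # True
```

Because each single dissociation on its own could be explained by one function simply being harder than the other, it is the complementary pair of patients that licenses the conclusion of at least partly separate mechanisms.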

### Interpretation of the neuropsychological results

A possible resolution of the paradox that there is strong evidence for parallels between perception and imagery, while the observed double dissociation conflicts with these results, goes as follows. The mechanisms of imagery and perception overlap only partially: the mechanisms responsible for imagery are located mainly in higher visual centres, whereas the mechanisms underlying perception are located at both lower and higher centres (Figure 9, not included yet due to copyright issues). Accordingly, perception is thought to be a bottom-up process that starts with an image on the retina and involves processing in the retina, the Lateral Geniculate Nucleus, the striate cortex and higher cortical areas. In contrast, imagery starts as a top-down process: its activity is generated in higher visual centres without any actual stimulus, that is, without an image on the retina. This theory explains both the patient with impaired perception but normal imagery and the patient with normal perception but impaired imagery. In the first case, the patient's perceptual problems could be explained by damage to early processing stages in the cortex, while his preserved ability to create images is explained by the intactness of higher areas of the brain. Similarly, in the latter case, the patient's impaired imagery could be caused by damage to higher-level areas, whereas the lower centres could still be intact. Even though this explanation fits several cases, it does not fit all of them. Consequently, further research is needed to develop an account that can explain the relation between perception and imagery sufficiently.

## Imagery and memory

Besides the imagery debate, which is concerned with the question of how we imagine, for example, objects, persons and situations, and how we involve our senses in these mental pictures, questions concerning memory remain open. In this part of the chapter we deal with the questions of how images are encoded in the brain and how they are recalled from memory. In search of answers to these questions, three major theories evolved. Each of them explains the encoding and recall processes differently, and, as usual, validating experiments were carried out for all of them.

### The common-code theory

This view of memory and recall theorises that images and words access semantic information in a single conceptual system that is neither word-like nor spatial. The common-code model hypothesises that, for example, images and words both require analogous processing before accessing semantic information, so the semantic content of all sensory input is encoded in the same way. The consequence is that when you remember, for example, a situation in which you watched an apple fall from a tree, the visual information about the falling apple and the information about the sound it made when it hit the ground are both constructed on the fly in the specific brain regions (e.g. visual images in the visual cortex) out of one code stored in the brain. The model further claims that images require less time than words to access the common conceptual system: images are discriminated faster because they come from a smaller set of possible alternatives, whereas words have to be picked out of a much larger set of ambiguous possibilities in the mental dictionary. The strongest criticism of this model is that it does not specify where this common code is ultimately stored.

### The abstract-propositional theory

This theory rejects any distinction between verbal and non-verbal modes of representation and instead describes representations of experience or knowledge in terms of an abstract set of relations and states, in other words, propositions. It postulates that the recall of an image is better if the person recalling it has some connection to the meaning of the image. For example, if you look at an abstract picture consisting of a bunch of lines that you cannot combine with each other in a meaningful way, recalling this picture will be very hard (if not impossible). The assumed reason is that there is no connection to propositions that could describe some part of the picture, and no connection to a propositional network that could reconstruct parts of it. In the other case, you look at a picture with lines that you can combine with each other in a meaningful way. Here the recall process should succeed, because you can scan for a proposition that shares at least one attribute with the meaning of the image you recognised; this proposition then returns the information necessary for recall.

### The dual-code theory

Unlike the common-code and abstract-propositional approaches, this model postulates that words and images are represented in functionally distinct verbal and non-verbal memory systems. To establish this model, Roland and Friberg (1985) ran an experiment in which subjects had either to imagine a mnemonic or to imagine walking the way home through their neighbourhood. While the subjects performed one of these tasks, their brains were scanned with positron emission tomography (PET). Figure 10 combines the brain scans of the subjects who performed the first and the second task.

Figure 10: Green dots represent regions which showed a higher activity during the walking home task; yellow dots represent regions which showed a higher activity during the mnemonic task.

As the picture shows, different brain areas are involved in processing verbal and spatial information. The areas that were active during the walking-home task are the same areas that are active during visual perception and information processing, and the areas that showed activity during the mnemonic task include Broca's area, where language processing is normally located. This can be considered evidence that both representation types are somehow connected with their modalities, as Paivio's dual-coding theory suggests (Anderson, 1996). Can you imagine other examples that argue for the dual-code theory? For example, you walk along the beach in the evening and come upon some beach bars. You order a drink, and next to you, you see a person who seems familiar to you. While you have your drink, you try to remember the name of this person, but you fail, even though you can remember where you last saw the person and perhaps what you talked about on that occasion. Now imagine another situation. You walk through the city, pass some coffee bars, and out of one of them you hear a song. You are sure that you know the song, but you can remember neither the name of the artist, nor the title, nor where you have heard it before. Both examples can be interpreted as indicating that in these situations you can recall information you perceived in the past, but you fail to remember the propositions you connected to it.
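The familiar-stranger example can be sketched as two independent stores that can fail separately. The store contents and the key name below are made up purely for illustration; this is a toy restatement of the dual-coding idea, not a model from the literature:

```python
# Sketch of the dual-code idea: verbal and non-verbal information about
# the same episode live in functionally distinct stores, so one can be
# recalled while the other is lost.

verbal_store = {}                                   # the name is forgotten
imaginal_store = {"stranger": "face seen at the beach bar last summer"}

def recall(key):
    """Query both stores independently, as dual-coding suggests."""
    return {"name": verbal_store.get(key),          # None = recall failure
            "image": imaginal_store.get(key)}

result = recall("stranger")
print(result)  # {'name': None, 'image': 'face seen at the beach bar last summer'}
```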

In this area of research there are, of course, other unanswered questions, for example why we cannot imagine smell, how the recall processes are performed, or where images are stored. The imagery debate is still going on, and conclusive evidence showing which of the models best explains the connection between imagery and memory is missing. For now, the dual-code theory seems to be the most promising model.

## References

Anderson, John R. (1996). Kognitive Psychologie: Eine Einführung. Heidelberg: Spektrum Akademischer Verlag.

Bryant, D. J., Tversky, B., et al. (1992). "Internal and External Spatial Frameworks for Representing Described Scenes." Journal of Memory and Language, 31, 74-98.

Coucelis, H., Golledge, R., and Tobler, W. (1987). Exploring the anchor-point hypothesis of spatial cognition. Journal of Environmental Psychology, 7, 99-122.

Goldstein, E. Bruce (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. ISBN 0-534-57732-6.

Marmor, G.S. and Zaback, L.A. (1976). Mental Rotation in the blind: Does mental rotation depend on visual imagery?. Journal of Experimental Psychology: Human Perception and Performance, 2, 515-521.

Roland, P. E. & Friberg, L. (1985). Localization of cortical areas activated by thinking. Journal of Neurophysiology, 53, 1219-1243.

Paivio, A. (1986). Mental representation: A dual-coding approach. New York: Oxford University Press.

Articles

Cherney, Leora (2001): Right Hemisphere Brain Damage

Grodzinsky, Yosef (2000): The neurology of syntax: Language use without Broca’s area.

Mueller, H. M., King, J. W. & Kutas, M. (1997). Event-related potentials elicited by spoken relative clauses. Cognitive Brain Research, 4, 193-203.

Mueller, H. M. & Kutas, M. (1996). What's in a name? Electrophysiological differences between spoken nouns, proper names and one's own name. NeuroReport, 8, 221-225.

Revised in July 2007 by: Alexander Blum (Spatial Representation, Discussion of the Imagery Debate, Images), Daniel Elport (Propositional Representation), Alexander Lelais (Imagery and Memory), Sarah Mueller (Neuropsychological approach), Michael Rausch (Introduction, Publishing)

Authors of the first version (2006): Wendy Wilutzky, Till Becker, Patrick Ehrenbrink (Propositional Representation), Mayumi Koguchi, Da Shengh Zhang (Spatial Representation, Intro, Debate).

# Comprehension

"Language is the way we interact and communicate, so, naturally, the means of communication and the conceptual background that’s behind it, which is more important, are used to try to shape attitudes and opinions and induce conformity and subordination. Not surprisingly, it was created in the more democratic societies." - Noam Chomsky

Language is a central part of everyday life, and communication is a natural human necessity. For these reasons there has been great interest in their properties. However, describing the processes of language turns out to be quite hard.

We can define language as a system of communication through which we code and express our feelings, thoughts, ideas and experiences.

Plato was already concerned with the nature of language in his dialogue “Cratylus”, where he discussed early ideas about principles that remain important in linguistics today, namely morphology and phonology. Gradually, philosophers, natural scientists and psychologists became interested in the features of language.

Since the emergence of cognitive science in the 1950s and Chomsky's criticism of the behaviourist view, language has been seen as a cognitive ability of humans, thus connecting linguistics with other major fields such as computer science and psychology. Today, psycholinguistics is a discipline in its own right, and its most important topics are the acquisition, production and comprehension of language.

In the 20th century in particular, many studies concerning communication were conducted, evoking new views on old facts. New techniques, like CT, MRI and fMRI or EEG, as described in the chapter on Behavioural and Neuroscience Methods, made it possible to observe the brain in detail during communication processes.

Later on, an overview of the most popular experiments and observed effects is presented. But in order to understand those, one needs a basic idea of semantics and syntax, as well as of the linguistic principles for processing words, sentences and full texts.

Finally, some questions will arise: How is language affected by culture? Or, in philosophical terms, what is the relationship between language and thought?

## Historical review on Psycholinguistics & Neurolinguistics

Starting with philosophical approaches, the nature of human language has always been a topic of interest. Galileo, in the 16th century, saw human language as the most important human invention. Later, in the 18th century, the scientific study of language by psychologists began. Wilhelm Wundt (founder of the first psychological laboratory) saw language as the mechanism by which thoughts are transformed into sentences. The observations of Wernicke and Broca (see chapter 9) were milestones in the study of language as a cognitive ability. In the early 1900s, the behaviourist view strongly influenced the study of language. In 1957, B. F. Skinner published his book "Verbal Behavior", in which he proposed that the learning of language can be seen as a mechanism of reinforcement. In the same year, Noam Chomsky (quoted at the beginning of this chapter) published "Syntactic Structures". He proposed that the ability to acquire language is somehow coded in the genes, which led him to the idea that the underlying basis of language is similar across cultures: there might be some kind of universal grammar as a base, independent of which language (including sign language) humans use. Chomsky later published a review of Skinner's "Verbal Behavior" in which he presented arguments against the behaviourist view. There are still some scientists who are convinced that a mentalist approach like Chomsky's is not needed, but in the meantime most agree that human language has to be seen as a cognitive ability.

## Current goals of Psycholinguistics

A natural language can be analysed at a number of different levels. In linguistics we distinguish between phonology (sounds), morphology (words), syntax (sentence structure), semantics (meaning), and pragmatics (use). Linguists try to find systematic descriptions capturing the regularities inherent in the language itself. But describing natural language merely as an abstract structured system cannot be enough. Psycholinguists rather ask how the knowledge of language is represented in the brain and how it is used. Today's most important research topics are:

1. comprehension: How humans understand spoken as well as written language, how language is processed and what interactions with memory are involved.
2. speech production: Both the physical aspect of speech production, and the mental process that stands behind the uttering of a sentence.
3. acquisition: How people learn to speak and understand a language.

## Characteristic features

What is a language? What kinds of languages exist? Are there characteristic features that are unique to human language?

There are plenty of approaches to describing languages. Especially in computational linguistics, researchers try to find formal definitions for different kinds of languages. But for psychology, aspects of language beyond its function as a pure system of communication are of central interest. Language is also a tool we use for social interactions, from the exchange of news to the identification of social groups by their dialect. We use it for expressing our feelings, thoughts, ideas and so on.

Although there are plenty of ways to communicate (consider the section on Non-Human Language below), humans consider their system of communication, human language, to be unique. But what is it that makes human language so special?

Four major criteria have been proposed by Professor Franz Schmalhofer from the University of Osnabrück as explained below:

- semanticity
- displacement
- creativity
- structure dependency

Semanticity means the use of symbols. Symbols can refer either to objects or to relations between objects. In human language, words are the basic form of symbols. For example, the word "book" refers to an object made of paper on which something might be written. A relation symbol is the verb "to like", which refers to somebody's sympathy for something or someone.

The criterion of displacement means that not only objects or relations in the present can be described; there are also symbols which refer to objects in another time or place. The word "yesterday" refers to the day before, and objects mentioned in a sentence with "yesterday" belong to another time than the present. Displacement is about communicating events which have happened or will happen, and the objects belonging to those events.

Having a range of symbols to communicate with, these symbols can be combined in new ways. Creativity is probably the most important feature. Our communication is not restricted to a fixed set of topics or predetermined messages: a finite set of symbols can be combined into an infinite number of sentences and meanings, which makes the creation of novel messages possible. How creative human language is can be illustrated by simple examples like the process that creates verbs from nouns. New words can be created which did not exist before, yet we are able to understand them.

Examples:

leave the boat on the beach -> beach the boat

keep the aeroplane on the ground -> ground the aeroplane

write somebody an e-mail -> e-mail somebody

Creative systems are also found in other aspects of language, like the way sounds are combined to form new words; e.g. prab, orgu and zabi could be imagined as names for new products.

To avoid an arbitrary combination of symbols without any regular arrangement, "true" languages need structure dependency. When combining symbols, the syntax is relevant: a change in the symbol order may change the meaning of the sentence. For example, “The dog bites the cat” obviously has a different meaning than “The cat bites the dog”, based on the different word order of the two sentences.

### Non-Human Language - Animal Communication

Forms of Communication

As mentioned before human language is just one of quite a number of communication forms. Different forms of communication can be found in the world of animals. From a little moth to a giant whale, all animals appear to have the use of communication.

Humans are not the only ones who use facial expressions to stress utterances or convey feelings; facial expressions are also found among apes. The expression of "smiling", for example, indicates cooperativeness and friendliness in both the human and the ape world. On the other hand, an ape showing its teeth indicates a willingness to fight.

Posture is a very common communicative tool among animals. Lowering the front part of the body and extending the front legs is a dog's sign that it is playful, whereas lowering the full body is a dog's postural way of showing submissiveness. Postural communication is known in both human and non-human primates.

Besides facial expression, gesture and posture, which are also found in human communication, there are other communicative devices which are either only subconsciously noticeable by humans, like scent, or cannot be found among humans at all, like light, colour and electricity. Chemicals used for a communicative function are called pheromones. Pheromones are used to mark territory or to signal reproductive readiness. For animals, scent is a very important tool which dominates their mating behaviour. Humans are influenced in their mating behaviour by scent as well, but there are more factors to that behaviour, so scent is not predominant.

Insects use species-dependent light patterns to signal identity, sex and location. The octopus, for example, changes colour to signal territorial defence and mating readiness. In the world of birds, colour is widespread, too: the male peacock has colourful feathering to impress peahens as part of its mating behaviour. These ways of communication help animals to live in a community and survive in a certain environment.

Characteristic Language Features in Animal Communication

As mentioned above, the uniqueness of human language can be described by four criteria (semanticity, displacement, creativity and structure dependency), which are important devices for clear communication between humans. To see whether these criteria exist in animal communication, i.e. whether animals possess a "true" language, several experiments with non-human primates were performed. Non-human primates were taught American Sign Language (ASL) and a specially developed token language to determine how far they are capable of linguistic behaviour. Can semanticity, displacement, creativity and structure dependency be found in non-human language?

Experiments

1. In 1931, the comparative psychologist Winthrop Niles Kellogg and his wife started an experiment with a chimpanzee, which they raised alongside their own child. The purpose was to see how environment influences development: could a chimpanzee become more like a human? The experiment was eventually abandoned, partly because their son's behaviour started to become more and more chimpanzee-like, and partly because of the exhaustion of running the experiment while raising two infants at the same time.
2. Human language: In 1948, in Orange Park, Florida, Keith and Cathy Hayes tried to teach English words to a chimpanzee named Viki, who was raised as if she were a human child. The chimpanzee was taught to "speak" easy English words like "cup". The experiment failed because, with the supralaryngeal anatomy and vocal fold structure that chimpanzees have, it is impossible for them to produce human speech sounds. The failure of the Viki experiment made scientists wonder how far non-human primates are able to communicate linguistically.
3. Sign language: From 1965 to 1972, the first important evidence showing rudiments of linguistic behaviour came from "Washoe", a young female chimpanzee. The experimenters Allen and Beatrice Gardner taught Washoe 130 signs of American Sign Language within three years. When shown pictures of a duck and asked WHAT THAT?, she combined the symbols WATER and BIRD to create WATER BIRD, as she had not learned the word DUCK (the words in capital letters refer to the signs the apes use to communicate with the experimenter).

It was claimed that Washoe was able to arbitrarily combine signs spontaneously and creatively. Some scientists criticised the ASL experiment of Washoe because they claimed that ASL is a loose communicative system and strict syntactic rules are not required. Because of this criticism different experiments were developed and performed which focus on syntactic rules and structure dependency as well as on creative symbol combination.

A non-human primate named "Kanzi" was trained by Savage-Rumbaugh in 1990. Kanzi was able to deal with 256 geometric symbols and understood complex instructions like GET THE ORANGE THAT IS IN THE COLONY ROOM. The experimenter worked with rewards.

A question which arose was whether these non-human primates were able to deal with human-like linguistic capacities or if they were just trained to perform a certain action to get the reward.

For more detailed explanations of the experiments see The Mind of an Ape.

Can the characteristic language features be found in non-human communication?

Creativity seems to be present in animal communication, as Washoe, among others, showed with the creation of WATER BIRD for DUCK. However, some critics claimed that such creativity is often accidental, or that, as in the case of Washoe's WATER BIRD, the creation relies on the fact that water and a bird were both present; only because of this presence did Washoe invent the word WATER BIRD.

In the case of Kanzi, a certain form of syntactic rules was observed: in 90% of Kanzi's sentences, the invitation to play came first, followed by the type of game Kanzi wanted to play, as in CHASE HIDE, TICKLE SLAP and GRAB SLAP. One problem, however, is that it is not always easy to recognise the order of signs, as facial expressions and hand signs are often performed at the same time. One ape signed the sentence I LIKE COKE by hugging itself for “like” while forming the sign for “coke” with its hands, so no order could be noticed in this sign sentence.

A certain structure dependency could be observed in Kanzi's active and passive sentences. When Matata, a fellow chimpanzee, was grabbed, Kanzi signed GRAB MATATA, and when Matata was performing an action such as biting, Kanzi produced MATATA BITE. It has not yet been proved that truly symbolic behaviour is occurring. Although there is plenty of evidence that creativity and displacement occur in animal communication, some critics claim that this evidence can be traced back to dressage and training: linguistic behaviour cannot be proved, as it is more likely that the animals were trained to use linguistic devices correctly. Apes show syntactic behaviour only to a small degree, and they are not able to produce sentences containing embedded structures. Some linguists claim that, because of such a lack of linguistic features, non-human communication cannot be a “true” language. Although we do not know the capacity of an ape's mind, the range of meanings observed in apes' wild life does not seem to approach the capaciousness of semanticity in human communication. Furthermore, apes do not seem to care much about displacement, as they apparently do not communicate about imaginary pasts or futures.

All in all, non-human primate communication, consisting of graded series of signals, shows little arbitrariness. The results with non-human primates led to a controversial discussion about linguistic behaviour; many researchers claimed that the results were influenced by dressage.

For humans language is a communication form suited to the patterns of human life. Other communication systems are better suited for fellow creatures and their mode of existence.

Now that we know that there is a difference between animal communication and human language, we will look at the features of human language in detail.

## Language Comprehension & Production

### Language features – Syntax and Semantics

In this section the main question will be: “How do we understand sentences?” To find an answer, it is necessary to take a closer look at the structure of languages. The most important properties every human language provides are rules which determine the permissible sentences, and a hierarchical structure (phonemes as basic sounds, which constitute words, which in turn constitute phrases, which constitute sentences, which constitute texts). These features of a language enable humans to create new, unique sentences. The fact that all human languages have a common ground, even though they developed completely independently of one another, may lead to the conclusion that the ability to process language is innate. Further evidence for an inborn universal grammar comes from observations of deaf children who were not taught a language and who developed their own form of communication with the same basic constituents. Two basic abilities human beings need in order to communicate are the interpretation of the syntax of a sentence and knowledge of the meaning of single words, which in combination enable them to understand the semantics of whole sentences. Many experiments have been done to find out how syntactic and semantic interpretation is carried out by human beings and how syntax and semantics work together to construct the right meaning of a sentence. Physiological experiments have been done in which, for example, the event-related potential (ERP) in the brain was measured, as well as behavioural experiments using mental chronometry, the measurement of the time course of cognitive processes. Physiological experiments showed that the syntactic and the semantic interpretation of a sentence take place separately from each other. These results are presented below in more detail.

## Physiological Approach

Semantic incorrectness in a sentence evokes an N400 in the ERP. The semantic processing of sentences can be explored by measuring the event-related potential (ERP) evoked by a semantically correct sentence in comparison to a semantically incorrect one. For example, in one experiment the reactions to three sentences were compared:

- Semantically correct: “The pizza was too hot to eat.”
- Semantically wrong: “The pizza was too hot to drink.”
- Semantically wrong: “The pizza was too hot to cry.”

In such experiments, the ERP evoked by the correct sentence is taken to show ordinary sentence processing. The variations in the ERP for the incorrect sentences, in contrast to the ERP for the correct sentence, show at what time the mistake is recognised. In the case of semantic incorrectness, a strong negative signal was observed about 400 ms after perceiving the critical word, which did not occur if the sentence was semantically correct. These effects were observed mainly in the parietal and central areas. Evidence was also found that the N400 is stronger the less the word fits semantically: the word “drink”, which fits the context a little better, caused a weaker N400 than the word “cry”. That means the intensity of the N400 correlates with the degree of the semantic mistake; the more difficult it is to find a semantic interpretation of a sentence, the higher the N400 response.

To examine the syntactic aspects of sentence processing, an experiment quite similar to that on semantic processing was carried out. Syntactically correct and incorrect sentences were used, such as (correct) “The cats won't eat…” and (incorrect) “The cats won't eating…”. When hearing or reading a syntactically incorrect sentence, in contrast to a syntactically correct sentence, the ERP changes significantly at two different points in time. First of all, there is a very early increased response to syntactic incorrectness after 120 ms. This signal is called the ‘early left anterior negativity’ (ELAN) because it occurs mainly in the left frontal lobe. This suggests that syntactic processing is located, among other places, in Broca's area, which lies in the left frontal lobe. The early response to syntactic mistakes also indicates that syntactic mistakes are detected earlier than semantic mistakes.

The other change in the ERP when perceiving a syntactically wrong sentence occurs after 600 ms in the parietal lobe. The signal deflects positively and is therefore called the P600. Possibly this late positive signal reflects the attempt to reconstruct the grammatically problematic sentence in order to find a possible interpretation. Syntactic incorrectness in a sentence thus evokes, after 600 ms, a P600 at the electrodes above the parietal lobe.

To summarise the three important ERP components: first, the ELAN occurs at the left frontal lobe, indicating a violation of syntactic rules. It is followed by the N400 in central and parietal areas as a reaction to semantic incorrectness, and finally a P600 occurs in the parietal area, which probably reflects a reanalysis of the wrong sentence.
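The three components can be collected in a small lookup table; the sketch below is a plain restatement of the latencies and locations given above, in Python form:

```python
# The three ERP components from the text, with latency (ms after the
# critical word), typical scalp location, and what each one indicates.
erp_components = {
    "ELAN": {"latency_ms": 120, "location": "left frontal lobe",
             "indicates": "violation of syntactic rules"},
    "N400": {"latency_ms": 400, "location": "central and parietal areas",
             "indicates": "semantic incorrectness"},
    "P600": {"latency_ms": 600, "location": "parietal lobe",
             "indicates": "reanalysis of the problematic sentence"},
}

# Syntactic violations are detected before semantic ones:
assert erp_components["ELAN"]["latency_ms"] < erp_components["N400"]["latency_ms"]
```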

## Behavioristic Approach – Parsing a Sentence

Behavioural experiments on how human beings parse a sentence often use syntactically ambiguous sentences, because the sentence-analysing mechanisms, called parsing, are easier to observe with sentences whose meaning we cannot determine automatically. There are two different theories about how humans parse sentences. The syntax-first approach claims that syntax plays the main part and semantics only a supporting role, whereas the interactionist approach states that syntax and semantics work together to determine the meaning of a sentence. Both theories are explained below in more detail.

## The Syntax-First Approach of Parsing

The syntax-first approach concentrates on the role of syntax in parsing a sentence. That humans infer the meaning of a sentence with the help of its syntactic structure (Kako and Wagner, 2001) can easily be seen by considering Lewis Carroll's poem 'Jabberwocky':

"Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe."

Although most of the words in the poem have no meaning, one can still ascribe at least some sense to it because of its syntactic structure.

There are many different syntactic rules that are used when parsing a sentence. One important rule is the principle of late closure, which says that a person assumes that each new word he or she perceives is part of the current phrase. That this principle is used in parsing sentences can be seen very well with the help of so-called garden-path sentences. Experiments with garden-path sentences were done by Frazier and Rayner (1982). One example of a garden-path sentence is: "Because he always jogs a mile seems a short distance to him." When reading this sentence, one first wants to continue the phrase "Because he always jogs" by adding "a mile" to it, but on reading further one realizes that the words "a mile" are the beginning of a new phrase. This shows that we parse a sentence by trying to add new words to the current phrase for as long as possible. Garden-path sentences show that we apply the principle of late closure as long as it makes syntactic sense to add a word to the current phrase; when the sentence starts to become incorrect, semantics is often used to rearrange it. The syntax-first approach does not disregard semantics: according to this approach we use syntax first to parse a sentence, and semantics is used later on to make sense of it.
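The late-closure strategy and the reanalysis it forces can be sketched as a toy incremental parser. Everything here (the phrase representation and the hard-coded reanalysis trigger) is invented for this demo; a real human parser applies grammatical knowledge rather than a fixed check:

```python
# Toy illustration of the late-closure heuristic on a garden-path
# sentence. The "grammar" is a single hard-coded reanalysis rule,
# invented purely for this demo.

GARDEN_PATH = "because he always jogs a mile seems a short distance to him"

def parse_late_closure(words):
    """Attach each word to the current phrase (late closure) and
    reanalyse when the verb 'seems' arrives without a subject."""
    finished, current = [], []
    for word in words:
        if word == "seems" and current[-2:] == ["a", "mile"]:
            # Garden path detected: "a mile" must start the main clause,
            # so close the subordinate phrase earlier than first assumed.
            finished.append(current[:-2])
            current = current[-2:]
        current.append(word)
    finished.append(current)
    return finished

phrases = parse_late_closure(GARDEN_PATH.split())
# First pass attached "a mile" to the jogging phrase; reanalysis yields
# ['because', 'he', 'always', 'jogs'] plus a main clause "a mile seems ...".
```

The hard-coded trigger merely mimics the moment at which "seems" fails to find a subject and forces the reader back out of the garden path.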

Apart from experiments showing how syntax is used to parse sentences, there were also experiments on how semantics can influence sentence processing. One important experiment on this issue was done by Daniel Slobin in 1966. He showed that passive sentences are understood faster if the semantics of the words allow only one subject to be the actor. Sentences like "The horse was kicked by the cow." and "The fence was kicked by the cow." are grammatically equivalent, and in both cases only one syntactic parse is possible. Nevertheless, the first sentence semantically offers two possible actors (both a horse and a cow can kick), and it therefore takes longer to parse. By measuring this significant difference, Slobin showed that semantics, too, plays an important role in parsing a sentence.

## The Interactionist Approach of Parsing

The interactionist approach ascribes a more central role to semantics in parsing a sentence. In contrast to the syntax-first approach, the interactionist theory claims that syntax is not used first; rather, semantics and syntax are used simultaneously to parse the sentence and work together in clarifying its meaning. Several experiments provide evidence that semantics is taken into account from the very beginning of reading a sentence. Most of these experiments use eye-tracking techniques and compare the time needed to read syntactically equivalent sentences in which critical words either cause or prevent ambiguity through their semantics. One of these experiments was done by John Trueswell and coworkers in 1994. He measured the eye movements of participants reading the following two sentences:

The defendant examined by the lawyer turned out to be unreliable.
The evidence examined by the lawyer turned out to be unreliable.

He observed that reading the words "by the lawyer" took longer in the first sentence, because there the semantics initially allow an interpretation in which the defendant is the one who examines, while evidence can only be examined. This experiment shows that semantics already plays a role while the sentence is being read, which supports the interactionist approach and argues against the theory that semantics is only used after a sentence has been parsed syntactically.

## Inferences Create Coherence

Coherence is the semantic relation of information in different parts of a text to each other. In most cases coherence is achieved by inference: the reader draws information from the text that is not explicitly stated in it. For further information, see the chapter [Neuroscience of Text Comprehension].

## Situation Model

A situation model is a mental representation of what a text is about. This approach proposes that the mental representation people form as they read a story does not primarily capture information about phrases, sentences or paragraphs, but is a representation in terms of the people, objects, locations and events described in the story (Goldstein, 2005, p. 374).

For a more detailed description of situation models, see Situation Models

## Using Language

Conversations are dynamic interactions between two or more people (Garrod & Pickering, 2004, as cited in Goldstein, 2005). The important thing to mention is that conversation is more than the act of speaking: each person brings in his or her knowledge, and conversations are much easier to process if the participants share knowledge. In this way, participants are responsible for how they introduce new knowledge. In 1975 H. P. Grice proposed a basic principle of conversation and four "conversational maxims". His cooperative principle states that "the speaker and listener agree that the person speaking should strive to make statements that further the agreed goals of conversation." The four maxims describe how to achieve this principle.

1. Quantity: The speaker should try to be informative, giving neither too much nor too little information.

2. Quality: Do not say things which you believe to be false or for which you lack evidence.

3. Manner: Avoid being obscure or ambiguous.

4. Relevance: Stay on the topic of the exchange.

An example of a rule of conversation incorporating three of these maxims is the given-new contract. It states that the speaker should construct sentences so that they include both given and new information (Haviland & Clark, 1974, as cited in Goldstein, 2005). The consequences of not following this rule were demonstrated by Susan Haviland and Herbert Clark, who presented pairs of sentences (either following or violating the given-new contract) and measured the time participants needed to fully understand them. They found that participants needed longer in pairs of the type:

We checked the picnic supplies.
The beer was warm.

than in pairs of the type:

We got some beer out of the trunk.
The beer was warm.


The reason it took longer to comprehend the second sentence of the first pair is that an inference has to be made: beer has not been mentioned as being part of the picnic supplies. (Goldstein, 2005, pp. 377-378)
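The given-new contract can be sketched as a toy check: treat the noun after "the" in the new sentence as its given information and test whether it was already mentioned. The extraction heuristic is deliberately simplistic and invented for this illustration:

```python
# Toy check of the given-new contract: take the noun after "the" in the
# new sentence as its "given" information and test whether that noun
# was mentioned in the preceding context.

def given_noun(sentence):
    """Return the word following 'the' (toy stand-in for given info)."""
    words = sentence.lower().rstrip(".").split()
    return words[words.index("the") + 1] if "the" in words else None

def needs_inference(context, sentence):
    """True if the given noun of `sentence` was never mentioned before,
    so the reader must infer the connection (slower comprehension)."""
    return given_noun(sentence) not in context.lower()

# needs_inference("We checked the picnic supplies.", "The beer was warm.")
#   -> True  (that the supplies include beer must be inferred)
# needs_inference("We got some beer out of the trunk.", "The beer was warm.")
#   -> False (beer is already given)
```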

## Language, Culture and Cognition

In the parts above we saw that there has been a lot of research on language, from letters through words and sentences to whole conversations. Most of this research was conducted by English-speaking researchers with English-speaking participants. Can those results be generalised to all languages and cultures, or might there be differences between English-speaking cultures and, for example, cultures of Asian or African origin?

Imagine our young man from the beginning again: Knut! Now he has to prepare a presentation with his friend Chang for the next psychology seminar. Knut arrives at his friend's flat and enters the living-room, glad that he has made it just in time. They have been working for a few minutes when Chang says: "It has become cold in here!" Knut remembers that he did not close the door, stands up and... "Stop! What is happening here?!"

This part is concerned with culture and its connection to language. Culture here does not necessarily mean "high culture" such as music, literature and the arts; rather, culture is the "know-how" a person needs to tackle his or her daily life. This know-how may include high culture, but it need not.

## Culture and Language

Scientists have wondered to what extent culture affects the way people use language. In 1991 Yum studied the indirectness of statements in Asian and American conversations. The request "Please shut the door" was formulated by Americans in an indirect way: they might say something like "The door is open" to signal that they want the door to be shut. Asian speakers are even more indirect: they often do not mention the door at all, but might say something like "It is somewhat cold today". Another cultural difference affecting the use of language was observed by Nisbett in 2003, concerning the way people pose questions. When American speakers ask someone whether they want more tea, they ask something like "More tea?". Asian speakers, in contrast, would ask whether the other person would like to drink more, since for Asians it seems obvious that tea is involved and mentioning the tea would be redundant. For Americans it is the other way round: for them it seems obvious that drinking is involved, so they just mention the tea.

This experiment and similar ones indicate that people belonging to Asian cultures are often relation-oriented: Asians focus on relationships within groups. Americans, in contrast, concentrate on objects: the involved object and its features are more important than the object's relation to other objects. These two different ways of focusing show that language is affected by culture.

An experiment which clearly shows these results is the mother-child interaction observed by Fernald and Morikawa in 1993. They studied the mother-child talk of Asian and American mothers. An American mother trying to show and explain a toy car to her child often repeated the word "car" and wanted the child to repeat it as well; she focused on the features of the car and stressed the importance of the object itself. The Asian mother showed the toy car to her child, gave it to the child and asked for it back; she mentioned briefly that the object is a car, but concentrated on the importance of the relation and the politeness of giving the object back.

Realising that there are plenty of differences in how people of different cultures use language, the question arises whether language affects the way people think and perceive the world.

## What is the connection between language and cognition?

### Sapir-Whorf Hypothesis

In the 1950s Edward Sapir and Benjamin Whorf proposed the hypothesis that the language of a culture affects the way its speakers think and perceive. This controversial theory was questioned by Eleanor Rosch, who studied the colour perception of Americans and of the Dani, members of a stone-age agricultural culture in New Guinea. Americans have many different colour categories, such as blue, red, yellow and so on, while the Dani have just two main colour categories. The participants were asked to recall colours which had been shown to them before. The experiment did not show the significant differences in colour perception and memory that the Sapir-Whorf hypothesis predicts. File:Color-naming exp.jpg Color-naming experiment by Roberson et al. (2000)

### Categorical Perception

Nevertheless, support for the Sapir-Whorf hypothesis came from Debi Roberson's demonstration of categorical perception, which built on Rosch's colour-perception experiment. The participants, a group of English-speaking British people and a group of Berinmo speakers from New Guinea, were asked to name the colours of chips on a board. The Berinmo distinguish five colour categories, and the denotations of their colour names are not equivalent to the British ones. Beyond these differences, the organisation of the colour categories also differs greatly: the colours named green and blue by British participants were categorised as nol, which also covers colours like light green, yellow-green and dark blue. Other colour categories differ similarly.

The result of Roberson's experiment was that it is easier for British people to discriminate between green and blue, whereas the Berinmo have less difficulty distinguishing between nol and wor. The reaction to colour is affected by language, by the vocabulary we have for denoting colours: it is difficult to distinguish colours from the same colour category, but people have less trouble differentiating colours from different categories. Both groups show categorical colour perception, but the naming results depend on how the colour categories are organised. All in all, it was shown that categorical perception is influenced by the language use of different cultures.
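The effect of carving the same hue continuum into different categories can be sketched with invented numeric bands. The boundaries and the 0–100 scale are made up; only the idea that one vocabulary's boundary falls inside another vocabulary's category comes from the experiment:

```python
# Toy model: one hue scale (0-99), two vocabularies with different
# category boundaries. All numbers are invented for illustration.
ENGLISH = {"green": range(0, 50), "blue": range(50, 100)}
BERINMO = {"nol": range(0, 70), "wor": range(70, 100)}  # 'nol' spans green and blue

def name_of(hue, vocabulary):
    """Return the colour name whose band contains this hue."""
    for name, band in vocabulary.items():
        if hue in band:
            return name
    raise ValueError(f"hue {hue} out of range")

def same_category(a, b, vocabulary):
    """Chips in one category are hard to tell apart; chips falling into
    two categories get distinct names and are discriminated more easily."""
    return name_of(a, vocabulary) == name_of(b, vocabulary)

# The chip pair (45, 55) straddles the English green/blue boundary but
# lies inside Berinmo 'nol', so only English speakers label them apart:
# same_category(45, 55, ENGLISH) -> False
# same_category(45, 55, BERINMO) -> True
```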

These experiments on perception and its relation to cultural language use lead to the question whether thought is related to language and its cultural differences.

## Is thought dependent on, or even caused by language?

### Historical theories

An early approach was proposed by J. B. Watson in 1913. His peripheralist view was that thought consists of tiny, unnoticeable speech movements: while thinking, a person performs the speech movements he or she would perform while talking. A couple of years later, in 1921, Wittgenstein posed the theory that the limits of a person's language mean the limits of that person's world: as soon as a person is unable to express a certain content for lack of vocabulary, he or she is unable to think about that content, since it lies outside his or her world. Wittgenstein's theory was cast into doubt by experiments with babies and deaf people.

### Present research

To find evidence for the theory that language and culture affect cognition, Lian-hwang Chiu designed an experiment with American and Asian children. The children were asked to group objects in pairs so that the objects fit together. One picture shown to the children contained a cow, a chicken and some grass. The children had to decide which two of the objects fitted together. The American children mostly grouped cow and chicken, because both belong to the group of animals. Asian children more often combined the cow with the grass, because of the relation that a cow normally eats grass.

In 2000 Chiu repeated the experiment with words instead of pictures, and a similar result was observed. The American children sorted their pairs taxonomically: given the words "panda", "monkey" and "banana", they paired "panda" and "monkey". Chinese children grouped relationally, putting "monkey" with "banana". Another variation of this experiment was done with bilingual children: when the task was given in English, the children grouped the objects taxonomically, while a Chinese task caused relational grouping. The language of the task clearly influenced how the objects were grouped. That means language may affect the way people think.

The results of the many experiments on the relation between language, culture and cognition suggest that culture affects language and that cognition is affected by language. Our way of thinking is influenced by the way we talk, and thought can occur without language, but the exact relation between language and thought remains to be determined.

## Introduction

"Language is the way we interact and communicate, so, naturally, the means of communication and the conceptual background that’s behind it, which is more important, are used to try to shape attitudes and opinions and induce conformity and subordination. Not surprisingly, it was created in the more democratic societies." - Chomsky

Language is a central part of everyday life, and communication is a natural human necessity. For those reasons there has been great interest in their properties. However, describing the processes of language turns out to be quite hard.

We can define language as a system of communication through which we code and express our feelings, thoughts, ideas and experiences.[1]

Plato was already concerned with the nature of language in his dialogue "Cratylus", where he discussed first ideas about principles that are important in modern linguistics, namely morphology and phonology. Gradually, philosophers, natural scientists and psychologists became interested in features of language.

Since the emergence of cognitive science in the 1950s and Chomsky's criticism of the behaviourist view, language has been seen as a cognitive ability of humans, thus connecting linguistics with other major fields like computer science and psychology. Today psycholinguistics is a discipline in its own right, and its most important topics are the acquisition, production and comprehension of language.

Especially in the 20th century, many studies concerning communication were conducted, evoking new views on old facts. New techniques like CT, MRI, fMRI and EEG, as described in the chapter Behavioural and Neuroscience Methods, made it possible to observe the brain in detail during communication processes.

Later on, an overview of the most popular experiments and observed effects is presented. But in order to understand these, one needs a basic idea of semantics and syntax, as well as of the linguistic principles of processing words, sentences and full texts.

Finally, some questions will arise: How is language affected by culture? And, in philosophical terms, what is the relationship between language and thought?

## Language as a cognitive ability

### Historical review on Psycholinguistics & Neurolinguistics

Starting with philosophical approaches, the nature of human language has always been a topic of interest. Galileo in the 16th century saw human language as the most important of human inventions. Later on, in the 18th century, the scientific study of language was begun by psychologists. Wilhelm Wundt (founder of the first laboratory of psychology) saw language as the mechanism by which thoughts are transformed into sentences. The observations of Wernicke and Broca (see chapter 9) were milestones in the study of language as a cognitive ability. In the early 1900s the behaviouristic view strongly influenced the study of language. In 1957 B. F. Skinner published his book "Verbal Behavior", in which he proposed that the learning of language can be seen as a mechanism of reinforcement. Noam Chomsky (quoted at the beginning of this chapter) published "Syntactic Structures" in the same year. He proposed that the ability to acquire language is somehow coded in the genes, which led him to the idea that the underlying basis of language is similar across cultures: there might be some kind of universal grammar as a base, independent of the particular language (including sign language) used by humans. Later Chomsky published a review of Skinner's "Verbal Behavior" in which he presented arguments against the behaviouristic view. There are still some scientists who are convinced that a mentalist approach like Chomsky's is not needed, but by now most agree that human language has to be seen as a cognitive ability.

### Current goals of Psycholinguistics

A natural language can be analysed at a number of different levels. In linguistics we distinguish between phonology (sounds), morphology (words), syntax (sentence structure), semantics (meaning) and pragmatics (use). Linguists try to find systematic descriptions capturing the regularities inherent in the language itself. But a description of natural language merely as an abstract structured system cannot be enough. Psycholinguists rather ask how the knowledge of language is represented in the brain and how it is used. Today's most important research topics are:

1) comprehension: How humans understand spoken as well as written language, how language is processed and what interactions with memory are involved.

2) speech production: Both the physical aspect of speech production, and the mental process that stands behind the uttering of a sentence.

3) acquisition: How people learn to speak and understand a language.

### Characteristic features

What is a language? What kinds of languages exist? Are there characteristic features that are unique to human language?

There are plenty of approaches to describing languages. Especially in computational linguistics, researchers try to find formal definitions for different kinds of languages. But for psychology, aspects of language other than its function as a pure system of communication are of central interest. Language is also a tool we use for social interactions, from the exchange of news up to the identification of social groups by their dialect. We use it for expressing our feelings, thoughts, ideas and so on.

Although there are plenty of ways to communicate (see Non-Human Language below), humans expect their system of communication - the human language - to be unique. But what is it that makes human language so special and unique?

Four major criteria have been proposed by Professor Franz Schmalhofer of the University of Osnabrück, as explained below:

- semanticity

- displacement

- creativity

- structure dependency

Semanticity means the use of symbols. Symbols can refer either to objects or to relations between objects. In human language, words are the basic form of symbols. For example, the word "book" refers to an object made of paper on which something might be written. A relation symbol is the verb "to like", which refers to somebody's sympathy for something or someone.

The criterion of displacement means that not only objects or relations in the present can be described; there are also symbols which refer to objects in another time or place. The word "yesterday" refers to the day before, and objects mentioned in a sentence with "yesterday" belong to a time other than the present. Displacement is about communicating events which have happened or will happen, and the objects belonging to those events.

Given a range of symbols to communicate with, these symbols can be combined in new ways. Creativity is probably the most important feature: our communication is not restricted to a fixed set of topics or predetermined messages. A finite set of symbols can be combined into an infinite number of sentences and meanings, which makes the creation of novel messages possible. How creative human language is can be illustrated by simple examples, like the process that creates verbs from nouns. New words can be created which did not exist so far, yet we are able to understand them.

Examples:

leave the boat on the beach -> beach the boat

keep the aeroplane on the ground -> ground the aeroplane

write somebody an e-mail -> e-mail somebody

Creative systems are also found in other aspects of language, like the way sounds are combined to form new words; e.g. prab, orgu and zabi could be imagined as names for new products.

To avoid an arbitrary combination of symbols without any regular arrangement, "true" languages need structure dependency: when symbols are combined, the syntax is relevant. A change in symbol order can change the meaning of the sentence. For example, "The dog bites the cat" obviously has a different meaning than "The cat bites the dog", based purely on the different word order of the two sentences.
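Structure dependency in a rigid word-order language can be sketched as follows; the toy "grammar" (strict subject-verb-object order, stripping only the article "the") is an assumption made for this demo:

```python
# Toy sketch of structure dependency: in a rigid SVO "language",
# the same symbols in a different order yield a different meaning.
# The grammar fragment and role labels are invented for illustration.

def roles_svo(sentence):
    """Assign agent/patient by position in a strict Subject-Verb-Object order."""
    words = sentence.lower().replace(".", "").split()
    # Strip the only article of this toy fragment and keep content words.
    content = [w for w in words if w != "the"]
    subject, verb, obj = content  # exactly three content words expected
    return {"agent": subject, "action": verb, "patient": obj}

a = roles_svo("The dog bites the cat.")
b = roles_svo("The cat bites the dog.")
# Same symbols, different order -> different meaning:
# a["agent"] == "dog", whereas b["agent"] == "cat"
```

In a case-marking language the same roles could be recovered regardless of order; structure dependency concerns regular arrangement in general, not SVO order specifically.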

## Non-Human Language - Animal Communication

### Forms of Communication

As mentioned before, human language is just one of quite a number of forms of communication. Different forms can be found in the animal world: from a little moth to a giant whale, all animals appear to make use of communication.

Humans are not the only ones to use facial expressions for stressing utterances or feelings; facial expressions can also be found among apes. "Smiling", for example, indicates cooperativeness and friendliness in both the human and the ape world, whereas an ape showing its teeth signals a willingness to fight.

Posture is a very common communicative tool among animals. Lowering the front part of the body and extending the front legs is a dog's sign that it is playful, whereas lowering the whole body is a dog's postural way of showing submissiveness. Postural communication is known in both human and non-human primates.

Besides facial expression, gesture and posture, which are also found in human communication, there are other communicative devices which are either barely noticeable to humans, like scent, or cannot be found among humans at all, like light, colour and electricity. Chemicals used for a communicative function are called pheromones. Pheromones are used to mark territory or to signal reproductive readiness. For animals, scent is a very important tool which dominates their mating behaviour. Humans are influenced in their mating behaviour by scent as well, but there are more factors to that behaviour, so scent is not predominant.

Insects use species-dependent light patterns to signal identity, sex and location. The octopus changes colour to signal territorial defence and mating readiness. In the world of birds colour is widespread, too: the male peacock has colourful feathering to impress peahens as part of its mating behaviour. These ways of communication help animals live in a community and survive in a given environment.

### Characteristic Language Features in Animal Communication

As mentioned above, it is possible to describe the uniqueness of human language by four criteria (semanticity, displacement, creativity and structure dependency), which are important devices for clear communication between humans. To see whether these criteria exist in animal communication - i.e. whether animals possess a "true" language - several experiments with non-human primates were performed. Non-human primates were taught American Sign Language (ASL) and a specially developed token language to find out to what extent they are capable of linguistic behaviour. Can semanticity, displacement, creativity and structure dependency be found in non-human language?

### Experiments

1. Human language: In 1948, in Orange Park, Florida, Keith and Cathy Hayes tried to teach English words to a chimpanzee named Viki, who was raised as if she were a human child. The chimpanzee was taught to "speak" easy English words like "cup". The experiment failed: with the supralaryngeal anatomy and vocal-fold structure that chimpanzees have, it is impossible for them to produce human speech sounds. The failure of the Viki experiment made scientists wonder to what extent non-human primates are able to communicate linguistically.

2. Sign language: From 1965 to 1972 the first important evidence of rudiments of linguistic behaviour came from "Washoe", a young female chimpanzee. In an experiment conducted by Allen and Beatrice Gardner, Washoe learned 130 signs of American Sign Language within three years. Shown pictures of a duck and asked WHAT THAT?, she combined the symbols WATER and BIRD into WATER BIRD, as she had not learned a sign for DUCK (the words in capital letters refer to the signs the apes use to communicate with the experimenter).

It was claimed that Washoe was able to combine signs spontaneously and creatively. Some scientists criticised the Washoe ASL experiment, arguing that ASL is a loose communicative system in which strict syntactic rules are not required. Because of this criticism, different experiments were developed and performed which focus on syntactic rules and structure dependency as well as on creative symbol combination.

A non-human primate named "Kanzi" was trained by Savage-Rumbaugh in 1990. Kanzi was able to deal with 256 geometric symbols and understood complex instructions like GET THE ORANGE THAT IS IN THE COLONY ROOM. The experimenter worked with rewards.

A question which arose was whether these non-human primates possessed human-like linguistic capacities or were merely trained to perform certain actions in order to get a reward.

For more detailed explanations of the experiments see The Mind of an Ape.

Can the characteristic language features be found in non-human communication?

Creativity seems to be present in animal communication, as Washoe, among others, showed with the creation of WATER BIRD for DUCK. However, some critics claimed that such creativity is often accidental, or that, as in the case of Washoe's WATER BIRD, the creation relied on the fact that both water and a bird were present; only because of this presence did Washoe invent the word WATER BIRD.

In the case of Kanzi a certain form of syntactic rule was observed: in 90% of Kanzi's sentences the invitation to play came first, followed by the type of game Kanzi wanted to play, as in CHASE HIDE, TICKLE SLAP and GRAB SLAP. A problem was that it is not always easy to recognise the order of signs, since facial expressions and hand signs are often performed at the same time. One ape signed the sentence I LIKE COKE by hugging itself for "like" while forming the sign for "coke" with its hands; no order could be discerned in this sign sentence.

A certain structure dependency could be observed in Kanzi's active and passive sentences: when Matata, a fellow chimpanzee, was grabbed, Kanzi signed GRAB MATATA, and when Matata was performing an action such as biting, Kanzi produced MATATA BITE. It has not yet been proved, however, that symbolic behaviour occurs. Although there is plenty of evidence that creativity and displacement occur in animal communication, some critics claim that this evidence can be traced back to dressage and training: linguistic behaviour cannot be proved, as it is more likely a trained ability to use linguistic devices correctly. Apes show syntactic behaviour only to a small degree, and they are not able to produce sentences containing embedded structures. Some linguists claim that because of such a lack of linguistic features, non-human communication cannot be a "true" language. Although we do not know the capacity of an ape's mind, the range of meanings observed in apes' wild life does not seem to approach the capaciousness of semanticity in human communication. Furthermore, apes do not seem to care much about displacement, as they apparently do not communicate about imaginary pasts or futures.

All in all, non-human primate communication, consisting of graded series of signals, shows little arbitrariness. The results obtained with non-human primates led to a controversial discussion about linguistic behaviour; many researchers claimed that the results were influenced by dressage.

For humans, language is a communication form suited to the patterns of human life. Other communication systems are better suited to other creatures and their modes of existence.

Now that we know that there is a difference between animal communication and human language we will see detailed features of the human language.

## Language Comprehension & Production

### Language features – Syntax and Semantics

In this chapter the main question will be: how do we understand sentences? To find an answer, it is necessary to take a closer look at the structure of languages. The most important properties every human language provides are rules which determine the permissible sentences, and a hierarchical structure (phonemes as basic sounds constitute words, which in turn constitute phrases, which constitute sentences, which constitute texts). These features of a language enable humans to create new, unique sentences. The fact that all human languages have a common ground, even though they developed completely independently of one another, may lead to the conclusion that the ability to process language is innate. Further evidence of an inborn universal grammar comes from observations of deaf children who were not taught a language and who developed their own form of communication with the same basic constituents.

Two basic abilities humans need in order to communicate are interpreting the syntax of a sentence and knowing the meanings of single words; in combination, these enable them to understand the semantics of whole sentences. Many experiments have been done to find out how syntactic and semantic interpretation is carried out by human beings and how syntax and semantics work together to construct the correct meaning of a sentence. Physiological experiments have been conducted in which, for example, the event-related potential (ERP) in the brain was measured, as well as behaviouristic experiments using mental chronometry, the measurement of the time-course of cognitive processes. The physiological experiments showed that the syntactic and the semantic interpretation of a sentence take place separately from each other. These results are presented below in more detail.

Physiological Approach

Semantics

File:Cpnp2.jpg
Semantic incorrectness in a sentence evokes an N400 in the ERP

Semantic sentence processing can be explored by measuring the event-related potential (ERP) evoked by a semantically correct sentence in comparison to a semantically incorrect one. In one experiment, for example, the reactions to three sentences were compared:

• Semantically correct: “The pizza was too hot to eat.”
• Semantically wrong: “The pizza was too hot to drink.”
• Semantically wrong: “The pizza was too hot to cry.”

In such experiments, the ERP evoked by the correct sentence is taken to reflect ordinary sentence processing. Deviations in the ERP for the incorrect sentences show at what time the mistake is recognized. In the case of semantic incorrectness, a strong negative signal was observed about 400 ms after perceiving the critical word; it did not occur when the sentence was semantically correct. These effects were observed mainly over parietal and central areas. There is also evidence that the N400 is stronger the less the word fits semantically: the word “drink”, which fits the context a little better, caused a weaker N400 than the word “cry”. The intensity of the N400 thus correlates with the degree of the semantic mismatch; the harder it is to find a semantic interpretation of a sentence, the larger the N400 response.

Syntax

File:Cpnp4.jpg
Syntactic incorrectness in a sentence can evoke an ELAN (early left anterior negativity) in the electrodes above the left frontal lobe after 120 ms.

To examine the syntactic aspects of sentence processing, an experiment quite similar to the semantic one was done, using syntactically correct sentences such as “The cats won't eat…” and incorrect ones such as “The cats won't eating…”. When hearing or reading a syntactically incorrect sentence, the ERP changes significantly at two different points in time compared with a correct sentence. First, there is a very early increased response to syntactic incorrectness after 120 ms. This signal is called the early left anterior negativity (ELAN) because it occurs mainly over the left frontal lobe. This suggests that syntactic processing is located, among other places, in Broca's area in the left frontal lobe. The early response also indicates that syntactic mistakes are detected earlier than semantic ones.

The other change in the ERP when perceiving a syntactically wrong sentence occurs after 600 ms over the parietal lobe. The signal deflects positively and is therefore called the P600. Possibly this late positive signal reflects the attempt to reanalyse the grammatically problematic sentence in order to find a possible interpretation.

File:Cpnp3001.jpg
Syntactic incorrectness in a sentence evokes after 600 ms a P600 in the electrodes above the parietal lobe.

To summarize the three important ERP components: first, the ELAN over the left frontal lobe signals a violation of syntactic rules; then the N400 follows in central and parietal areas as a reaction to semantic incorrectness; finally, a P600 occurs in the parietal area, which probably reflects a reanalysis of the problematic sentence.

Behavioristic Approach – Parsing a Sentence

Behavioristic experiments on how human beings parse a sentence often use syntactically ambiguous sentences, because the sentence-analysing mechanisms, called parsing, are easier to observe with sentences whose meaning we cannot determine automatically. There are two different theories about how humans parse sentences. The syntax-first approach claims that syntax plays the main part and semantics only a supporting role, whereas the interactionist approach states that syntax and semantics work together to determine the meaning of a sentence. Both theories are explained below in more detail.

The Syntax-First Approach of Parsing

The syntax-first approach concentrates on the role of syntax when parsing a sentence. That humans infer the meaning of a sentence with the help of its syntactic structure (Kako and Wagner 2001) can easily be seen by considering Lewis Carroll's poem “Jabberwocky”:

"'Twas brillig, and the slithy toves
Did gyre and gimble in the wabe:
All mimsy were the borogoves,
And the mome raths outgrabe."

Although most of the words in the poem have no meaning, one may still ascribe some sense to it because of its syntactic structure.

There are many syntactic rules that are used when parsing a sentence. One important rule is the principle of late closure: a person assumes that each new word he perceives is part of the current phrase. That this principle is used for parsing can be seen very well with the help of so-called garden-path sentences. Experiments with garden-path sentences were done by Frazier and Rayner (1982). One example is: “Because he always jogs a mile seems a short distance to him.” When reading this sentence, one first wants to continue the phrase “Because he always jogs” by adding “a mile” to it, but on reading further one realizes that the words “a mile” are the beginning of a new phrase. This shows that we parse a sentence by trying to add new words to a phrase as long as possible. Garden-path sentences show that we apply the principle of late closure as long as it makes syntactic sense to add a word to the current phrase; when the sentence starts to become incorrect, semantics is often used to rearrange it. The syntax-first approach does not disregard semantics: according to this approach, we use syntax first to parse a sentence and semantics later on to make sense of it.
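The late-closure heuristic can be sketched in code. The following is a toy illustration only, not a psycholinguistic model: the greedy parser keeps attaching incoming words to the current clause, and the arrival of a second finite verb (which cannot be attached) marks the point where the reader is forced to reanalyse. The word lists and verb set are invented for the example.

```python
def reanalysis_point(words, finite_verbs):
    """Return the index of the word that breaks a greedy late-closure
    parse (the second finite verb), or None if the sentence never
    garden-paths the reader."""
    seen_verb = False
    for i, word in enumerate(words):
        if word in finite_verbs:
            if seen_verb:
                return i  # second finite verb: the preceding phrase must be reanalysed
            seen_verb = True
    return None

# Frazier and Rayner's garden-path example: the break comes at "seems",
# forcing "a mile" out of the "Because he always jogs" clause.
sentence = "Because he always jogs a mile seems a short distance to him".split()
print(reanalysis_point(sentence, {"jogs", "seems"}))  # -> 6 (the word "seems")
```

A sentence with a single finite verb returns None, mirroring the fact that unambiguous sentences never force this kind of reanalysis.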

Apart from experiments which show how syntax is used for parsing sentences, there have also been experiments on how semantics can influence sentence processing. One important experiment on this issue was done by Daniel Slobin in 1966. He showed that passive sentences are understood faster if the semantics of the words allow only one subject to be the actor. Sentences like “The horse was kicked by the cow.” and “The fence was kicked by the cow.” are grammatically equivalent, and in both cases only one syntactic parse is possible. Nevertheless, the first sentence semantically provides two possible actors and therefore takes longer to parse. By measuring this significant difference, Slobin showed that semantics, too, plays an important role in parsing a sentence.

The Interactionist Approach of Parsing

The interactionist approach ascribes a more central role to semantics in parsing a sentence. In contrast to the syntax-first approach, the interactionist theory claims that syntax is not used first but that semantics and syntax are used simultaneously to parse the sentence and work together in clarifying its meaning. Several experiments provide evidence that semantics is taken into account from the very beginning of reading a sentence. Most of them use eye-tracking techniques and compare the time needed to read syntactically equivalent sentences in which critical words cause or prevent ambiguity through semantics. One of these experiments was done by John Trueswell and coworkers in 1994. He measured the eye movements of persons reading the following two sentences:

The defendant examined by the lawyer turned out to be unreliable.
The evidence examined by the lawyer turned out to be unreliable.

He observed that reading the words “by the lawyer” took longer in the first sentence, because there the semantics at first allow an interpretation in which the defendant is the one who examines, while the evidence can only be examined. This experiment shows that semantics also plays a role while reading the sentence, which supports the interactionist approach and argues against the theory that semantics is only used after a sentence has been parsed syntactically.

### Inference Creates Coherence

Coherence is the semantic relation of information in different parts of a text to each other. In most cases coherence is achieved by inference; that is, a reader draws information out of a text that is not explicitly stated there. For further information, see the section Neuropsychology of Inferencing in the chapter Situation Models and Inferencing, as well as the chapter Neuroscience of Text Comprehension.

### Situation Model

A situation model is a mental representation of what a text is about. This approach proposes that the mental representation people form as they read a story does not encode information about phrases, sentences, or paragraphs, but is a representation in terms of the people, objects, locations, and events described in the story (Goldstein 2005, p. 374).

For a more detailed description of situation models, see the chapter Situation Models and Inferencing.

## Using Language

Conversations are dynamic interactions between two or more people (Garrod & Pickering, 2004, as cited in Goldstein 2005). The important thing to note is that conversation is more than the act of speaking. Each person brings in his or her knowledge, and conversations are much easier to process if the participants share knowledge. In this way, participants are responsible for how they introduce new knowledge. In 1975, H. P. Grice proposed a basic principle of conversation and four “conversational maxims.” His cooperative principle states that “the speaker and listener agree that the person speaking should strive to make statements that further the agreed goals of conversation.” The four maxims state how to achieve this principle.

1. Quantity: Be as informative as required; give neither too much nor too little information.

2. Quality: Do not say things which you believe to be false or for which you lack evidence.

3. Manner: Avoid being obscure or ambiguous.

4. Relevance: Stay on the topic of the exchange.

An example of a rule of conversation incorporating three of these maxims is the given-new contract. It states that the speaker should construct sentences so that they include both given and new information (Haviland & Clark, 1974, as cited in Goldstein, 2005). The consequences of not following this rule were demonstrated by Susan Haviland and Herbert Clark, who presented pairs of sentences (either following or violating the given-new contract) and measured the time participants needed to fully understand them. They found that participants needed longer for pairs of the type:

    We checked the picnic supplies.
    The beer was warm.

rather than:

    We got some beer out of the trunk.
    The beer was warm.


It took longer to comprehend the second sentence of the first pair because an inference has to be drawn: beer has not been mentioned as being part of the picnic supplies (Goldstein, 2005, pp. 377-378).

## Language, Culture and Cognition

In the parts above we saw that there has been a lot of research on language, from letters through words and sentences to whole conversations. Most of this research was carried out by English-speaking researchers with English-speaking participants. Can those results be generalised to all languages and cultures, or might there be differences between English-speaking cultures and, for example, cultures with Asian or African origin?

Imagine our young man from the beginning again: Knut! Now he has to prepare a presentation with his friend Chang for the next psychology seminar. Knut arrives at his friend's flat and enters his living-room, glad that he made it there just in time. They have been working a few minutes when Chang says: “It has become cold in here!” Knut remembers that he did not close the door, stands up and... “Stop! What is happening here?!”

This part is concerned with culture and its connection to language. Culture here is meant not necessarily in the sense of “high culture” like music, literature and the arts, but as the “know-how” a person must have to tackle his or her daily life. This know-how might include high culture, but need not.

Culture and Language

Scientists have wondered to what extent culture affects the way people use language. In 1991 Yum studied the indirectness of statements in Asian and American conversations. The request “Please shut the door” was formulated by Americans in an indirect way: they might say something like “The door is open” to signal that they want the door to be shut. Asian speakers are even more indirect; they often do not mention the door at all but might say something like “It is somewhat cold today”. Another cultural difference affecting the use of language was observed by Nisbett in 2003 in the way people pose questions. When American speakers ask someone whether more tea is wanted, they ask something like “More tea?”. Asian speakers, in contrast, would ask whether the other person would like to drink more, since for Asians it seems obvious that tea is involved, and mentioning the tea would be redundant. For Americans it is the other way round: it seems obvious that drinking is involved, so they just mention the tea.

This experiment and similar ones indicate that people belonging to Asian cultures are often relation-oriented: Asians focus on relationships within groups. Americans, in contrast, concentrate on objects; the involved object and its features are more important than the object's relation to other objects. These two different ways of focusing show that language is affected by culture.

An experiment which clearly shows these results is the mother-child interaction observed by Fernald and Morikawa in 1993. They studied the mother-child talk of Asian and American mothers. An American mother trying to show and explain a toy car to her child often repeated the word “car” and wanted the child to repeat it as well; the mother focused on the features of the car and emphasized the importance of the object itself. The Asian mother showed the toy car to her child, gave it to the child and wanted it to be given back. She mentioned only briefly that the object is a car and concentrated on the importance of the relation and the politeness of giving the object back.

Realising that there are plenty of differences in how people of different cultures use language, the question arises whether language affects the way people think and perceive the world.

### What is the connection between language and cognition?

Sapir-Whorf Hypothesis

In the 1950s Edward Sapir and Benjamin Whorf proposed the hypothesis that the language of a culture affects the way its members think and perceive. This controversial theory was questioned by Eleanor Rosch, who studied the colour perception of Americans and of the Dani, members of a stone-age agricultural culture in New Guinea. Americans have many different colour categories, for example blue, red, yellow and so on; the Dani have just two main colour categories. The participants were asked to recall colours which had been shown to them before. The experiment did not show the significant differences in colour perception and memory that the Sapir-Whorf hypothesis predicts.

File:Color-naming exp.jpg
Color-naming experiment by Roberson et al. (2000)

Categorical Perception

Nevertheless, support for the Sapir-Whorf hypothesis came from Debi Roberson's demonstration of categorical perception, based on Rosch's colour perception experiment. The participants, a group of English-speaking British people and a group of Berinmo from New Guinea, were asked to name the colours of chips on a board. The Berinmo distinguish five colour categories, and the denotations of their colour names are not equivalent to the British ones. Beyond these differences, there are large differences in the organisation of the colour categories: the colours named green and blue by British participants were categorised as nol, which also covers colours like light green, yellow-green and dark blue. Other colour categories differ similarly.

The result of Roberson's experiment was that it is easier for British people to discriminate between green and blue, whereas the Berinmo have less difficulty distinguishing between nol and wap. The reaction to colour is affected by language, by the vocabulary we have for denoting colours: it is difficult to distinguish colours from the same colour category, but people have less trouble differentiating colours from different categories. Both groups show categorical colour perception, but the results of naming colours depend on how the colour categories are delineated. All in all, it was shown that categorical perception is influenced by the language use of different cultures.
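The logic of the category effect can be sketched as follows. The hue ranges below are invented purely for illustration (the real experiments used Munsell colour chips, not degree values): two hues at the same physical distance are easier to tell apart when a lexical category boundary falls between them, and where that boundary lies depends on the language.

```python
# Hypothetical category boundaries (hue ranges in arbitrary units);
# invented for illustration, not taken from the actual studies.
ENGLISH = {"green": (90, 150), "blue": (150, 240)}
BERINMO = {"nol": (60, 200), "wap": (200, 260)}

def category(hue, lexicon):
    """Name of the colour category a hue falls into, given a lexicon."""
    for name, (lo, hi) in lexicon.items():
        if lo <= hue < hi:
            return name
    return None

def crosses_boundary(h1, h2, lexicon):
    """True if a lexical boundary separates the two hues -- the
    condition under which discrimination is easier."""
    return category(h1, lexicon) != category(h2, lexicon)

# The same physical pair straddles the English green/blue boundary
# but falls entirely inside the Berinmo category nol:
print(crosses_boundary(140, 160, ENGLISH))  # -> True
print(crosses_boundary(140, 160, BERINMO))  # -> False
```

The two print lines mirror the finding: English speakers find this pair easy to discriminate, Berinmo speakers do not, even though the physical stimuli are identical.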

These experiments on perception and its relation to cultural language use lead to the question whether thought is related to language, with its cultural differences.

### Is thought dependent on, or even caused by language?

Historical theories

An early approach was proposed by J. B. Watson in 1913. His peripheralist approach held that thought consists of tiny, unnoticeable speech movements: while thinking, a person performs the same speech movements he or she would perform while talking. A few years later, in 1921, Wittgenstein proposed the theory that the limits of a person's language mean the limits of that person's world. As soon as a person is unable to express a certain content for lack of vocabulary, that person is unable to think about that content, as it lies outside his or her world. Wittgenstein's theory was cast into doubt by experiments with babies and deaf people.

Present research

To find evidence for the theory that language and culture affect cognition, Lian-hwang Chiu designed an experiment with American and Asian children. The children were asked to group objects in pairs so that the objects fit together. One picture shown to the children contained a cow, a chicken and some grass, and they had to decide which two of the objects fitted together. The American children mostly grouped cow and chicken, because both belong to the group of animals. The Asian children more often combined the cow with the grass, because of the relation that a cow normally eats grass.

In 2000 Chiu repeated the experiment with words instead of pictures, with a similar result. The American children sorted their pairs taxonomically: given the words “panda”, “monkey” and “banana”, they paired “panda” and “monkey”. The Chinese children grouped relationally: they put “monkey” with “banana”. Another variation of this experiment was done with bilingual children. When the task was given in English, the children grouped the objects taxonomically; a Chinese task caused relational grouping. The language of the task clearly influenced how the objects were grouped, which means that language may affect the way people think.

The results of the many experiments regarding the relation between language, culture and cognition suggest that culture affects language and that cognition is affected by language. Our way of thinking is influenced by the way we talk, and thought can occur without language, but the exact relation between language and thought remains to be determined.

## References

1. E. B. Goldstein, "Cognitive Psychology - Connecting Mind, Research, and Everyday Experience" (2005), page 346

Books

• O'Grady, W.; Dobrovolsky, M.; Katamba, F.: Contemporary Linguistics. Copp Clark Pittmann Ltd. (1996)
• Banich, Marie T. : Neuropsychology. The neural bases of mental function. (1997)
• Goldstein, E.B.: Cognitive Psychology: Connecting Mind, Research and Everyday Experience. (2005)
• Akmajian, A.; Demers, R. A.; Farmer, A. K.; Harnish, R. M.: Linguistics - An Introduction to Language and Communication, fifth edition; The MIT Press, Cambridge, Massachusetts / London, England (2001)
• Yule, G.: The study of language, second edition, Cambridge University Press; (1996)
• Premack, D.; Premack, A.J.: The Mind of an Ape. W W Norton & Co Ltd.(1984)

Journals

• MacCorquodale, K.: On Chomsky's Review of Skinner's Verbal Behavior. Journal of the Experimental Analysis of Behavior (1970), Vol. 13, No. 1, pp. 83-99
• Stemmer, N.: Skinner's Verbal Behavior, Chomsky's review, and mentalism. Journal of the Experimental Analysis of Behavior (1990), Vol. 54, No. 3, pp. 307-315
• Chomsky, N.: Collateral Language. TRANS, Internet journal for cultural sciences.(2003) Nr. 15


# Neuroscience of Text Comprehension

## Introduction

What is happening inside my head when I listen to a sentence? How do I process written words? This chapter takes a closer look at the brain processes involved in language comprehension. In dealing with natural language understanding, we distinguish between the neuroscientific and the psycholinguistic approach. As text understanding spreads through the broad fields of cognitive psychology, linguistics and neuroscience, our main focus will lie on the intersection of the latter two, which is known as neurolinguistics.

Different brain areas need to be examined in order to find out how words and sentences are processed. For a long time scientists were restricted to drawing conclusions about the functions of brain areas from the effects of lesions to them. During the last 40 years, techniques for brain imaging and ERP measurement have been established which allow a more accurate identification of the brain parts involved in language processing.

Scientific studies of these phenomena are generally divided into research on auditory and on visual language comprehension; we will discuss both. It should also be kept in mind that it is not enough to examine English: to understand language processing in general, we have to look at non-Indo-European languages and at other language systems such as sign language. But first of all we will be concerned with a rough localization of language in the brain.

## Lateralization of language

Although functional lateralization studies find that individual differences in personality or cognitive style do not favor one hemisphere or the other, some brain functions do occur predominantly in one side of the brain: language tends to be on the left and attention on the right (Nielson, Zielinski, Ferguson, Lainhart & Anderson, 2013). There is a lot of evidence that each brain hemisphere has its own distinct functions in language comprehension. Most often, the left hemisphere is referred to as the dominant hemisphere and the right as the non-dominant one. This distinction is called lateralization (from the Latin lateralis, meaning sideways), and evidence for it was first raised by experiments with split-brain patients. Following a top-down approach, we will first discuss the right hemisphere, which may have the major role in higher-level comprehension but is not well understood. Much research has been done on the left hemisphere, and we will discuss why it might be dominant before the following sections treat the fairly well understood fundamental processing of language in this hemisphere.

### Functional asymmetry

Anatomical differences between left and right hemisphere

Initially we consider the most apparent part of a differentiation between left and right hemisphere: their differences in shape and structure. As visible to the naked eye, there is a clear asymmetry between the two halves of the human brain: the right hemisphere typically has a bigger, wider and farther extended frontal region than the left hemisphere, whereas the left hemisphere is bigger, wider and extends farther in its occipital region (M. T. Banich, "Neuropsychology", ch. 3, p. 92). A certain part of the temporal lobe's surface, the planum temporale, is significantly larger on the left side in most human brains. It is localized near Wernicke's area and other auditory association areas, so we may already speculate that the left hemisphere is more strongly involved in processes of language and speech.

In fact, such a left laterality of language functions is evident in 97% of the population (D. Purves, "Neuroscience", ch. 26, p. 649). But the percentage of human brains in which a "left dominance" of the planum temporale is traceable is only 67% (D. Purves, "Neuroscience", ch. 26, p. 648). Which other factors play a role is still unsolved.

Evidence for functional asymmetry from "split brain" patients

In hard cases of epilepsy, a rarely performed but well-known surgical method to reduce the frequency of epileptic seizures is the so-called corpus callosotomy: a radical cut through the corpus callosum, the "communication bridge" between the right and left hemisphere. The result is a "split brain". For patients whose corpus callosum is cut, the risk of accidental physical injury is mitigated, but the side effect is striking: due to this transection, the left and right halves of the brain are no longer able to communicate adequately. This situation provides the opportunity to study the differentiation of functionality between the hemispheres. The first experiments with split-brain patients were performed by Roger Sperry and his colleagues at the California Institute of Technology in the 1960s and 1970s (D. Purves, "Neuroscience", ch. 26, p. 646). They led researchers to sweeping conclusions about the laterality of speech and the organization of the human brain in general.

 A digression on the laterality of the visual system: A visual stimulus located within the left visual field projects onto the nasal (inner) part of the left eye's retina and onto the temporal (outer) part of the right eye's retina. Images on the temporal retinal region are processed in the visual cortex of the same side of the brain (ipsilateral), whereas nasal retinal information is mapped onto the opposite half of the brain (contralateral). A stimulus within the left visual field thus arrives completely in the right visual cortex to be processed. In "healthy" brains this information furthermore reaches the left hemisphere via the corpus callosum and can be integrated there. In split-brain patients this flow of signals is interrupted; the stimulus remains "invisible" to the left hemisphere.

Split Brain Experiments

The experiment we consider now is based on the laterality of the visual system: what is seen in the left half of the visual field is processed in the right hemisphere and vice versa. Using this principle, a test operator presents the picture of an object to one half of the visual field, while the participant is instructed to name the seen object and to pick it out blindly from a set of real objects with the contralateral hand.

It can be shown that a picture, for example the drawing of a die, which has been presented only to the left hemisphere, can be named by the participant ("I saw a die"), but cannot be selected with the left hand, which is controlled by the right hemisphere (no idea which object to choose from the table). Conversely, the participant is unable to name the die if it was recognized in the right hemisphere, but easily picks it out of the heap of objects on the table with the left hand.

These outcomes are clear evidence of the human brain’s functional asymmetry. The left hemisphere seems to dominate functions of speech and language processing, but is unable to handle spatial tasks like vision-independent object recognition. The right hemisphere seems to dominate spatial functions, but is unable to process words and meaning independently. In a second experiment evidence arose that a split-brain patient can only follow a written command (like "get up now!"), if it is presented to the left hemisphere. The right hemisphere can only "understand" pictorial instructions.

The following table (D. Purves, "Neuroscience", ch.26, pg.647) gives a rough distinction of functions:

| Left Hemisphere | Right Hemisphere |
|---|---|
| analysis of right visual field | analysis of left visual field |
| language processing | spatial tasks |
| writing | visuospatial tasks |
| speech | object and face recognition |

First, it is important to keep in mind that these distinctions describe only functional dominances, not exclusive competences. In cases of unilateral brain damage, one half of the brain often takes over tasks of the other. Furthermore, this experiment only works for stimuli presented for less than a second, because not only the corpus callosum but also some subcortical commissures serve interhemispheric transfer. In general, both hemispheres contribute simultaneously to performance, since they play complementary roles in processing.

 A digression on handedness: An important issue when exploring differences in brain organization is handedness, the tendency to use the left or the right hand to perform activities. Throughout history, left-handers, who comprise only about 10% of the population, have often been considered abnormal. They were said to be evil, stubborn and defiant and were, even until the mid-20th century, forced to write with their right hand. The most commonly accepted idea as to how handedness relates to the hemispheres is the brain hemisphere division of labour. Since both speaking and handiwork require fine motor skills, the presumption is that it is more efficient to have one hemisphere do both rather than dividing the work up. Since in most people the left side of the brain controls speaking, right-handedness predominates. The theory also predicts that left-handed people have a reversed division of labour. In right-handers, verbal processing is mostly done in the left hemisphere and visuospatial processing mostly in the right; 95% of right-handers control speech output with the left hemisphere and only 5% with the right. Left-handers, on the other hand, show a heterogeneous brain organization: their hemispheres are organized in the same way as in right-handers, the opposite way, or such that both hemispheres are used for verbal processing. Usually, in 70% of the cases, speech is controlled by the left hemisphere, in 15% by the right and in 15% by either hemisphere. Averaged across all types of left-handedness, left-handers thus appear to be less lateralized.
When, for example, damage occurs to the left hemisphere, the resulting visuospatial deficit is usually more severe in left-handers than in right-handers. These dissimilarities may derive, in part, from differences in brain morphology, in particular from asymmetries in the planum temporale. Still, it can be assumed that left-handers have less division of labour between their two hemispheres than right-handers and are more likely to lack neuroanatomical asymmetries. There have been many theories as to why people are left-handed and what the consequences may be; some claim that left-handers have a shorter life span, higher accident rates, or autoimmune disorders. According to the theory of Geschwind and Galaburda, sex hormones, the immune system, and profiles of cognitive abilities determine whether a person is left-handed or not. Many genetic models have also been proposed, yet the causes and consequences still remain a mystery (M. T. Banich, "Neuropsychology", ch. 3, p. 119).

### The right hemisphere

The role of the right hemisphere in text comprehension

The experiments with "split-brain" patients and evidence that will be discussed shortly suggest that the right hemisphere is usually not dominant in language comprehension (though in some cases it is, e.g. in 15% of left-handers). What is most often ascribed to the right hemisphere is broader cognitive functioning. When damage is done to this part of the brain, or when temporal regions of the right hemisphere are removed, cognitive-communication problems can result, such as impaired memory, attention problems, and poor reasoning (L. Cherney, 2001). Investigations lead to the conclusion that the right hemisphere processes information in a gestalt and holistic fashion, with a special emphasis on spatial relationships. This gives it an advantage in differentiating two distinct faces, because it examines things in a global manner; it also responds to lower spatial, and also auditory, frequencies. This division is not absolute, however: the right hemisphere is capable of reading most concrete words and can make simple grammatical comparisons (M. T. Banich, "Neuropsychology", ch. 3, p. 97). But in order to function in such a way, there must be some sort of communication between the brain halves.

Prosody - the sound envelope around words

Consider how differently the simple statement "She did it again" could be interpreted in the following context, taken from Banich:

LYNN: Alice is way into this mountain-biking thing. After breaking her arm, you'd think she'd be a little more cautious. But then yesterday, she went out and rode Captain Jack's. That trail is gnarly - narrow with lots of tree roots and rocks. And last night, I heard that she took a bad tumble on her way down.

SARA: She did it again.

Does Sara say that with rising pitch, or emphatically and with falling intonation? In the first case she is asking whether Alice has injured herself again. In the other case she asserts something she knows or imagines: that Alice managed to hurt herself a second time. Obviously the sound envelope around words - prosody - does matter.

Reason to believe that recognition of prosodic patterns occurs in the right hemisphere arises when you consider patients with damage to an anterior region of the right hemisphere. They suffer from aprosodic speech, that is, their utterances are all at the same pitch; they might sound like a robot from the 1980s. Another phenomenon arising from brain damage is dysprosodic speech, in which the patient speaks with disordered intonation. This is not due to a right-hemisphere lesion, but arises when damage to the left hemisphere is suffered. The explanation is that the left hemisphere gives ill-timed prosodic cues to the right hemisphere, and thus proper intonation is disrupted.

Beyond words: Inference from a neurological point of view

On the word level, the current studies are mostly consistent with each other and with findings from brain lesion studies. But when it comes to the more complex understanding of whole sentences, texts and storylines, the findings are split. According to E. C. Ferstl's review "The Neuroanatomy of Text Comprehension. What's the story so far?" (2004), there is evidence both for and against right hemisphere regions playing the key role in pragmatics and text comprehension. Given the current state of knowledge, we cannot say exactly how and where cognitive functions like building situation models and inferencing work together with "pure" language processes.

As this chapter is concerned with the neurology of language, it should be remarked that patients with right hemisphere damage have difficulties with inferencing. Take into account the following sentence:

With mosquitoes, gnats, and grasshoppers flying all about, she came across a small black bug that was being used to eavesdrop on her conversation.

You might have to reinterpret the sentence until you realize that "small black bug" does not refer to an animal but rather to a spy device. People with damage in the right hemisphere have problems doing so. They have difficulty following the thread of a story and making inferences about what has been said. Furthermore, they have a hard time understanding non-literal aspects of sentences such as metaphors, so they might be genuinely horrified when they hear that someone was "crying her eyes out".

The reader is referred to the next chapter for a detailed discussion of Situation Models.

### The left hemisphere

Further evidence for left hemisphere dominance: The Wada technique

Before turning to the concrete functionality of the left hemisphere, further evidence for its dominance should be provided. Of relevance is the so-called Wada technique, which allows testing which hemisphere is responsible for speech output and is usually used with epilepsy patients before surgery. It is not a brain-imaging technique, but rather simulates a brain lesion: one of the hemispheres is anesthetized by injecting a barbiturate (sodium amobarbital) into one of the patient's carotid arteries. The patient is then asked to name a number of items on cards. When he is not able to do so, despite the fact that he could do it an hour earlier, the anesthetized hemisphere is said to be the one responsible for speech output. This test must be done twice, for there is a chance that the patient produces speech bilaterally. The probability of that is not very high; in fact, according to Rasmussen & Milner 1997a (as referred to in Banich, p. 293), it occurs in only 15% of left-handers and in none of the right-handers. (It is still unclear where these differences in left-handers' brains come from.)

That means that in most people, only one hemisphere “produces” speech output – and in 96% of right-handers and 70% of left-handers, it is the left one. The findings of the brain lesion studies about asymmetry were confirmed here: Normally (in healthy right-handers), the left hemisphere controls speech output.

Explanations of left hemisphere dominance

Two theories of why the left hemisphere might have special language capacities are still discussed. The first states that the dominance of the left hemisphere is due to a specialization for precise temporal control of oral and manual articulators. The main argument here is that gestures related to a story line are most often made with the right hand - the hand controlled by the left hemisphere - whilst other hand movements appear equally often with both hands. The other theory says that the left hemisphere is dominant because it is specialized for linguistic processing; the evidence for it rests on a single patient, a speaker of American Sign Language with a left-hemisphere lesion. He could neither produce nor comprehend ASL, but could still communicate by using gestures in non-linguistic domains.

How innate is the organisational structure of the brain?

Not only cases of left-handers but also brain-imaging studies have shown examples of bilateral language processing: according to ERP studies (by Bellugi et al. 1994 and Neville et al. 1993, as cited in E. Dabrowska, "Language, Mind and Brain", 2004, p. 57), people with Williams syndrome (WS) also have no dominant hemisphere for language. WS patients have many physical and mental disorders but show, compared to their otherwise poor cognitive abilities, very good linguistic skills. These skills do not rely on one dominant hemisphere; both hemispheres contribute equally. So, whilst the majority of the population has a dominant left hemisphere for language processing, there are a variety of exceptions to that dominance. Given that there are different "organisation possibilities" in individual brains, Dabrowska (p. 57) suggests that the organisational structure of the brain could be less innate and fixed than is commonly thought.

## Auditory Language Processing

This section will explain where and how language is processed. To avoid overlap with visual processes, we will first concentrate on spoken language. Scientists have developed three approaches to gathering information about this issue. The first two approaches are based upon brain lesions, namely aphasias, whereas the most recent approach relies on the results of modern brain-imaging techniques.

### Neurological Perspective

The neurological perspective describes which pathways language follows in order to be comprehended. Scientists have revealed that there are specific areas inside the brain where specific tasks of language processing take place. The best-known areas are Broca's and Wernicke's areas.

Broca’s aphasia

Broca's and Wernicke's area

One of the best-known aphasias is Broca's aphasia, which leaves patients unable to speak fluently. Moreover, they have great difficulty producing words. Comprehension, however, is relatively intact in these patients. Because these symptoms do not result from motor problems of the vocal musculature, a brain region responsible for linguistic output must be lesioned. Broca concluded that the region responsible for fluent speech output must be located ventrally in the frontal lobe, anterior to the motor strip. Recent research suggests that Broca's aphasia also results from damage to subcortical tissue and white matter, not only cortical tissue.

Example of spontaneous Speech - Task: What do you see on this picture?
„O, yea. Det‘s a boy an‘ girl... an‘ ... a ... car ... house... light po‘ (pole). Dog an‘ a ... boat. ‚N det‘s a ... mm ... a ... coffee, an‘ reading. Det‘s a ... mm ... a ... det‘s a boy ... fishin‘.“ (Adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)

Wernicke‘s aphasia

Another very famous aphasia, known as Wernicke's aphasia, causes the opposite symptoms. Patients suffering from Wernicke's aphasia usually speak very fluently and pronounce words correctly, but the words are combined senselessly - "word salad" is the way it is most often described. Understanding what patients with Wernicke's aphasia say is especially difficult, because they use paraphasias (substitution of one word for another in verbal paraphasia, of a word with a similar meaning in semantic paraphasia, and of a phoneme in phonemic paraphasia) and neologisms. With Wernicke's aphasia, even the comprehension of simple sentences is a very difficult task. Moreover, the ability to process auditory language input, and also written language, is impaired. With some knowledge of brain structure and function, one can conclude that the area whose damage causes Wernicke's aphasia is situated at the junction of temporal, parietal and occipital regions, near Heschl's gyrus (the primary auditory area), because all the areas receiving and interpreting sensory information (posterior cortex), and those connecting the sensory information to meaning (parietal lobe), are likely to be involved.

Example of spontaneous Speech - Task: What do you see on this picture?
„Ah, yes, it‘s ah ... several things. It‘s a girl ... uncurl ... on a boat. A dog ... ‘S is another dog ... uh-oh ... long‘s ... on a boat. The lady, it‘s a young lady. An‘ a man a They were eatin‘. ‘S be place there. This ... a tree! A boat. No, this is a ... It‘s a house. Over in here ... a cake. An‘ it‘s, it‘s a lot of water. Ah, all right. I think I mentioned about that boat. I noticed a boat being there. I did mention that before ... Several things down, different things down ... a bat ... a cake ... you have a ...“ (adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)

Conduction aphasia

Wernicke supposed that a lesion of the connection between Broca's area and Wernicke's area would produce conduction aphasia, leading to severe problems in repeating just-heard sentences rather than problems with the comprehension and production of speech. Indeed, patients suffering from this kind of aphasia show an inability to reproduce sentences: they often make phonemic paraphasias, may substitute or leave out words, or might say nothing at all. Investigations determined that the "connection cable", namely the arcuate fasciculus between Wernicke's and Broca's areas, is almost invariably damaged in cases of conduction aphasia. That is why conduction aphasia is also regarded as a disconnection syndrome (a behavioural dysfunction caused by damage to the connection between two connected brain regions).

Example of the repetition of the sentence „The pastry-cook was elated“:
„The baker-er was /vaskerin/ ... uh ...“ (adapted from „Principles of Neuroscience“ 4th edition, 2000, p 1178)

Transcortical motor aphasia and global aphasia

Transcortical motor aphasia, another aphasia caused by a disruption of connections, is very similar to Broca's aphasia, with the difference that the ability to repeat is retained. In fact, people with transcortical motor aphasia often suffer from echolalia, the compulsion to repeat what they have just heard. In these patients, the brain is usually damaged outside Broca's area, sometimes more anterior and sometimes more superior to it. Individuals with transcortical sensory aphasia have symptoms similar to those of Wernicke's aphasia, except that they too show signs of echolalia. Lesions in large parts of the left hemisphere lead to global aphasia, and thus to an inability both to comprehend and to produce language, because not only Broca's or Wernicke's area is damaged. (Banich, 1997, pp. 276–282)

| Type of Aphasia | Spontaneous Speech | Paraphasia | Comprehension | Repetition | Naming |
|---|---|---|---|---|---|
| Broca's | Nonfluent | Uncommon | Good | Poor | Poor |
| Wernicke's | Fluent | Common (verbal) | Poor | Poor | Poor |
| Conduction | Fluent | Common (literal) | Good | Poor | Poor |
| Transcortical motor | Nonfluent | Uncommon | Good | Good (echolalia) | Poor |
| Transcortical sensory | Fluent | Common | Poor | Good (echolalia) | Poor |
| Global | Nonfluent | Variable | Poor | Poor | Poor |

Overview of the effects of aphasia from the neurological perspective

(Adapted from Benson, 1985, p. 32, as cited in Banich, 1997, p. 287)

### Psychological Perspective

Since the 1960s, psychologists and psycholinguists have tried to work out how language is organised and represented in the brain. Patients with aphasias provided good evidence for locating and distinguishing the three main components of language comprehension and production, namely phonology, syntax and semantics.

Phonology

Phonology deals with the processing of meaningful parts of speech arising from mere sound. A distinction is drawn between the phonemic representation of a speech sound - phonemes being the smallest units of sound that lead to different meanings (e.g. the /b/ and /p/ in bet and pet) - and the phonetic representation. The latter captures the fact that a speech sound may be produced differently in different contexts. For instance, the /p/ in pill sounds different from the /p/ in spill, since the former /p/ is aspirated and the latter is not.

To examine which parts of the brain are responsible for phonetic representation, patients with Broca's and Wernicke's aphasia can be compared. The speech of patients with Broca's aphasia is non-fluent: they have problems producing the correct phonetic and phonemic representation of a sound. People with Wernicke's aphasia show no problems speaking fluently, but they too have problems producing the right phoneme. This indicates that Broca's area is mainly involved in phonological production, and also that phonemic and phonetic representation do not take place in the same part of the brain. Scientists have examined speech production on a more precise level, the level of the distinctive features of phonemes, to see which features patients with aphasia get wrong.

A distinctive feature describes the manner and place of articulation. /t/ (as in touch) and /s/ (as in such), for example, are created at the same place but produced in a different manner. /t/ and /d/ are created at the same place and in the same manner, but they differ in voicing.

Results show that patients with fluent as well as non-fluent aphasia usually mix up only one distinctive feature, not two. In general, errors involving the place of articulation are more common than those involving voicing. Interestingly, some aphasia patients are well aware of the differing features of two phonemes, yet they are unable to produce the right sound. This suggests that although such patients have great difficulty pronouncing words correctly, their comprehension of words can still be quite good. This is characteristic of patients with Broca's aphasia, while those with Wernicke's aphasia show the contrary pattern: they are able to pronounce words correctly but cannot understand what the words mean. That is why they often utter phonologically well-formed words (neologisms) that are not real words with a meaning.
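The single-feature error pattern described above can be made concrete with a toy encoding of phonemes as feature bundles. This is a minimal sketch: the three-feature dictionaries below are an illustrative simplification of real phonological feature systems, not a standard inventory.

```python
# Hypothetical sketch: phonemes encoded as bundles of distinctive features.
# Feature values follow standard articulatory phonetics, but the flat
# place/manner/voicing encoding is a deliberate simplification.
PHONEMES = {
    "t": {"place": "alveolar", "manner": "stop",      "voiced": False},
    "d": {"place": "alveolar", "manner": "stop",      "voiced": True},
    "s": {"place": "alveolar", "manner": "fricative", "voiced": False},
    "k": {"place": "velar",    "manner": "stop",      "voiced": False},
}

def feature_distance(a: str, b: str) -> int:
    """Number of distinctive features on which two phonemes differ."""
    fa, fb = PHONEMES[a], PHONEMES[b]
    return sum(fa[f] != fb[f] for f in fa)

# /t/ vs /d/ differ only in voicing; /t/ vs /s/ only in manner.
print(feature_distance("t", "d"))  # 1
print(feature_distance("t", "s"))  # 1
print(feature_distance("d", "k"))  # 2 (place and voicing)
```

In this encoding, the typical aphasic substitution (e.g. /t/ for /d/) lies at distance 1, matching the finding that patients usually err on a single feature.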

Syntax

Syntax describes the rules by which words must be arranged to yield meaningful sentences. Speakers generally know the syntax of their mother tongue and therefore notice immediately when a word is out of order in a sentence. People with aphasia, however, often have problems parsing sentences, with respect not only to the production of language but also to the comprehension of sentences. Patients showing an inability to comprehend and produce such sentences usually have some kind of anterior aphasia, also called agrammatic aphasia. This can be revealed in tests with sentences: these patients cannot easily distinguish between active and passive voice if both agent and object could play the active part. For example, patients do not see a difference between "The boy chased the girl" and "The boy was chased by the girl", but they do understand both "The boy saw the apple" and "The apple was seen by the boy", because there they can call on semantics and do not have to rely on syntax alone. Patients with posterior aphasia, such as Wernicke's aphasia, do not show these symptoms, as their speech is fluent. Comprehension by purely syntactic means is only part of the picture, however; the semantic aspect must be considered as well. This is discussed in the next part.

Semantics

Semantics deals with the meaning of words and sentences. It has been shown that patients suffering from posterior aphasia have severe problems understanding even simple texts, although their knowledge of syntax is intact. The semantic deficit is often examined with a Token Test, a test in which patients have to point to objects referred to in simple sentences. As might be guessed, people with anterior aphasia have no problems with semantics, yet they may be unable to understand longer sentences, because knowledge of syntax is then involved as well.

| | Anterior aphasia (e.g. Broca's) | Posterior aphasia (e.g. Wernicke's) |
|---|---|---|
| Phonology | phonetic and phonemic representation affected | phonemic representation affected |
| Syntax | affected | no effect |
| Semantics | no effect | affected |

Overview of the effects of aphasia from the psychological perspective

In general, studies of lesioned patients have shown that anterior areas are needed for speech output and posterior regions for speech comprehension. As mentioned above, anterior regions are also more important for syntactic processing, while posterior regions are involved in semantic processing. But such a strict division of the parts of the brain and their responsibilities is not possible, because posterior regions must be important for more than just sentence comprehension: patients with lesions in this area can neither comprehend nor produce any speech. (Banich, 1997, pp. 283–293)

### Evidence from Advanced Neuroscience Methods

Measuring the functions of both normal and damaged brains has been possible since the 1970s, when the first brain imaging techniques were developed. With them, we are able to “watch the brain working” while the subject is e.g. listening to a joke. These methods (further described in chapter 4) show whether the earlier findings are correct and precise.

Generally, imaging shows that certain functional brain regions are much smaller than estimated from brain lesion studies, and that their boundaries are more distinct (cf. Banich, p. 294). The exact location varies from person to person, so pooling the results of many brain lesion studies previously led to overestimates of the size of functional regions. For example, stimulating brain tissue electrically (during epilepsy surgery) and observing the outcome (e.g. errors in naming tasks) has led to much better knowledge of where language-processing areas are located.

PET studies (Fiez & Petersen, 1993, as cited in Banich, p. 295) have shown that both anterior and posterior regions are in fact activated in language comprehension and processing, but with different strengths, in agreement with the lesion studies. The more active speech production an experiment requires - for example, when the presented words must be repeated - the more frontal the main activation.

Another result (Raichle et al. 1994, as referred to in Banich, p. 295) was that the familiarity of the stimuli plays a big role. When subjects were presented with well-known stimulus sets in well-known experimental tasks and had to repeat them, anterior regions were activated - regions known to cause conduction aphasia when damaged. But when the words were new, and/or the subjects had never done such a task before, the activation was recorded more posteriorly. In other words, when you repeat an unexpected word, the hardest-working brain tissue lies roughly beneath your upper left ear, but when you already knew which word would come next, it lies a bit nearer to your left eye.

## Visual Language Processing

The processing of written language takes place when we read or write and is thought to happen in a neural processing unit distinct from that for auditory language processing. Reading and writing rely on vision, whereas spoken language is first mediated by the auditory system. Language systems responsible for written language processing therefore have to interact with a sensory system different from the one involved in spoken language processing.

Visual language processing in general begins when the visual forms of letters (e.g. "c" vs. "C", print or handwritten) are mapped onto abstract letter identities. These are then mapped onto a word form and the corresponding semantic representation (the "meaning" of the word, i.e. the concept behind it). Observations of patients who lost a language ability due to brain damage revealed different disease patterns indicating a difference between perception (reading) and production (writing) of visual language, just as is found in non-visual language processing.

Alexic patients possess the ability to write while not being able to read whereas patients with agraphia are able to read but cannot write. Though alexia and agraphia often occur together as a result of damage to the angular gyrus, there were patients found having alexia without agraphia (e.g. Greenblatt 1973, as cited in M. T. Banich, “Neuropsychology“, p. 296) or having agraphia without alexia (e.g. Hécaen & Kremin, 1976, as cited in M. T. Banich, “Neuropsychology“, p. 296). This is a double dissociation that suggests separate neural control systems for reading and writing.

Since double dissociations are also found in phonological and surface dyslexia, experimental results support the theory that language production and perception respectively are subdivided into separate neural circuits. The two route model shows how these two neural circuits are believed to provide pathways from written words to thoughts and from thoughts to written words.

### Two routes model

1.1. Each route derives the meaning of a word or the word of a meaning in a different way

In essence, the two-route model contains two routes. Each of them derives the meaning of a word, or the word for a meaning, in a different way, depending on how familiar we are with the word.

Using the phonological route means having an intermediate step between perceiving and comprehending written language. This intermediate step takes place when we make use of grapheme-to-phoneme rules, a way of determining the phonological representation for a given grapheme. A grapheme is the smallest written unit of a word (e.g. "sh" in "shore") that represents a phoneme. A phoneme, on the other hand, is the smallest phonological unit of a word distinguishing it from another word that otherwise sounds the same (e.g. "bat" and "cat"). People learning to read, or encountering new words, often use the phonological route to arrive at a meaning representation: they construct phonemes for each grapheme and then combine the individual phonemes into a sound pattern that is associated with a certain meaning (see 1.1).

The direct route is supposed to work without an intermediate phonological representation, so that print is directly associated with word-meaning. A situation in which the direct route has to be taken is when reading an irregular word like “colonel”. Application of grapheme-to-phoneme rules would lead to an incorrect phonological representation.

According to Taft (1982, as referred to in M. T. Banich, "Neuropsychology", p. 297) and others, the direct route is supposed to be faster than the phonological route, since it does not make use of a "phonological detour", and is therefore said to be used for known words (see 1.1). However, this is just one point of view; others, like Chastain (1987, as referred to in M. T. Banich, "Neuropsychology", p. 297), postulate a reliance on the phonological route even in skilled readers.
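The two routes can be sketched as a toy program. This is a minimal sketch under stated assumptions: the lexicon entries, the pseudo-phonetic spellings, and the handful of grapheme-to-phoneme rules are invented for illustration and are not real linguistic data.

```python
# Toy sketch of the two-route model of reading. Lexicon entries and
# grapheme-to-phoneme rules are illustrative inventions.
LEXICON = {
    # Direct route: known visual word forms map straight to a stored sound.
    "colonel": "KER-nul",   # irregular word: rules would mispronounce it
    "yacht": "YOT",
}

GP_RULES = [
    # Phonological route: longest-match grapheme-to-phoneme rules.
    ("sh", "SH"), ("ch", "CH"),
    ("a", "A"), ("c", "K"), ("e", "E"), ("l", "L"), ("n", "N"),
    ("o", "O"), ("r", "R"), ("t", "T"),
]

def phonological_route(word: str) -> str:
    """Assemble a pronunciation grapheme by grapheme (regularises everything)."""
    out, i = [], 0
    while i < len(word):
        for grapheme, phoneme in GP_RULES:
            if word.startswith(grapheme, i):
                out.append(phoneme)
                i += len(grapheme)
                break
        else:
            i += 1  # unknown grapheme: skip it (a real model would do better)
    return "-".join(out)

def read_word(word: str) -> str:
    # Skilled readers: direct route for familiar words, rules for novel ones.
    if word in LEXICON:
        return LEXICON[word]
    return phonological_route(word)

print(read_word("colonel"))  # KER-nul  (direct route)
print(read_word("chore"))    # CH-O-R-E (novel string, phonological route)
```

Surface alexia corresponds to losing `LEXICON` and forcing everything through `phonological_route` (producing regularity errors on "colonel"), while phonological alexia corresponds to losing `phonological_route` and being unable to read non-words.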

### The processing of written language in reading

1.2. Regularity effects are common in cases of surface alexia

Several kinds of alexia can be differentiated, often depending on whether the phonological or the direct route is impaired. Patients with brain lesions participated in experiments in which they had to read out regular words, irregular words and non-words. Reading non-words, for example, requires access to the phonological route, since there can be no "stored" meaning or sound representation for such a combination of letters.

Patients with a lesion in temporal structures of the left hemisphere (the exact location varies) suffer from so-called surface alexia. They show characteristic symptoms that suggest a strong reliance on the phonological route. Very common are regularity effects, that is, mispronunciations of words whose spelling is irregular, like "colonel" or "yacht" (see 1.2): these words are pronounced according to grapheme-to-phoneme rules, which for such words is simply wrong. High-frequency irregularly spelled words may nevertheless be preserved in some cases.

Furthermore, the rule-derived pronunciation of a word is reflected in reading-comprehension errors. When asked to describe the meaning of the word "bear", people suffering from surface alexia might answer something like "a beverage", because the resulting sound pattern of "bear" was the same for them as that of "beer". This characteristic goes along with a tendency to confuse homophones (words that sound the same but are spelled differently and have different meanings). However, these patients are still able to read non-words with a regular spelling, since they can apply grapheme-to-phoneme rules to them.

1.3. Patients with phonological alexia have to rely on the direct route

In contrast, phonological alexia is characterised by a disruption in the phonological route due to lesions in more posterior temporal structures of the left hemisphere. Patients can read familiar regular and irregular words by making use of stored information about the meaning associated with that particular visual form (so there is no regularity effect like in surface alexia). However, they are unable to process unknown words or non-words, since they have to rely on the direct route (see 1.3).

Word class effects and morphological errors are common, too. Nouns, for example, are read better than function words and sometimes even better than verbs. Affixes which do not change the grammatical class or meaning of a word (inflectional affixes) are often substituted (e.g. “farmer” instead of “farming”). Furthermore, concrete words are read with a lower error rate than abstract ones like “freedom” (concreteness effect).

Deep Alexia shares many symptomatic features with phonological alexia such as an inability to read out non-words. Just as in phonological alexia, patients make mistakes on word inflections as well as function words and show visually based errors on abstract words (“desire” → “desert”). In addition to that, people with deep alexia misread words as different words with a strongly related meaning (“woods” instead of “forest”), a phenomenon referred to as semantic paralexia. Coltheart (as referred to in the “Handbook of Neurolinguistics”, ch.41-3, p. 563) postulates that reading in deep dyslexia is mediated by the right hemisphere. He suggests that when large lesions affecting language abilities other than reading prevent access to the left hemisphere, the right-hemispheric language store is used. Lexical entries stored there are accessed and used as input to left-hemisphere output systems.

Overview alexia

### The processing of written language in spelling

1.4. The phonological route is supposed to make use of phoneme-to-grapheme rules while the direct route links thought to writing without an intermediary phonetic representation

Just as in reading, two separate routes (a phonological and a direct route) are thought to exist. The phonological route is supposed to make use of phoneme-to-grapheme rules, while the direct route links thought to writing without an intermediary phonetic representation (see 1.4).

It should be noted here that there is a difference between phoneme-to-grapheme rules (used for spelling) and grapheme-to-phoneme rules, in that one is not simply the reverse of the other. For the grapheme "k", the most common phoneme is /k/. The most common grapheme for the phoneme /k/, however, is "c".

Phonological agraphia is caused by a lesion in the left supramarginal gyrus, which is located in the parietal lobe above the posterior section of the Sylvian fissure (M. T. Banich, "Neuropsychology", p. 299). The ability to write regular and irregular words is preserved, while the ability to write non-words is not. This, together with poor retrieval of affixes (which are not stored lexically), indicates an inability to associate spoken words with their orthographic form via phoneme-to-grapheme rules. Patients rely on the direct route, which means that they use orthographic word-form representations stored in lexical memory.

Lesions at the junction of the posterior parietal lobe and the parieto-occipital region cause so-called lexical agraphia, sometimes also referred to as surface agraphia. As the name indicates, it parallels surface alexia in that patients have difficulty accessing lexical-orthographic representations of words. Lexical agraphia is characterised by poor spelling of irregular words but good spelling of regular words and non-words. When asked to spell irregular words, patients often commit regularization errors, so that the word is spelled phonologically (for example, "whisk" might be written as "wisque").
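The asymmetry between the two rule sets can be made concrete with a toy mapping. The dictionaries below are built around the "k"/"c" example from the text and are illustrative only, not real corpus statistics.

```python
# Most common phoneme for each grapheme (the reading direction) and most
# common grapheme for each phoneme (the spelling direction). Entries are
# illustrative, centred on the "k"/"c" example.
most_common_phoneme = {"k": "/k/", "c": "/k/"}
most_common_grapheme = {"/k/": "c"}

# Reading "k" yields /k/, but spelling /k/ yields "c": the two mappings
# are not inverses of each other.
g = "k"
p = most_common_phoneme[g]        # "/k/"
g_back = most_common_grapheme[p]  # "c", not "k"
print(g, "->", p, "->", g_back)   # k -> /k/ -> c
```

Because round-tripping a grapheme through the two tables need not return the original grapheme, a patient who loses one rule set cannot simply run the other one backwards.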

Overview agraphia

### Evidence from Advanced Neuroscience Methods

How can we find evidence for the two-route theory? So far, neuroscientific research has not been able to ascertain that there are neural circuits implementing a system like the one described above. The problem with finding evidence for visual language processing along two routes, as opposed to one route (as proposed e.g. by Seidenberg & McClelland, as referred to in M. T. Banich, "Neuropsychology", p. 308), is that it is not clear what characteristic brain activation would indicate whether processing happens along one route or two.

To investigate this, neuroimaging studies examine correlations between the activation of the angular gyrus, which is thought to be a crucial brain area in written language processing, and other brain regions. It was found that during the reading of non-words (which should strongly engage the phonological route), activation correlates mostly with brain regions involved in phonological processing, e.g. superior temporal regions (BA 22) and Broca's area. During the reading of normal words (which should strongly engage the direct route), the highest activation was found in occipital and ventral cortex. This at least suggests that there are two distinct routes. However, these conclusions are drawn from the highest correlations, which does not make them certain. What neuroimaging studies do show is that the use of the phonological and the direct route strongly overlaps - which is rather unspectacular, since it is quite reasonable that fluent readers mix both routes. Other studies additionally provide data in which the brain regions activated during the reading of non-words and of normal words differ.

ERP studies suggest that the left hemisphere possesses some sort of mechanism that responds to combinations of letters in a string, to its orthography, and/or to the phonological representation of the string. ERP waves during the early analysis of the visual form of a string differ depending on whether the string is a correct word or just pronounceable nonsense (Posner & McCandliss, 1993, as referred to in M. T. Banich, "Neuropsychology", pp. 307-308). This indicates that the mechanism is sensitive to the distinction between correct and incorrect words.

The right hemisphere, in contrast to the left, is not involved in abstract mapping of word meaning but is rather responsible for encoding word-specific visual forms. ERP and PET studies provide evidence that the right hemisphere responds more strongly than the left hemisphere to letter-like strings. Moreover, divided visual field studies reveal that the right hemisphere can better distinguish between different shapes of the same letter (e.g. in different handwritings) than the left hemisphere. The division of labour in visual language processing between the two hemispheres is thus that the right hemisphere first recognizes a written word as a letter sequence, regardless of its exact visual form; then the language network in the left hemisphere builds up an abstract representation of the word, which constitutes the comprehension of the word.

## Other symbolic systems

Most neurolinguistic research is concerned with the production and comprehension of the English language, either written or spoken. However, looking at different language systems from a neuroscientific perspective can substantiate as well as differentiate acknowledged theories of language processing. The following section shows how neurological research on three symbolic systems, each differing from English in some aspect, has made it possible to distinguish – at least to some extent – brain regions that deal with the modality of the language (and therefore may vary from language to language, depending on whether the language in question is e.g. spoken or signed) from brain regions that seem to be necessary for language processing in general – regardless of whether we are dealing with signed, spoken, or even musical language.

### Kana and Kanji

Kana and Kanji are the two writing systems used in parallel in the Japanese language. Since they take different approaches to representing words, studying Japanese patients with alexia is a great opportunity to test the hypothesis, explicated in the previous section, that there are two different routes to meaning.

The English writing system is phonological – each grapheme in written English roughly represents one speech sound, a consonant or a vowel. There are, however, other possible approaches to writing down a spoken language. In syllabic systems like the Japanese kana, one grapheme stands for one syllable. If written English were syllabic, it could e.g. include a symbol for the syllable “nut”, appearing both in the words “donut” and “peanut”. Syllabic systems are sound-based – since the graphemes represent units of spoken words rather than meaning directly, an auditory representation of the word has to be created in order to arrive at the meaning. Therefore, reading syllabic systems should require an intact phonological route. In addition to kana, the Japanese also use a logographic writing system called kanji, in which one grapheme represents a whole word or concept. Unlike phonological and syllabic systems, logographic systems do not involve systematic relationships between visual forms and the way they are pronounced – instead, a visual form is directly associated with the pronunciation and meaning of the corresponding word. Reading kanji should therefore require an intact direct route to meaning.

The hypothesis about the existence of two different routes to meaning has been confirmed by the fact that after brain damage, there can be a double dissociation between kana and kanji. Some Japanese patients can thus read kana but not kanji (surface alexia), whereas others can read kanji but not kana (phonological alexia). In addition, there is evidence that different brain regions of native Japanese speakers are active while reading kana and kanji, although, as in the case of native English speakers, these regions also overlap.

Since the distinction between the direct and the phonological route also makes sense in the case of Japanese, it may be a general principle common to all written languages that reading relies on two (at least partially) independent systems, both using different strategies to recover the meaning of a written word – either associating the visual form directly with the meaning (the direct route), or using the auditory representation as an intermediary between the visual form and the meaning of the word (the phonological route).

The Japanese Kana sign for the syllable "mu"
The Japanese Kanji sign for the concept "book", "writing", or "calligraphy"

### Sign Language

From a linguistic perspective, sign languages share many features with spoken languages – there are many regionally bounded sign languages, each with a distinct grammar and lexicon. Since sign languages at the same time differ from spoken languages in the way the words are “uttered”, i.e. in the modality, neuroscientific research on them can yield valuable insights into the question of whether there are general neural mechanisms dealing with language, regardless of its modality.

Structure of SL

Sign languages are phonological languages - every meaningful sign consists of several phonemes (phonemes used to be called cheremes (Greek χερι: hand) until their cognitive equivalence to phonemes in spoken languages was realized) that carry no meaning as such, but are nevertheless important to distinguish the meaning of the sign. One distinctive feature of SL phonemes is the place of articulation – one hand shape can have different meanings depending on whether it’s produced at the eye-, nose-, or chin-level. Other features determining the meaning of a sign are hand shape, palm orientation, movement, and non-manual markers (e.g. facial expressions).

To express syntactic relationships, sign languages exploit the advantages of the visuo-spatial medium in which the signs are produced – the syntactic structure of sign languages therefore often differs from that of spoken languages. Two important features of most sign languages' grammars (including American Sign Language (ASL), Deutsche Gebärdensprache (DGS) and several other major sign languages) are directionality and simultaneous encoding of elements of information:

• Directionality

The direction in which the sign is made often determines the subject and the object of a sentence. Nouns in SL can be 'linked' to a particular point in space, and later in the discourse they can be referred to by pointing to that same spot again (this is functionally related to pronouns in English). The object and the subject can then be switched by changing the direction in which the sign for a transitive verb is made.

• Simultaneous encoding of elements of information

The visual medium also makes it possible to encode several pieces of information simultaneously. Consider e.g. the sentence "The flight was long and I didn't enjoy it". In English, the information about the duration and unpleasantness of the flight has to be encoded sequentially by adding more words to the sentence. To enrich the utterance "The flight was long” with the information about the unpleasantness of the flight, another sentence (“I did not enjoy it") has to be added to the original one. So, in order to convey more information, the length of the original sentence must grow. In sign language, however, an increase of information in an utterance does not necessarily increase the length of the utterance. To convey information about the unpleasantness of a long flight experienced in the past, one can just use the single sign for "flight" with the past tense marker, moved in a way that represents the attribute "long", combined with the facial expression of disaffection. Since all these features are signed simultaneously, no additional time is needed to utter "The flight was long and I didn't enjoy it" as compared to "The flight was long".
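The contrast between sequential and simultaneous encoding can be made concrete with a small data structure. The following Python sketch is only an illustration; the feature names and values are invented, not actual ASL or DGS phonology:

```python
# A sign as a bundle of simultaneously articulated features
# (feature names and values are invented for illustration).
from dataclasses import dataclass

@dataclass
class Sign:
    hand_shape: str
    location: str        # place of articulation (e.g. chin level)
    movement: str        # manner of movement can encode e.g. "long"
    non_manual: str      # facial expression, e.g. disaffection
    tense: str = "present"

# "The flight was long and I didn't enjoy it" in a single sign:
flight = Sign(hand_shape="flat",
              location="neutral space",
              movement="slow, extended",   # encodes "long"
              non_manual="disaffection",   # encodes "didn't enjoy it"
              tense="past")                # past tense marker

# Adding information changed feature values, not utterance length.
print(flight)
```

The point of the sketch is that enriching the message fills in feature slots that are articulated at the same time, whereas an English rendering would have to append more words.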

Neurology of SL

Since sentences in SL are encoded visually, and since its grammar is often based on visual rather than sequential relationships among different signs, one might suggest that the processing of SL mainly depends on the right hemisphere, which is mainly concerned with the performance of visual and spatial tasks. However, there is evidence suggesting that processing of SL and of spoken language might be equally dependent on the left hemisphere, i.e. that the same basic neural mechanism may be responsible for all language functioning, regardless of its modality (i.e. whether the language is spoken or signed).

The importance of the left hemisphere in SL processing is indicated e.g. by the fact that signers with a damaged right hemisphere need not be aphasic, whereas, as in the case of hearing subjects, lesions in the left hemisphere of signers can result in subtle linguistic difficulties (Gordon, 2003). Furthermore, studies of aphasic native signers have shown that damage to anterior portions of the left hemisphere (Broca’s area) results in a syndrome similar to Broca’s aphasia – the patients lose fluency of communication and cannot correctly use syntactic markers or inflect verbs, although the words they sign are semantically appropriate. In contrast, patients with damage to posterior portions of the superior temporal gyrus (Wernicke’s area) can still properly inflect verbs and set up and retrieve nouns from a discourse locus, but the sequences they sign have no meaning (Poizner, Klima & Bellugi, 1987). So, as in the case of spoken languages, anterior and posterior portions of the left hemisphere seem to be responsible for the syntax and semantics of the language, respectively. Hence, it is not essential for the “syntax processing mechanisms” of the brain whether the syntax is conveyed simultaneously through spatial markers or successively through word order and morphemes added to words – the same underlying mechanisms might be responsible for syntax in both cases.

Further evidence for the same underlying mechanisms for spoken and signed languages comes from studies in which fMRI has been used to compare the language processing of:

• 1. congenitally deaf native signers of British Sign Language,
• 2. hearing native signers of BSL (usually hearing children of deaf parents)
• 3. hearing signers who have learned BSL after puberty
• 4. non-signing subjects

Investigating language processing in these different groups allows some distinctions to be made between factors influencing language organization in the brain – e.g. to what extent does deafness influence the organization of language in the brain as compared to just having SL as a first language (1 vs. 2), to what extent does learning SL from birth differ from learning it after puberty (1, 2 vs. 3), and to what extent is language organized differently in speakers as compared to signers (1, 2, 3 vs. 4).

These studies have shown that typical language areas in the left hemisphere are activated both in native English speakers given written stimuli and in native signers given signs as stimuli. Moreover, there are also areas that are equally activated in deaf subjects processing sign language and in hearing subjects processing spoken language – a finding which suggests that these areas constitute the core language system regardless of the language modality (Gordon, 2003).

Unlike speakers, however, signers also show a strong activation of the right hemisphere. This is partly due to the necessity to process visuo-spatial information. Some of these areas, however (e.g. the angular gyrus), are only activated in native signers and not in hearing subjects who learned SL after puberty. This suggests that the way sign languages (and languages in general) are learned changes with time: late learners' brains are unable to recruit certain brain regions specialized for processing this language (Newman et al., 1998).

We have seen that evidence from aphasias as well as from neuroimaging suggests that the same underlying neural mechanisms are responsible for signed and spoken languages. It is natural to ask whether these neural mechanisms are even more general, i.e. whether they are able to process any type of symbolic system that has some syntax and semantics. One example of such a more general symbolic system is music.

### Music

Like language, music is a human universal involving combinatorial principles that govern the organization of discrete elements (tones) into structures (phrases) that convey meaning – music is a symbolic system with a special kind of syntax and semantics. It is therefore interesting to ask whether music and natural language share some neural mechanisms: whether processing of music depends on processing of language or the other way round, or whether the mechanisms underlying them are completely separate. By investigating the neural mechanisms underlying music we might find out whether the neural processes behind language are unique to the domain of natural language, i.e. whether language is modular. Up to now, research in the neurobiology of music has yielded contradictory evidence regarding these questions.

On the one hand, there is evidence of a double dissociation between language and music abilities. People suffering from amusia are unable to perceive harmony or to remember and recognize even very simple melodies; at the same time they have no problems comprehending or producing speech. There is even a case of a patient who developed amusia without aprosodia: although she could not recognize tone in musical sequences, she could still make use of pitch, loudness, rate, and rhythm to convey meaning in spoken language (Pearce, 2005). This highly selective problem in processing music can occur as a result of brain damage or be inborn; in some cases it runs in families, suggesting a genetic component. The complementary syndrome to amusia also exists: after suffering brain damage in the left hemisphere, the Russian composer Shebalin lost his speech functions, but his musical abilities remained intact (Zatorre, 2005).

On the other hand, neuroimaging data suggest that language and music have a common mechanism for processing syntactic structures. The P600 ERP in Broca’s area, measured as a response to ungrammatical sentences, is also elicited in subjects listening to musical chord sequences lacking harmony (Patel, 2003) – the expectation of typical sequences in music could therefore be mediated by the same neural mechanisms as the expectation of grammatical sequences in language.

A possible solution to this apparent contradiction is the dual system approach (Patel, 2003) according to which music and language share some procedural mechanisms (frontal brain areas) responsible for processing the general aspects of syntax, but in both cases these mechanisms operate on different representations (posterior brain areas) – notes in case of music and words in case of language.

## Outlook

Many questions remain to be answered; it is e.g. still unclear whether or not there is a distinct language module (one that could be removed without affecting any other brain function). As Evelyn C. Ferstl points out in her review, the next step after exploring distinct small regions responsible for subtasks of language processing will be to find out how they work together and build up the language network.

Books - English

• Brigitte Stemmer, Harry A. Whitaker: Handbook of Neurolinguistics. Academic Press (1998). ISBN 0126660557
• Marie T. Banich: Neuropsychology. The Neural Bases of Mental Function (1997).
• Ewa Dąbrowska: Language, Mind and Brain. Edinburgh University Press Ltd. (2004)
• Evelyn C. Ferstl: The functional neuroanatomy of text comprehension. What's the story so far? (a review) In: Schmalhofer, F. & Perfetti, C. A. (Eds.), Higher Level Language Processes in the Brain: Inference and Comprehension Processes. Lawrence Erlbaum (2004)
• Poizner, Klima & Bellugi: What the Hands Reveal about the Brain. MIT Press (1987)
• N. Chomsky: Aspects of the Theory of Syntax. MIT Press (1965). ISBN 0262530074
• Neville & Bavelier: Variability in the effects of experience on the development of cerebral specializations: Insights from the study of deaf individuals. Washington, D.C.: US Government Printing Office (1998)
• Newman et al.: Effects of Age of Acquisition on Cortical Organization for American Sign Language: an fMRI Study. NeuroImage, 7(4), part 2 (1998)

Books - German

• Müller, H. M. & Rickert, G. (Hrsg.): Neurokognition der Sprache. Stauffenberg Verlag (2003)


# Situation Models and Inferencing

## Introduction

An important function and property of the human cognitive system is the ability to extract important information from textually and verbally described situations. This ability plays a vital role in understanding and remembering. But what happens to this information after it is extracted? How do we represent it, and how do we use it for inferencing? In this chapter we introduce the concept of a “situation model” (van Dijk & Kintsch, 1983; “mental model”: Johnson-Laird, 1983), which is the mental representation of what a text is about. We discuss what these representations might look like and present various experiments that try to tackle these questions empirically. By assuming situations to be encoded by perceptual symbols (Barsalou, 1999), the theory of situation models touches many aspects of cognitive philosophy, linguistics and artificial intelligence. At the beginning of this chapter, we mention why situation models are important and what we use them for. Next we focus on the theory itself by introducing the four primary types of information – the situation model components – its levels of representation, and finally two other basic types of knowledge used in situation model construction and processing (general world knowledge and referent-specific knowledge).

Situation models not only form a central concept in theories of situated cognition that helps us understand how situational information is collected and how new information gets integrated, but they can also explain many other phenomena. According to van Dijk & Kintsch, situation models are responsible for processes like domain expertise, translation, learning from multiple sources, or completely understanding situations just by reading about them. According to most researchers in this area, situation models consist of five dimensions, which we will explain later. When new information concerning one of these dimensions is extracted, the situation model is changed accordingly. The bigger the change in the situation model, the more time the reader needs to understand the situation in the light of the new information. If there are contradictions, e.g. new information which does not fit into the model, the reader fails to understand the text and probably has to reread parts of it to build up a better model. It was shown in several experiments that it is easier to understand texts that involve only small changes in the five dimensions of text understanding. It has also been found that it is easier for readers to understand a text if the important information is mentioned more explicitly. For this reason, several researchers have written about the importance of fore-grounding important information (see Zwaan & Radvansky, 1998, for a detailed list). The other important issue about situation models is their multidimensionality. Here the important questions are how the different dimensions are related and what their weight is in constructing the model. Some researchers claim that the weight of the dimensions shifts according to the situation which is described. Introducing such claims will be the final part of this chapter and aims to introduce you to current and future research goals.
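The claim that bigger changes across the dimensions cost more processing time can be sketched as a toy computation. The dimension names below follow the chapter; the dictionary representation and the function are invented purely for illustration:

```python
# Toy sketch: counting how many situation-model dimensions shift
# when new information arrives (representation invented).
DIMENSIONS = ["space", "time", "causation", "intentionality", "protagonist"]

def shift_count(model, new_info):
    """Number of dimensions on which the new information differs."""
    return sum(1 for d in DIMENSIONS
               if d in new_info and new_info[d] != model.get(d))

model  = {"space": "kitchen", "time": "morning", "protagonist": "Knut"}
update = {"space": "office",  "time": "evening", "protagonist": "Knut"}

# Two dimensions shift, so the account predicts longer reading times
# than for an update that continues the current place and time.
print(shift_count(model, update))   # 2
```

A real comprehender obviously does far more than count mismatches, but the sketch captures the prediction that reading time grows with the number of dimensions that change.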

#### The VIP: Rolf A. Zwaan

Rolf A. Zwaan, born September 13, 1962 in Rotterdam (the Netherlands), is a very important person for this topic, since he has done the most research in this area (92 publications in total), and most of our data is taken from his work. Zwaan did his MA (1986) and his Ph.D. (1992) at Utrecht University (Netherlands), both cum laude. Since then he has collected multiple awards, like the Developing Scholar Award (Florida State University, 1999) or a Fellowship of the Hanse Institute for Advanced Study (Delmenhorst, Germany, 2003), and has become a member of several professional organisations like the Psychonomic Society, the Cognitive Science Society and the American Psychological Society. Since 2007 he has been Chair of Biological & Cognitive Psychology at Erasmus University Rotterdam (Netherlands).

## Why do we need situation models?

A lot of tasks based on language processing can only be explained by the use of situation models. The so-called situation model or mental model consists of five different dimensions, which refer to different sources of information. Situation models are useful for comprehending a text or even a simple sentence, and the comprehension and combination of several texts and sentences is explained much better by this theory. In the following, some examples illustrate why we really need situation models.

#### Integration of information across sentences

Integration of information across sentences is more than just understanding a set of sentences. For example:

“Gerhard Schroeder is in front of some journalists. Looking forward to new ideas is nothing special for the Ex-German chancellor. It is like in the good old days in 1971 when the leader of the Jusos was behind the polls and talked about changes.”

This example only makes sense to the reader if he is aware that “Gerhard Schroeder”, the “Ex-German chancellor” and “the leader of the Jusos in 1971” are one and the same person. If we build up a situation model, “Gerhard Schroeder” is our token in this example. Every bit of information which comes up will be linked to this token, based on grammatical and world knowledge. The definite article in the second sentence refers to the individual in the first sentence; this is based on grammatical knowledge, since a definite article indicates a connection to an individual in a previous sentence. If there were an indefinite article instead, we would have to build a new token for a new individual. The third sentence is linked to the token by domain knowledge: it has to be known that “Gerhard Schroeder” was the leader of the Jusos in 1971, otherwise the connection can only be guessed. We can see that an integrated situation model is needed to comprehend the connection between the three sentences.
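The token-building process just described can be sketched algorithmically. In this toy Python version (the representation, function name, and knowledge table are all invented for illustration), grammatical knowledge decides whether a phrase opens a new token, and world knowledge resolves a description to a known individual:

```python
# Toy sketch of token building during comprehension (invented data).
WORLD_KNOWLEDGE = {
    "Ex-German chancellor": "Gerhard Schroeder",
    "leader of the Jusos in 1971": "Gerhard Schroeder",
}

def integrate(tokens, phrase, article="definite"):
    """Link a noun phrase to an existing token or create a new one."""
    referent = WORLD_KNOWLEDGE.get(phrase, phrase)   # domain knowledge
    if article == "indefinite" or referent not in tokens:
        tokens[referent] = []                        # new individual
    tokens[referent].append(phrase)                  # link description
    return tokens

tokens = {}
for phrase in ("Gerhard Schroeder", "Ex-German chancellor",
               "leader of the Jusos in 1971"):
    integrate(tokens, phrase)

# All three descriptions end up linked to a single token.
print(len(tokens), tokens["Gerhard Schroeder"])
```

Without the world-knowledge table, the third description would open a spurious second token, mirroring a reader who does not know Schroeder's Juso past.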

#### Explanation of similarities in comprehension performances across modalities

The explanation of similarities in comprehension performance across modalities requires the use of situation models. If we read a newspaper article, watch a report on television or listen to a report on the radio, we come to a similar understanding of the same information conveyed through different modalities. Thus we create a mental representation of the information or event that does not depend on the modality itself. Furthermore, there is empirical evidence for this intuition: Baggett (1979) found that students who saw a short film and students who heard a spoken version of its events produced structurally similar recall protocols. There were differences between the protocols of the two groups, but these were due to content aspects; for example, the text version explicitly stated that a boy was on his way to school, whereas in the movie this had to be inferred.

#### Domain expertise on comprehension

Situation models also help explain the effects of domain expertise on comprehension: person A, whose verbal skills are weaker than person B's, can outperform person B if A has more knowledge of the topic domain. Evidence for this intuition comes from a study by Schneider and Körkel (1989), who compared the recall of soccer “experts” and novices of a text about a soccer match among 3rd, 5th and 7th graders. Strikingly, the 3rd grade soccer experts outperformed the 7th grade novices: the experts recalled 54% of the units in the text, the novices only 42%. The explanation is quite simple: the 3rd grade experts built up a situation model and used knowledge from their long-term memory (Ericsson & Kintsch, 1995), whereas the 7th grade novices had only the text from which to build a situation model. Further studies support the theory that domain expertise can compensate for lower verbal ability, e.g. Fincher-Kiefer, Post, Greene & Voss (1988) or Yekovich, Walker, Ogle & Thompson (1990).

#### Explanation of translation skills

Another example of why we need situation models is the attempt to explain translation. Translating a sentence or a text from one language to another is not simply done by translating each word and building a new sentence structure until the sentence sounds right. Consider the example of a Dutch sentence:

Now we can conclude that translation between Dutch and English does not operate on the lexical-semantic level; it operates on the situation level – in this example, “don’t do something (action) before you have done something else (another action)”. Other studies found that the ability to construct situation models during translation is important for translation skill (Zwaan, Ericsson, Lally and Hill, 1998).

#### Multiple source learning

People are able to learn about a domain from multiple documents. This phenomenon, too, can be explained by situation models. If, for example, we try to learn something about the “Cold War”, we use different documents as sources of information. The information in one document may be similar to that in others; referents can be the same, and certain relationships in the “Cold War” can only be figured out by using several documents. So what we are really doing when learning and reasoning is integrating information from different documents into a common situation model, which holds the information we have learned in an organized form.

We have seen that we need situation models in different tasks of language processing, but situation models are not needed in all of them. An example is proofreading. A proofreader checks every word for its correctness. This ability does not involve the construction of situation models; the task uses the resources of long-term memory, in which the correct spelling of each word is stored. The procedure is roughly:

This is done word by word. It is unnecessary to create situation models for this language processing task.
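The word-by-word procedure can be sketched as a simple lexicon lookup. The following Python fragment is illustrative only (the stored lexicon is invented); the point is that it deliberately builds no situation model:

```python
# Toy sketch of proofreading as word-by-word lexicon lookup
# (lexicon contents invented for illustration).
LEXICON = {"the", "flight", "was", "long"}

def proofread(text):
    """Return the words whose spelling is not stored in memory."""
    return [w for w in text.lower().split() if w not in LEXICON]

print(proofread("The flite was long"))   # ['flite']
```

Each word is checked in isolation against stored spellings; no representation of the described situation is ever constructed.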

## Multidimensionality of Situation Models

### Space

Very often, objects that are spatially close to us are more relevant than more distant objects. Therefore, one would expect the same for situation models. Consistent with this idea, comprehenders are slower to recognise words denoting objects distant from a protagonist than words denoting objects close to the protagonist (Glenberg, Meyer & Lindem, 1987).

When comprehenders have extensive knowledge of the spatial layout of the setting of the story (e.g., a building), they update their representations according to the location and goals of the protagonist. They have the fastest mental access to the room that the protagonist is currently in or is heading to. For example, they can more readily say whether or not two objects are in the same room if the room mentioned is one of these rooms than if it is some other room in the building (e.g., Morrow, Greenspan, & Bower, 1987). This makes perfect sense intuitively because these are the rooms that would be relevant to us if we were in the situation.

People’s interpretation of the meaning of a verb denoting movement of people or objects in space, such as to approach, depends on their situation models. For example, comprehenders interpret the meaning of approach differently in The tractor is just approaching the fence than in The mouse is just approaching the fence. Specifically, they interpret the distance between the figure and the landmark as being longer when the figure is large (tractor) compared with when it is small (mouse). The comprehenders’ interpretation also depends on the size of the landmark and the speed of the figure (Morrow & Clark, 1988). Apparently, comprehenders behave as if they are actually standing in the situation, looking at the tractor or mouse approaching a fence.

### Time

We assume by default that events are narrated in their chronological order, with nothing left out. Presumably this assumption exists because this is how we experience events in everyday life. Events occur to us in a continuous flow, sometimes in close succession, sometimes in parallel, and often partially overlapping. Language allows us to deviate from chronological order, however. For example, we can say, “Before the psychologist submitted the manuscript, the journal changed its policy.” The psychologist submitting the manuscript is reported first, even though it was the last of the two events to occur. If people construct a situation model, this sentence should be more difficult to process than its chronological counterpart (the same sentence, but beginning with “After”). Recent neuroscientific evidence supports this prediction. Event-related brain potential (ERP) measurements indicate that “before” sentences elicit, within 300 ms, greater negativity than “after” sentences. This difference in potential is primarily located in the left anterior part of the brain and is indicative of greater cognitive effort (Münte, Schiltz, & Kutas, 1998). In real life, events follow each other seamlessly. However, narratives can have temporal discontinuities, when writers omit events not relevant to the plot. Such temporal gaps, typically signalled by phrases such as a few days later, are quite common in narratives. Nonetheless, they present a departure from everyday experience. Therefore, time shifts should lead to (minor) disruptions of the comprehension process. And they do. Reading times for sentences that introduce a time shift tend to be longer than those for sentences that do not (Zwaan, 1996).

All other things being equal, events that happened just recently are more accessible to us than events that happened a while ago.
Thus, in a situation model, enter should be less accessible after An hour ago, John entered the building than after A moment ago, John entered the building.
Recent probe-word recognition experiments support this prediction (e.g., Zwaan, 1996).


### Causation

As we interact with the environment, we have a strong tendency to interpret event sequences as causal sequences. It is important to note that, just as we infer the goals of a protagonist, we have to infer causality; we cannot perceive it directly. Singer and his colleagues (e.g., Singer, Halldorson, Lear, & Andrusiak, 1992) have investigated how readers use their world knowledge to validate causal connections between narrated events. Subjects read sentence pairs, such as 1a and then 1b or 1a’ and then 1b, and were subsequently presented with a question like 1c:

(1a) Mark poured the bucket of water on the bonfire.

(1a’) Mark placed the bucket of water by the bonfire.

(1b) The bonfire went out.

(1c) Does water extinguish fire?

Subjects were faster in responding to 1c after the sequence 1a-1b than after 1a’-1b. According to Singer, the reason for the speed difference is that the knowledge that water extinguishes fire was activated to validate the events described in 1a-1b. However, because this knowledge cannot be used to validate 1a’-1b, it was not activated when subjects read that sentence pair.

### Intentionality

We are often able to predict people’s future actions by inferring their intentionality, i.e. their goals. For example, when we see a man walking over to a chair, we assume that he wants to sit, especially when he has been standing for a long time. Thus, we might generate the inference “He is going to sit.” Keefe and McDaniel (1993) presented subjects with sentences like After standing through the 3-hr debate, the tired speaker walked over to his chair (and sat down) and then with probe words (e.g., sat, in this case). Subjects took about the same amount of time to name sat when the clause about the speaker sitting down was omitted and when it was included. Moreover, naming times were significantly faster in both of these conditions than in a control condition in which it was implied that the speaker remained standing.

### Protagonists and Objects

Comprehenders are quick to make inferences about protagonists, presumably in an attempt to construct a more complete situation model. Consider, for example, what happens after subjects read the sentence The electrician examined the light fitting. If the following sentence is She took out her screwdriver, their reading speed is slowed down compared with when the second sentence is He took out his screwdriver. This happens because she provides a mismatch with the stereotypical gender of an electrician, which the subjects apparently inferred while reading the first sentence (Carreiras, Garnham, Oakhill, & Cain, 1996).

Comprehenders also make inferences about the emotional states of characters.
For example, if we read a story about Paul, who wants his brother Luke to be good in baseball, the concept of “pride” becomes activated in our mind when we read
that Luke receives the Most Valuable Player Award (Gernsbacher, Goldsmith, & Robertson, 1992).
Thus, just as in real life, we make inferences about people’s emotions when we comprehend stories.


## Processing Frameworks

### Introduction

In the process of language and text comprehension, new information has to be integrated into the current situation model. This is achieved by a processing framework. There are various theories and insights on this process; most of them model only one or a few aspects of situation models and language comprehension.

A list of theories, insights and developments in language comprehension frameworks:

• An interactive model of comprehension (Kintsch & van Dijk, 1978)
• Early computational model (Miller & Kintsch, 1980)
• Construction-Integration Model (Kintsch, 1988)
• Structure-Building Framework (Gernsbacher, 1990)
• Capacity Constrained Reader Model (Just & Carpenter, 1992)
• Constructivist framework (Graesser, Singer, & Trabasso, 1994)
• Event-Indexing Model (Zwaan, Langston, & Graesser, 1995)
• Landscape Model (van den Broek, Risden, Fletcher, & Thurlow, 1996)
• Capacity-constrained construction-integration model (Goldman, Varma, & Coté, 1996)
• The Immersed Experiencer Framework (Zwaan, 2003)

In this part of the chapter on Situation Models we will discuss several of these models, beginning with Kintsch's early work in the 1970s and 1980s and then moving on to the later research that builds on it.

### An Interactive Model of Comprehension

This model, developed in the late 1970s, is the basis for many later models such as the Construction-Integration Model and even the Immersed Experiencer Framework. According to Kintsch and van Dijk (1978), text comprehension proceeds in cycles. In every cycle a few propositions are processed; their number is determined by the capacity of short-term memory, i.e. seven plus or minus two. In every cycle the new propositions are connected to the existing ones, so that they form a connected, hierarchical set.

### Early Computational Model

This computational model by Miller and Kintsch tried to implement earlier theories of comprehension, to derive predictions from them and to compare those predictions with behavioural studies and experiments. It consisted of several modules. One was a chunking program: its task was to read in one word at a time, identify whether it formed a proposition, and decide whether to integrate it. This part of the model was not implemented computationally. The next part in the input order was the Microstructure Coherence Program (MCP). The MCP sorted the propositions and stored them in the Working Memory Coherence Graph, whose task was then to decide which propositions should be kept active during the next processing cycle. All propositions are stored in the Long-Term Memory Coherence Graph, which decides which propositions should be transferred back into working memory, or can construct a whole new working memory graph with a different superordinate node. The problem with this computational model was its very low performance, but it nevertheless led to further research that tried to overcome its shortcomings.
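The cyclical buffer idea behind these models can be sketched in a few lines. This is only an illustrative toy: the chunk size, buffer size and the simple recency rule below are assumptions for the example, not the parameters or selection strategy of the original model.

```python
# Toy sketch of cyclical processing with a limited working-memory buffer,
# in the spirit of Kintsch & van Dijk / Miller & Kintsch. All parameters
# and the recency-based selection rule are illustrative assumptions.

BUFFER_SIZE = 4  # propositions kept active between cycles (assumed value)

def process_cycles(propositions, chunk_size=3):
    """Process propositions in cycles, carrying a small active buffer."""
    long_term_graph = []   # every processed proposition ends up here
    buffer = []            # propositions kept active for the next cycle
    for i in range(0, len(propositions), chunk_size):
        chunk = propositions[i:i + chunk_size]
        active = buffer + chunk          # connect new input to what is active
        long_term_graph.extend(chunk)
        # keep only the most recent propositions active (a crude recency rule)
        buffer = active[-BUFFER_SIZE:]
    return long_term_graph, buffer

props = ["P1", "P2", "P3", "P4", "P5", "P6", "P7"]
ltm, wm = process_cycles(props)
# All propositions reach the long-term graph; only the last few stay active.
```

The point of the sketch is the bottleneck: in each cycle the new chunk can only be connected to whatever the small buffer has kept active, which is why the choice of what to retain matters so much in these models.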

### Event-Indexing Model

The Event-Indexing Model was first proposed by Zwaan, Langston and Graesser (1995). It makes claims about how incoming information is processed during comprehension and how it is represented in long-term memory.

According to the Event-Indexing Model, all incoming events are split along five indexes. The five indexes are the same as the five situational dimensions, though Zwaan and Radvansky (1998) claim that there are possibly more dimensions, which might be found in future research. One basic point of this model is the processing time needed to integrate new events into the current model: it is easier to integrate a new incoming event if it shares indexes with a previous event. The more contiguous the new event is, the more easily it is integrated into the situation model. This prediction by Zwaan and Radvansky (1998) is supported by prior research (Zwaan, Magliano, & Graesser, 1995). The other important point of the Event-Indexing Model concerns the representation in long-term memory. Zwaan and Radvansky (1998) predict that this representation is a network of nodes which encode the events. The nodes are linked with each other through situational links according to the indexes they share; a link does not only encode that two nodes share indexes, but also, through its strength, the number of shared indexes. This second point already hints at what the Event-Indexing Model lacks. There are several things it does not include: for example, it encodes neither the temporal order of the events nor the direction of the causal relationships. The biggest disadvantage of the Event-Indexing Model is clearly that it treats the different dimensions as separate entities, even though they probably interact with each other.
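The long-term memory network just described can be made concrete with a minimal sketch: events as nodes, and link strength defined as the number of situational indexes two events share. The event descriptions and dimension values below are invented examples, not data from the literature.

```python
# Minimal sketch of the Event-Indexing Model's long-term memory network:
# link strength between two events = number of shared situational indexes.
# Events are plain dicts; the example values are invented.

DIMENSIONS = ("time", "space", "protagonist", "causation", "intentionality")

def link_strength(event_a, event_b):
    """Count the situational dimensions on which both events are
    specified and match."""
    return sum(1 for d in DIMENSIONS
               if d in event_a and d in event_b and event_a[d] == event_b[d])

e1 = {"time": "t1", "space": "kitchen", "protagonist": "Mary"}
e2 = {"time": "t1", "space": "kitchen", "protagonist": "John"}
e3 = {"time": "t2", "space": "garden",  "protagonist": "Mary"}

# e1 and e2 share time and space (strength 2);
# e1 and e3 share only the protagonist (strength 1).
```

Note how this encoding already exhibits the model's limitation mentioned above: the link carries a count but no temporal order and no causal direction.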

Zwaan and Radvansky (1998) updated the Event-Indexing Model with some additional features. The new model splits the processed information into three types: the situational framework, the situational relations and the situational content. The situational framework grounds the situation in space and time, and its construction is obligatory; if no information is given, the framework is probably built up from standard values retrieved from prior world knowledge, or an empty variable is instantiated. The situational relations are based on the five situational dimensions, which are analysed through the Event-Indexing Model. This kind of situational information comprises not the basic information given in the situational framework, but the relationships between the different entities or nodes in the network. In contrast to the situational framework, the situational relations are not obligatory: if no information is given and no inferences between entities are possible, there is simply no relationship there. There is also an index that assigns importance to the different relations. This importance depends on how necessary the information is for understanding the situation, how easily it could be inferred if it were not mentioned, and how easily it can later be remembered. Another distinction this theory makes is the one between functional and non-functional relations (Carlson-Radvansky & Radvansky, 1996; Garrod & Sanford, 1989): functional relations describe the interaction between different entities, whereas non-functional relations hold between non-interacting entities. The situational content consists of the entities in the situation, such as protagonists and objects, and their properties. Like situational relations, these are only integrated explicitly into the situation model if they are necessary for understanding the situation; nonetheless, the central and most important entities and their properties are again obligatory. It is proposed that, in order to keep processing time low, non-essential information is represented only by something like a pointer, so that it can be retrieved if necessary.

### The Immersed Experiencer Framework

The Immersed Experiencer Framework (IEF) is based on prior processing framework models (see the list above) but tries to include several other research findings as well. For example, it was found that during comprehension brain regions are activated which are very close to, or even overlap with, the regions active during the perception of, or action on, a word's referent (Isenberg et al., 2000; Martin & Chao, 2001; Pulvermüller, 1999, 2002). During comprehension there is also a visual representation of the shape and orientation of objects (Dahan & Tanenhaus, 2002; Stanfield & Zwaan, 2002; Zwaan et al., 2002; Zwaan & Yaxley, in press a, b). Visual-spatial information primes sentence processing (Boroditsky, 2000), and these visual representations can interfere with comprehension (Fincher-Kiefer, 2001). Several findings (Glenberg, Meyer, & Lindem, 1987; Kaup & Zwaan, in press; Morrow et al., 1987; Horton & Rapp, in press; Trabasso & Suh, 1993; Zwaan et al., 2000) suggest that information which is part of the situation and the text is more active in the reader's mind than information which is not included. A fourth finding is that people move their eyes and hands during comprehension in a way consistent with the described situation (Glenberg & Kaschak, in press; Klatzky et al., 1989; Spivey et al., 2000).

The main point of the Immersed Experiencer Framework is the idea that words activate experiences with their referents. For example, "an eagle in the sky" activates a visual experience of an eagle with outstretched wings, while "an eagle in the nest" activates a different visual experience. According to Zwaan (2003), the IEF should be seen as an engine for making predictions about language comprehension, which are then suggested for further research.

According to the IEF, the process of language comprehension consists of three components: activation, construal and integration. Each component works at a different level: activation works at the word level, construal at the clause level, and integration at the discourse level. Though the IEF shares many points with earlier models of language comprehension, it differs on some main points. For example, it suggests that language comprehension involves action and perceptual representations rather than amodal propositions (Zwaan, 2003).

## Levels of Representation in Language and Text Comprehension

Many theories try to explain the situation model, or so-called mental model, in terms of different representations. Several of them deal with the path from the text to the situation model itself: how many levels are included or needed, and how is the situation model constructed? Is it done in a single step, like:

Sentence → Situation Model

Or are there levels in between which have to be passed before the model is constructed? Below, three different accounts are presented that try to explain the construction of the situation model from a text.

#### Propositional Representation

Propositional representation claims that a sentence is restructured into another form and then stored; the information it contains does not get lost. Consider the simple sentence:

For “George loves Sally”, the propositional representation is [LOVES(GEORGE, SALLY)].

It is easy to see that the propositional representation is easy to create and that the information is still available.
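A proposition like [LOVES(GEORGE, SALLY)] can be sketched directly as a predicate with an ordered argument list. The nested-tuple encoding below is one illustrative choice, not a claim about the format assumed in the literature.

```python
# Sketch of a propositional representation: a predicate plus its
# arguments, stored as a nested tuple. The encoding is illustrative.

def proposition(predicate, *arguments):
    """Build a proposition such as [LOVES(GEORGE, SALLY)]."""
    return (predicate, arguments)

p = proposition("LOVES", "GEORGE", "SALLY")

# No information from the sentence is lost: the predicate and the
# argument order (who loves whom) remain fully recoverable.
predicate, args = p
```

Because the arguments are ordered, "George loves Sally" and "Sally loves George" yield distinct propositions, which is exactly the property the text attributes to this representation.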

#### Three levels of representation

Fletcher (1994); van Dijk & Kintsch (1983); Zwaan & Radvansky (1998)

This theory says that there exist three levels of representation: the surface form, the text base and the situation model. In this example, the sentence “The frog ate the bug.” is already the surface form. We naturally create semantic relations to understand the sentence (semantic tree in the figure). The next level is the text base. [EAT(FROG, BUG)] is the propositional representation, and the text base is close to this kind of representation, except that it is rather spatial. Finally, the situation model is constructed from the text-base representation. Note that the situation model does not include any kind of text; it is a mental picture of the information in the sentence itself.

#### Two levels of representation

Frank, Koppen, Noordman, & Vonk (to appear); Zwaan (2004)

This theory resembles the three-levels theory, but the text base level is left out: it claims that the situation model is created directly from the sentence itself, with no text base level needed.

There are also situation-model theories that address direct experience. Text comprehension is thus not the only process handled by situation models; learning through direct experience is handled by situation models, too.

#### KIWi-Model

A unified model by Schmalhofer

One unified model, the so-called KIWi-Model, tries to explain how text representation and direct experience interact with a situation model. Additionally, domain knowledge is integrated: it is used in forming a situation model in different tasks, such as simple sentence comprehension (see the section “Why do we need Situation Models”). The KIWi-Model shows that there is a permanent interaction between text representation and situation model, and between sensory encoding and situation model. These interactions support the theory of a permanent updating of the mental model.

## Inferencing

Inferencing is used to build up complex situation models from limited information. For example, in 1973 John Bransford and Marcia Johnson conducted a memory experiment in which two groups read variations of the same text.

The first group read the text "John was trying to fix the bird house. He was pounding the nail when his father came out to watch him do the work"

The second group read the text "John was trying to fix the bird house. He was looking for the nail when his father came out to watch him do the work"

After reading, test statements were presented to the participants. These statements contained the word hammer, which did not occur in the original sentences, e.g.: "John was using a hammer to fix the birdhouse. He was looking for the nail when his father came out to watch him". Participants of the first group reported having seen 57% of the test statements, while participants of the second group reported having seen only 20%.

As one can see, the first group showed a tendency to believe they had seen the word hammer: its participants made the inference that John used a hammer to pound the nail. This memory test is a good example of what is meant by making inferences and how they are used to complete situation models.

While reading a text, inferencing creates information which is not explicitly stated in the text; hence it is a creative process. It is very important for text understanding in general, because texts cannot include all the information needed to understand a story. Texts usually leave out what is known as world knowledge: knowledge about situations, persons or items that most people share and that therefore does not need to be stated explicitly. Each person should be able to infer this kind of information, for example that we usually use hammers to pound nails. It would be impossible to write a text if it had to include all the information it deals with, that is, if there were no such thing as inferencing, or if our brain did not perform it automatically.

There are a number of different kinds of inferences:

### Anaphoric Inference

This kind of inference usually connects objects or persons from one sentence to another; it is therefore responsible for connecting cross-sentence information. E.g. in "John hit the nail. He was proud of his stroke", we directly infer that "he" and "his" refer to "John". We normally make this kind of inference quite easily, but there can be sentences in which several persons and the words referring to them are mixed up, so that people at first have problems understanding the story. This is normally regarded as bad writing style.

### Instrumental Inference

This type of inference concerns the tools and methods used in the text, like the hammer in the example above. For example, if you read about somebody flying to New York, you would not infer that this person built a hang-glider and jumped off a cliff, but that he or she used a plane, since nothing else is mentioned in the text and a plane is the most common way of flying to New York. If there is no specific information about tools, instruments and methods, we get this information from our general world knowledge.

### Causal Inference

A causal inference is the conclusion that one event in the text caused another, as in "He hit his nail. So his finger ached": the first sentence gives the reason why the situation described in the second came about. It would be more difficult to draw a causal inference in an example like "He hit his nail. So his father ran away", although with some imagination one could construct an inference even here.

Causal inferences create causal connections between text elements. These connections are divided into local and global connections. Local connections are made within a range of one to three sentences; this range depends on factors like the capacity of working memory and the reader's concentration during reading. Global connections are drawn between the information in one sentence and the background information gathered so far about the whole text. Problems with causal inferences can occur when a story is inconsistent; for example, a vegan eating steak would be inconsistent. An interesting fact about causal inferences (Goldstein, 2005) is that inferences which are not easily drawn at first are later easier to remember. This may be because they required a higher mental processing capacity when the inference was drawn, so these "not-so-easy" inferences seem to be marked in a way that makes them easier to remember.

### Predictive / Forward Inference

Predictive (forward) inferences use the reader's general world knowledge to build predictions about the consequences of what is currently happening in the story into the situation model.

### Integrating Inferences into Situation Models

The question of how models enter inferential processes is highly controversial in both cognitive psychology and artificial intelligence. AI has given deep insight into psychological processes, and since the two disciplines crossed paths they have formed two of the main pillars of cognitive science. The arguments within them are largely independent of each other, although they have much in common.

Johnson-Laird (1983) distinguishes three types of reasoning theories in which inferencing plays an important role. The first class is geared to logical calculi and has been implemented in many formal systems. The programming language Prolog arises from this way of dealing with reasoning, and in psychology many theories postulate formal rules of inference, a "mental logic." These rules work in a purely syntactic way and are thus "context free," blind to the context of their content. A simple example clarifies the problem with this type of theory:

    If patients have cystitis, then they are given penicillin.


and the logical conclusion:

    If patients have cystitis and are allergic to penicillin, then they are given penicillin.


This is logically correct, but it violates our common sense.
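A toy forward-chainer makes the "context free" problem concrete: because rule matching is purely syntactic, the allergy fact does nothing to block the conclusion. The fact and rule names below are invented for the example; this is a sketch of syntactic rule application, not of any particular mental-logic theory.

```python
# Sketch of purely syntactic inference: rules fire whenever their
# conditions are a subset of the known facts, regardless of content.
# Fact and rule names are invented for this example.

def forward_chain(facts, rules):
    """Repeatedly apply rules whose conditions all hold; return
    everything derivable."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= derived and conclusion not in derived:
                derived.add(conclusion)
                changed = True
    return derived

rules = [({"has_cystitis"}, "give_penicillin")]
facts = {"has_cystitis", "allergic_to_penicillin"}
result = forward_chain(facts, rules)
# The rule fires regardless of the allergy fact: "give_penicillin"
# is derived, mirroring the counterintuitive conclusion in the text.
```

Nothing in the rule's syntax refers to the allergy, so the machinery cannot take it into account; that blindness to content is exactly the criticism raised above.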

The second class of theories postulates content-specific rules of inference. Their origin lies in programming languages and production systems. They work with forms like "If x is a, then x is b": if one wants to show that x is b, showing that x is a becomes a sub-goal of the argumentation. The idea of basing psychological theories of reasoning on content-specific rules was discussed by Johnson-Laird and Wason, and various sorts of such theories have been proposed. A related idea is that reasoning depends on the accumulation of specific examples within a connectionist framework, where the distinction between inference and recall is blurred.
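The sub-goal mechanism of this second class can be sketched as simple backward chaining: to prove "x is b", the system tries to prove "x is a" first. The rules and facts below are invented for illustration.

```python
# Sketch of backward chaining over content-specific rules of the form
# "if x is a, then x is b". Rules and facts are invented examples.

def prove(goal, facts, rules):
    """A goal holds if it is a known fact, or if some rule concludes it
    and every condition of that rule can itself be proved (sub-goals)."""
    if goal in facts:
        return True
    for conditions, conclusion in rules:
        if conclusion == goal and all(prove(c, facts, rules)
                                      for c in conditions):
            return True
    return False

rules = [
    (["is_bird"], "has_feathers"),    # if x is a bird, x has feathers
    (["has_feathers"], "can_moult"),  # if x has feathers, x can moult
]
facts = {"is_bird"}
# Proving "can_moult" spawns the sub-goal "has_feathers",
# which in turn spawns "is_bird", which is a known fact.
```

This is the control strategy that Prolog-style production systems use: the conclusion is matched first, and the rule's conditions become the new goals.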

The third class of theories is based on mental models and does not use any rules of inference. Mental models of things heard or read are built and permanently updated: an existing model is extended with the features of new information as long as that information does not conflict with the model. If a conflict arises, the model is generally rebuilt so that the conflicting information fits into the new model.

## Important Topics of current research

### Linguistic Cues versus World Knowledge

According to many researchers, language is a set of processing instructions on how to build up a situation model of the represented situation (Gernsbacher, 1990; Givon, 1992; Kintsch, 1992; Zwaan & Radvansky, 1998). As mentioned, readers use lexical cues and information to connect the different situational dimensions and integrate them into the model. Another important factor is prior world knowledge, which also influences how the different pieces of information in a situation model are related. The relation between linguistic cues and world knowledge is therefore an important topic of current and future research on situation models.

### Multidimensionality

Another important aspect of current research on situation models is their multidimensionality. The main question is how the different dimensions relate to each other: whether they interact at all, which of them interact, and how they influence one another. Most studies in the field have addressed only one or a few of the situational dimensions.

## References

Ashwin Ram, et al. (1999) Understanding Language Understanding - chapter 5

Baggett, P. (1979). Structurally equivalent stories in movie and text and the effect of the medium on recall. Journal of Verbal Learning and Verbal Behavior, 18, 333-356.

Bertram F. Malle, et al. (2001) Intentions and Intentionality - chapter 9

Boroditsky, L. (2000). Metaphoric Structuring: Understanding time through spatial metaphors. Cognition, 75, 1-28.

Carlson-Radvansky, L. A., & Radvansky, G. A. (1996). The influence of functional relations on spatial term selection. Psychological Science, 7, 56-60.

Carreiras, M., et al. (1996). The use of stereotypical gender information in constructing a mental model: Evidence from English and Spanish. Quarterly Journal of Experimental Psychology, 49A, 639-663.

Dahan, D., & Tanenhaus, M.K. (2002). Activation of conceptual representations during spoken word recognition. Abstracts of the Psychonomic Society, 7, 14.

Ericsson, K. A., & Kintsch, W. (1995). Long-term working memory. Psychological Review, 102, 211-245.

Farah, M. J., & McClelland, J. L. (1991). A computational model of semantic memory impairment: modality specificity and emergent category specificity. Journal of Experimental Psychology: General, 210, 339-357.

Fincher-Kiefer (2001). Perceptual components of situation models. Memory & Cognition, 29 , 336-343.

Fincher-Kiefer, R., et al. (1988). On the role of prior knowledge and task demands in the processing of text. Journal of Memory and Language, 27, 416-428.

Garrod, S. C., & Sanford, A. J. (1989). Discourse models as interfaces between language and the spatial world. Journal of Semantics, 6, 147-160.

Gernsbacher, M.A. (1990), Language comprehension as structure building. Hillsdale, NJ: Erlbaum.

Glenberg, A. M., & Kaschak, M. P. (2002). Grounding language in action. Psychonomic Bulletin & Review, 9, 558-565.

Glenberg, A. M., et al. (1987) Mental models contribute to foregrounding during text comprehension. Journal of Memory and Language 26:69-83.

Givon, T. (1992), The grammar of referential coherence as mental processing instructions, Linguistics, 30, 5-55.

Goldman, S.R., et al. (1996). Extending capacity-constrained construction integration: Towards "smarter" and flexible models of text comprehension. Models of understanding text (pp. 73–113).

Goldstein, E.Bruce, Cognitive Psychology, Connecting Mind, Research, and Everyday Experience (2005) - ISBN 0-534-57732-6.

Graesser, A. C., Singer, M., & Trabasso, T. (1994), Constructing inferences during narrative text comprehension. Psychological Review, 101, 371-395.

Holland, John H. , et al. (1986) Induction.

Horton, W.S., Rapp, D.N. (in press). Occlusion and the Accessibility of Information in Narrative Comprehension. Psychonomic Bulletin & Review.

Isenberg, N., et al. (1999). Linguistic threat activates the human amygdala. Proceedings of the National Academy of Sciences, 96, 10456-10459.

Johnson-Laird, P. N. (1983). Mental models: Towards a cognitive science of language, inference, and consciousness. Cambridge, MA: Harvard University Press.

John R. Koza, et al. (1996) Genetic Programming

Just, M. A., & Carpenter, P. A. (1992). A capacity hypothesis of comprehension: Individual differences in working memory. Psychological Review, 99, 122-149.

Kaup, B., & Zwaan, R.A. (in press). Effects of negation and situational presence on the accessibility of text information. Journal of Experimental Psychology: Learning, Memory, and Cognition.

Keefe, D. E., & McDaniel, M. A. (1993). The time course and durability of predictive inferences. Journal of Memory and Language, 32, 446-463.

Kintsch, W. (1988), The role of knowledge in discourse comprehension: A construction-integration model, Psychological Review, 95, 163-182.

Kintsch, W., & van Dijk, T. A. (1978), Toward a model of text comprehension and production, Psychological Review, 85, 363-394.

Kintsch, W. (1992), How readers construct situation models for stories: The role of syntactic cues and causal inferences. In A. E Healy, S. M. Kosslyn, & R. M. Shiffrin (Eds.), From learning processes to cognitive processes. Essays in honor of William K. Estes (Vol. 2, pp. 261 – 278).

Klatzky, R.L., et al. (1989). Can you squeeze a tomato? The role of motor representations in semantic sensibility judgments. Journal of Memory and Language, 28, 56-77.

Martin, A., & Chao, L. L. (2001). Semantic memory and the brain: structure and processes. Current Opinion in Neurobiology, 11, 194-201.

McRae, K., et al. (1997). On the nature and scope of featural representations of word meaning. Journal of Experimental Psychology: General, 126, 99-130.

Mehler, Jacques, & Franck, Susana. (1995) Cognition on Cognition - chapter 9

Miceli, G., et al. (2001). The dissociation of color from form and function knowledge. Nature Neuroscience, 4, 662-667.

Morrow, D., et al. (1987). Accessibility and situation models in narrative comprehension. Journal of Memory and Language, 26, 165-187.

Pulvermüller, F. (1999). Words in the brain's language. Behavioral and Brain Sciences, 22, 253-270.

Pulvermüller, F. (2002). A brain perspective on language mechanisms: from discrete neuronal ensembles to serial order. Progress in Neurobiology, 67, 85–111.

Radvansky, G. A., & Zwaan, R.A. (1998). Situation models.

Schmalhofer, F., MacDaniel, D. Keefe (2002). A Unified Model for Predictive and Bridging Inferences

Schneider, W., & Körkel, J. (1989). The knowledge base and text recall: Evidence from a short-term longitudinal study. Contemporary Educational Psychology, 14, 382-393.

Singer, M., et al. (1992). Validation of causal bridging inferences. Journal of Memory and Language, 31, 507-524.

Spivey, M.J., et al. (2000). Eye movements during comprehension of spoken scene descriptions. Proceedings of the Twenty-second Annual Meeting of the Cognitive Science Society (pp. 487–492).

Stanfield, R.A. & Zwaan, R.A. (2001). The effect of implied orientation derived from verbal context on picture recognition. Psychological Science, 12, 153-156.

Talmy, Leonard,(2000) Toward a Cognitive Semantics - Vol. 1 - chapter1

van den Broek, P., et al. (1996). A "landscape" view of reading: Fluctuating patterns of activation and the construction of a memory representation. In B. K. Britton & A. C. Graesser (Eds.), Models of understanding text (pp. 165–187).

Van Dijk, T. A., and W. Kintsch. (1983).Strategies of discourse comprehension.

Yekovich, F.R., et al. (1990). The influence of domain knowledge on inferencing in low-aptitude individuals. In A. C. Graesser & G. H. Bower (Eds.), The psychology of learning and motivation (Vol. 25, pp. 175–196). New York: Academic Press.

Zwaan, R.A. (1996). Processing narrative time shifts. Journal of Experimental Psychology: Learning, Memory and Cognition, 22, 1196-1207

Zwaan, R.A. (2003). The Immersed Experiencer: Toward an embodied theory of language comprehension. In B.H. Ross (Ed.), The Psychology of Learning and Motivation, Vol. 44. New York: Academic Press.

Zwaan, R. A., et al. (1998). Situation-model construction during translation. Manuscript in preparation, Florida State University.

Zwaan, R. A., et al. (1995). The construction of situation models in narrative comprehension: An event-indexing model. Psychological Science, 6, 292-297.

Zwaan, R. A., et al. (1995). Dimensions of situation model construction in narrative comprehension. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21, 386-397.

Zwaan, R. A., Radvansky (1998), Situation Models in Language Comprehension and Memory. in Psychological Bulletin, Vol.123,No2 p. 162-185.

Zwaan, R.A., et al. (2002). Do language comprehenders routinely represent the shapes of objects? Psychological Science, 13, 168-171.

Zwaan, R.A., & Yaxley, R.H. (a). Spatial iconicity affects semantic-relatedness judgments. Psychonomic Bulletin & Review.

Zwaan, R.A., & Yaxley, R.H. (b). Hemispheric differences in semantic-relatedness judgments. Cognition.

# Knowledge Representation and Hemispheric Specialisation

## Introduction

Most human cognitive abilities rely on or interact with what we call knowledge. How do people navigate through the world? How do they solve problems, how do they comprehend their surroundings and on which basis do people make decisions and draw inferences? For all these questions, knowledge, the mental representation of the world is part of the answer.

What is knowledge? According to Merriam-Webster's online dictionary, knowledge is “the range of one’s information and understanding” and “the circumstance or condition of apprehending truth or fact through reasoning”. Thus, knowledge is a structured collection of information that can be acquired through learning, perception or reasoning.

This chapter deals with the structures, both in human brains and in computational models, that represent knowledge about the world. First, the idea of concepts and categories as a model for storing and sorting information is introduced; then semantic networks are presented and, closely related to these ideas, an attempt is made to explain the way humans store and handle information. Apart from the biological aspect, we will also talk about knowledge representation in artificial systems, which can be helpful tools for storing and accessing knowledge and for drawing quick inferences.

After looking at how knowledge is stored and made available in the human brain and in artificial systems, we will take a closer look at the human brain with regard to hemispheric specialisation. This topic is connected not only to knowledge representation, since the two hemispheres differ in which type of knowledge each of them stores, but also to many other chapters of this book. Where, for example, is memory located, and which parts of the brain are relevant for emotions and motivation? In this chapter we focus on the general differences between the right and the left hemisphere. We consider the question of whether they differ in what and how they process information, and give an overview of experiments that contributed to the scientific progress in this field.

## Knowledge Representation in the Brain

### Concepts and Categories

Concepts are mental representations that are essential for many cognitive functions, including memory, reasoning and using/understanding language. One function of concepts that has been studied intensely is the categorisation of knowledge. In the course of this chapter, we will focus on this function of concepts.

Imagine you woke up every single morning and started wondering about all the things you had never seen before. Think about how you would feel if an unknown car were parked in front of your house: you have seen thousands of cars, but you have never seen this specific car in this particular position. Since we are nevertheless able to make sense of it, the questions we need to ask ourselves are: How are we able to abstract from prior knowledge, and why do we not start all over again when confronted with a slightly new situation? The answer is simple: we categorise knowledge. Categorisation is the process by which things are placed into groups called categories.

Categories are so-called “pointers of knowledge”. You can imagine a category as a box in which similar objects are grouped and which is labeled with common properties and other general information about the category. Our brain does not only memorise specific examples of members of a category, but also stores general information that all members have in common and which therefore defines the category. Coming back to the car example, this means that our brain does not only store what your car, your neighbours’ and your friends’ cars look like, but also provides us with the general information that most cars have four wheels, need to be fueled and so on. Because categorisation immediately allows us to get a general picture of a scene by letting us recognise new objects as members of a category, it saves us much time and energy that we otherwise would have to spend investigating new objects. It helps us to focus on the important details in our environment, and enables us to draw the correct inferences. To make this obvious, imagine yourself standing at the side of a road, wanting to cross it. A car approaches from the left. Now, the only thing you need to know about this car is the general information provided by the category: that it will run you over if you don't wait until it has passed. You don't need to care about the car's colour, number of doors and so on. If you were not able to immediately assign the car to the category "car", and infer the necessity to step back, you would get hit because you would still be busy examining the details of that specific and unknown car. Categorisation has therefore proved very helpful for survival during evolution, and it allows us to navigate quickly and efficiently through our environment.

#### Definitional Approach

Take a look at the following picture! You will see four different kinds of cars. They differ in shape, colour and other features; nonetheless you are probably sure that they are all cars.

What makes us so convinced about the identity of these objects? Maybe we can try to find a definition which describes all these cars. Do all of them have four wheels? No, there are some which have only three. Do all cars run on petrol? No, that is not true for all cars either. Apparently we will fail to come up with a definition. The reason for this failure is that a definition has to generalise. That would perhaps work for geometrical objects, but obviously not for natural things: members of one category do not share completely identical features, which is why it is problematic to find an appropriate definition. There are, however, similarities between members of one category, so what about this resemblance? The philosopher Ludwig Wittgenstein asked himself this question and proposed a solution: the idea of family resemblance. It means that members of a category resemble each other in several ways. For example, cars differ in shape, colour and many other properties, but every car somehow resembles other cars. The following two approaches determine categories by similarity.

#### Prototype Approach

The prototype approach was proposed by Rosch in 1973. A prototype is an average of all members of a particular category; it is not an actual, existing member of the category. Even highly variable features of members within one category can be explained by this approach. Differences among category members are represented by different degrees of prototypicality: members which resemble the prototype very strongly are highly prototypical, while members which differ from the prototype in many ways are low in prototypicality. There seem to be connections to the idea of family resemblance, and indeed some experiments showed that high prototypicality and high family resemblance are strongly connected. The typicality effect describes the fact that highly prototypical members are recognised faster as members of a category. For example, participants had to decide whether statements like “A penguin is a bird.” or “A sparrow is a bird.” are true. Their decisions were much faster for “sparrow”, a highly prototypical member of the category “bird”, than for an atypical member such as “penguin”. Participants also tend to prefer prototypical members of a category when asked to list objects of that category; concerning the birds example, they rather list “sparrow” than “penguin”, which is a quite intuitive result. In addition, highly prototypical objects are strongly affected by priming.
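The prototype idea can be made concrete with a small sketch. In the toy example below (not from the source), a prototype is computed as the average of the feature vectors of a category's members, and a new object is assigned to the category with the nearest prototype; the feature dimensions, numbers and category names are invented for illustration.

```python
# Sketch of the prototype approach (illustrative values): each category
# is represented by the average of its members' feature vectors, and a
# new object is assigned to the category with the nearest prototype.

def prototype(members):
    """Average the feature vectors of all category members."""
    n = len(members)
    return [sum(v[i] for v in members) / n for i in range(len(members[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def classify(obj, prototypes):
    """Assign obj to the category whose prototype is closest."""
    return min(prototypes, key=lambda cat: distance(obj, prototypes[cat]))

# Hypothetical feature dimensions: [can fly, typical size, sings]
birds = [[1.0, 0.2, 1.0], [1.0, 0.3, 0.8]]   # sparrow-like members
fish = [[0.0, 0.4, 0.0], [0.0, 0.5, 0.0]]
protos = {"bird": prototype(birds), "fish": prototype(fish)}

print(classify([1.0, 0.25, 0.9], protos))  # a sparrow-like object -> "bird"
```

Note that the prototype `[1.0, 0.25, 0.9]` computed for "bird" is itself not identical to any stored member, matching the claim above that the prototype need not be an actually existing member of the category.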

#### Exemplar Approach

The typicality effect can also be explained by a third approach, which is concerned with exemplars. Similar to a prototype, an exemplar is a very typical member of a category. The difference between exemplars and prototypes is that exemplars are actually existing members of a category that a person has encountered in the past. This approach still involves comparing an object to a standard, only that the standard here consists of many examples, each called an exemplar, rather than a single average.

Again we can show the typicality effect: objects that are similar to many examples we have encountered are classified faster than objects which are similar to few examples. You have seen a sparrow more often in your life than a penguin, so you should recognise the sparrow faster.
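The contrast with the prototype approach can be sketched in code as well (all feature values hypothetical): instead of one stored average, every encountered member is kept, and a new object goes to the category whose stored exemplars it resembles most in total. The exponentially decaying similarity function is a common modelling choice, not a claim from the source.

```python
# Sketch of the exemplar approach (illustrative values): every
# encountered member is stored, and a new object is compared against
# all stored exemplars; the summed similarity decides the category.

import math

def similarity(a, b):
    """Similarity decays exponentially with distance (a common choice)."""
    d = math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return math.exp(-d)

def classify(obj, exemplars):
    """exemplars: {category: [feature vectors of encountered members]}"""
    scores = {cat: sum(similarity(obj, e) for e in members)
              for cat, members in exemplars.items()}
    return max(scores, key=scores.get)

# Hypothetical encounters: many sparrow sightings, one penguin sighting.
exemplars = {
    "bird": [[1.0, 0.2], [1.0, 0.3], [0.9, 0.25], [0.0, 0.9]],  # last: penguin
    "fish": [[0.0, 0.5], [0.1, 0.6]],
}
print(classify([0.95, 0.25], exemplars))  # sparrow-like object -> "bird"
```

Because the sparrow-like object matches many stored exemplars while a penguin-like one matches only a single stored sighting, the summed score mirrors the typicality effect described above.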

For both the prototype and the exemplar approach there are experiments whose results support one or the other. Some researchers argue that the exemplar approach has fewer problems with variable categories and with atypical cases within categories. The category “games”, for example, is quite difficult to capture with the prototype approach: how would one find an average case for all games, like football, golf and chess? The reason could be that “real” category members are used and all the information about individual exemplars, which can be useful when encountering other members later, is stored. Another point of comparison is how well the approaches work for differently sized categories: the exemplar approach seems to work better for smaller categories, while prototypes do better for larger ones.

Some researchers concluded that people may use both approaches: when we initially learn about a category, we average the exemplars we have seen into a prototype; taking a category's exceptions into account this early in learning would be counterproductive. As we get to know some of these exemplars in more detail, the exemplar information becomes strengthened.

“We know generally what cats are (the prototype), but we know specifically our own cat the best (an exemplar).” (Minda & Smith, 2001)

#### Hierarchical Organization of Categories

Now that we know about the different approaches of how we go about forming categories, let us look at the structure of a category and the relationship between categories. The basic idea is that larger categories can be split up into more specific and smaller ones.

Rosch stated that by this process three levels of categorization are created:

- the superordinate level (e.g. “animal”)
- the basic level (e.g. “dog”)
- the subordinate level (e.g. “retriever”)

It is interesting that the loss of information from the basic level up to the superordinate level is quite large, while the gain of information from the basic level down to the subordinate level is rather small. Scientists wanted to find out whether one of these levels is preferred over the others. They asked participants to name presented objects as quickly as possible, and the subjects tended to use the basic-level name, which carries the optimal amount of stored information. A picture of a retriever would therefore be named “dog” rather than “animal” or “retriever”. It is important to note that the levels differ between persons, depending on factors such as expertise and culture.

One factor which influences our categorization is knowledge itself. Experts pay more attention to specific features of objects in their area than non-experts do. For example, when presented with pictures of birds, bird experts tend to use the subordinate name (blackbird, sparrow) while non-experts just say "bird". The basic level in an expert's area of interest is lower than the basic level of a layperson. Knowledge and experience therefore affect categorization.

Another factor is culture. Imagine a people living in close contact with their natural environment, who therefore have a greater knowledge about plants and the like than, for example, students in Germany. If you ask the latter what they see in nature, they use the basic-level term ‘tree’; if you give the same task to the people living closer to nature, they will tend to answer in terms of lower-level concepts such as ‘oak tree’.

#### Representation of Categories in the Brain

There is evidence that some areas in the brain are selective for different categories, but it is not very probable that there is a corresponding brain area for each category. Results of neurophysiological research point to a kind of double dissociation for living and non-living things: evidence has been found in fMRI studies that they are indeed represented in different brain areas. It is important to note that there is nevertheless much overlap between the brain areas activated by different categories. Moreover, at the level of single cells there is a connection to mental categories as well. There seem to exist neurons which respond better to objects of a particular category, so-called “category-specific neurons”. These neurons fire not in response to one object only, but to many objects within one category. This leads to the idea that many neurons probably fire when a person recognises a particular object, and that these combined patterns of firing neurons may represent the object.

### Semantic Networks

The "Semantic Network approach" proposes that concepts of the mind are arranged in networks, in other words, in a functional storage system for the meanings of words. Of course, the concept of a semantic net is very flexible. In a graphical illustration of such a semantic net, concepts of our mental dictionary are represented by nodes, each of which in this way represents a piece of knowledge about our world.

The properties of a concept could be placed, or "stored", next to the node representing that concept. Links between the nodes indicate the relationship between the objects. The links can not only show that there is a relationship; they can also indicate the kind of relation, for example by their length.

Every concept in the net stands in a dynamic relation to other concepts, which may have prototypically similar characteristics or functions.

#### Collins and Quillian's Model

Semantic Network according to Collins and Quillian with nodes, links, concept names and properties.

One of the first scientists to think about structural models of human memory that could be run on a computer was Ross Quillian (1967). Together with Allan Collins, he developed a semantic network with related categories and a hierarchical organisation.

The picture on the right-hand side shows Collins and Quillian's network with properties added at each node. As already mentioned, the nodes are interconnected by links, and concept names are attached to the nodes. As in the section "Hierarchical Organisation of Categories", general concepts are at the top and more particular ones at the bottom. By looking at the concept "car", one gets the information that a car has four wheels, has an engine, has windows, and furthermore moves around, needs fuel and is man-made.

These pieces of information must be stored somewhere, and it would take too much space if every detail had to be stored at every level. So the general information about cars is stored at the basic-level node, and further information about specific cars, e.g. a BMW, is stored at the lower level, where you do not need the fact that the BMW also has four wheels if you already know that it is a car. This way of storing shared properties at a higher-level node is called cognitive economy.

In order not to produce redundancies, Collins and Quillian conceived of this as an inheritance principle for information: information that is shared by several concepts is stored in the highest parent node that contains it, so all child nodes below the information bearer can also access the information about these properties. However, there are exceptions. Sometimes a special car has not four wheels but three; this specific property is then stored at the child node itself.
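Cognitive economy and inheritance with exceptions can be sketched as a lookup that walks up the hierarchy; the node names and properties below are invented for illustration, and the three-wheeled "Reliant" stands in for the exceptional car just mentioned.

```python
# Sketch of cognitive economy in a Collins-and-Quillian-style network
# (illustrative): shared properties live once at the parent node; a
# lookup walks toward the root, and an exception stored at a child
# node overrides the inherited value.

network = {
    "vehicle": {"parent": None, "props": {"moves": True}},
    "car":     {"parent": "vehicle",
                "props": {"wheels": 4, "engine": True, "needs fuel": True}},
    "BMW":     {"parent": "car", "props": {"made in": "Germany"}},
    "Reliant": {"parent": "car", "props": {"wheels": 3}},  # exception node
}

def lookup(node, prop):
    """Walk from the node toward the root until the property is found."""
    while node is not None:
        if prop in network[node]["props"]:
            return network[node]["props"][prop]
        node = network[node]["parent"]
    return None

print(lookup("BMW", "wheels"))      # inherited from "car": 4
print(lookup("Reliant", "wheels"))  # exception stored locally: 3
print(lookup("BMW", "moves"))       # inherited from "vehicle": True
```

The "wheels" fact is stored only once at "car", yet is available at every child node, which is exactly the redundancy-saving the text describes.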

The logical structure of the network is convincing, since it predicts that the time needed to retrieve a concept correlates with distances in the network. This correlation was demonstrated with the sentence-verification technique: in experiments, participants had to answer statements about concepts with "yes" or "no", and it indeed took longer to say "yes" when the nodes bearing the concepts were further apart.

The phenomenon that adjacent concepts become activated is called spreading activation. Such concepts are far more easily accessed by memory; they are "primed". This was studied and supported by David Meyer and Roger Schvaneveldt (1971) with a lexical-decision task: participants had to decide whether pairs of letter strings were words or non-words, and they were faster at identifying real word pairs when the concepts behind the two words were close together in the assumed network.
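Spreading activation can be sketched as a breadth-first spread through the links, under the simplifying assumption that the passed-on activation halves with each link; the network fragment is invented for illustration.

```python
# Minimal sketch of spreading activation (illustrative network):
# activating one concept raises the activation of its neighbours,
# decaying with every link, so nearby concepts end up "primed".

from collections import deque

links = {
    "car": ["vehicle", "wheels", "fuel"],
    "vehicle": ["car", "truck", "bus"],
    "truck": ["vehicle"],
    "bus": ["vehicle"],
    "wheels": ["car"],
    "fuel": ["car"],
}

def spread(start, decay=0.5):
    """Breadth-first spread: each link halves the passed-on activation."""
    activation = {start: 1.0}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for neighbour in links[node]:
            if neighbour not in activation:
                activation[neighbour] = activation[node] * decay
                queue.append(neighbour)
    return activation

act = spread("car")
print(act["vehicle"])  # directly linked: 0.5
print(act["truck"])    # two links away: 0.25
```

Concepts with high residual activation would be recognised faster in a lexical-decision task, which is the priming effect described above.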

While the model can explain many findings, it has some flaws.

The typicality effect is one of them. It is known that "reaction times for more typical members of a category are faster than for less typical members" (MITECS). This contradicts the assumption of Collins and Quillian's model that the distance in the net is responsible for reaction time. It was also determined experimentally that some properties are stored directly at specific nodes, which calls cognitive economy into question. Furthermore, there are examples of faster concept retrieval even where the distances in the network are longer.

These points led to another version of the semantic network approach: the Collins and Loftus model.

#### Collins and Loftus Model

Collins and Loftus (1975) tried to resolve these problems by using shorter or longer links depending on relatedness, and by interconnecting concepts that were formerly not directly linked, to name only a few of the extensions. The former hierarchical structure was also replaced by a structure individual to each person. As shown in the picture on the right, the new model represents interpersonal differences, acquired during a human's lifespan, which manifest themselves in the layout and the varying lengths of the links between the same concepts.

An example: The concept "vehicle" is connected to car, truck or bus by short links, and to fire engine or ambulance with longer links.

After these enhancements, the model is so powerful that some researchers criticised it for being too flexible. In their opinion, the model is no longer a scientific theory, because it is not falsifiable. Furthermore, we do not know how long these links are inside us: how should they be measured, and could they be measured at all?

### Connectionist Approach

Every concept in a semantic net stands in a dynamic relation to other concepts which can have prototypically similar characteristics or functions. The neural networks in the brain are organised similarly. Furthermore, it is useful to include the features of ”spreading activation” and ”parallel distributed activity” in such a semantic net to account for the complexity of our very sophisticated environment.

#### Basic Principles of Connectionism

The connectionists did this by modelling their networks on the neural networks of the nervous system. Every node of the diagram represents a neuron-like processing unit. These units fall into three subgroups: input units, which are activated by stimulation from the environment; hidden units, which receive signals from input units and pass them on to output units; and output units, whose pattern of activation represents the initial stimulus. Excitatory and inhibitory connections between units, just like synapses in the brain, allow ’input’ to be analysed and evaluated. For computing the outcome of such a system, a certain ’weight’ is attached to each connection, mimicking the strength of a synapse in the human nervous system.

It needs to be emphasized that connectionist networks are not models of how the nervous system actually works; they are a hypothetical approach to representing categories in network patterns. Another name for the connectionist approach is the Parallel Distributed Processing approach, PDP for short, since processing takes place along parallel lines and the output is distributed across many units.

#### Operation of Connectionist Networks

First a stimulus is presented to the input units. The links then pass the signal on to the hidden units, which distribute it to the output units via further links. On the first trial, the output units show a wrong pattern; after many repetitions, the pattern finally becomes correct. This is achieved by back propagation: the error signals are sent back to the hidden units and the signals are reprocessed. During these repeated trials, the ”weights” are gradually calibrated on the basis of the error signals until a correct output pattern is obtained at last. After having achieved a correct pattern for one stimulus, the system is ready to learn a new concept.
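The training loop just described can be sketched as a minimal feedforward network with back propagation. The architecture (two inputs, two hidden units, one output, plus bias units) and the logical-OR training pattern are illustrative choices, not from the source.

```python
# Minimal feedforward network trained by back propagation (sketch).

import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

N_IN, N_HID = 3, 2  # two inputs plus a constant bias input

# Small random starting weights: input->hidden and hidden->output
# (the extra hidden->output weight belongs to a constant bias unit).
w_ih = [[random.uniform(-1, 1) for _ in range(N_IN)] for _ in range(N_HID)]
w_ho = [random.uniform(-1, 1) for _ in range(N_HID + 1)]

def forward(x):
    xs = list(x) + [1.0]                      # append bias input
    hidden = [sigmoid(sum(w_ih[h][i] * xs[i] for i in range(N_IN)))
              for h in range(N_HID)] + [1.0]  # hidden units plus bias unit
    output = sigmoid(sum(w_ho[h] * hidden[h] for h in range(N_HID + 1)))
    return hidden, output

def train(data, epochs=3000, lr=0.5):
    for _ in range(epochs):
        for x, target in data:
            hidden, out = forward(x)
            xs = list(x) + [1.0]
            # Error signal at the output unit ...
            delta_out = (out - target) * out * (1 - out)
            # ... is sent back to adjust the weights (back propagation).
            for h in range(N_HID):
                delta_h = delta_out * w_ho[h] * hidden[h] * (1 - hidden[h])
                for i in range(N_IN):
                    w_ih[h][i] -= lr * delta_h * xs[i]
            for h in range(N_HID + 1):
                w_ho[h] -= lr * delta_out * hidden[h]

# Learn the logical OR pattern over many repeated trials.
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
before = sum((forward(x)[1] - t) ** 2 for x, t in data)
train(data)
after = sum((forward(x)[1] - t) ** 2 for x, t in data)
print(after < before)  # True: the output pattern improves with training
```

On the first pass the output pattern is wrong (random weights); after many repetitions the calibrated weights produce a much smaller error, exactly the gradual calibration described above.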

#### Evaluating Connectionism

The PDP approach is important for knowledge representation studies. It is far from perfect, but it is moving in the right direction. The process of learning enables the system to generalise, because similar concepts create similar patterns: after learning one car, the system can recognise similar patterns as other cars, or may even predict what other cars look like. Furthermore, the system is protected against total wreckage. Damage to single units will not cause the system’s total breakdown, but will delete only those patterns which use the damaged units. This is called graceful degradation and is often found in patients with brain lesions. These two arguments lead to a third: the PDP approach is organised similarly to the human brain, and some effective computer programs developed on this basis have been able to predict the consequences of human brain damage.

On the other hand, the connectionist approach is not without problems. Formerly learned concepts can be overwritten by new concepts. In addition, PDP cannot explain processes more complex than learning concepts. Neither can it explain the phenomenon of rapid learning, which does not require extensive repetition. It is assumed that rapid learning takes place in the hippocampus, while conceptual and gradual learning is located in the cortex.

In conclusion, the PDP approach can explain some features of knowledge representation very well but fails for some complex processes.

### Mental Representation

There are different theories on how living beings, especially humans, encode information into knowledge. We may think of diverse mental representations of the same object. When reading the written word "car", we call this a discrete symbol: it matches all imaginable cars and is therefore not bound to one special vehicle. It is an abstract, or amodal, representation. This is different if instead we see a picture of a car, say a red sports car. Now we speak of a non-discrete symbol, an imaginable picture that appears in front of our inner eye and fits only cars of sufficiently similar appearance.

#### Propositional Approach

The Propositional Approach is one possible way to model mental representations in the human brain. It works with discrete symbols which are strongly connected among each other. The usage of discrete symbols necessitates clear definitions of each symbol, as well as information about the syntactic rules and the context dependencies in which the symbols may be used. The symbol "car" is only comprehensible for people who understand English and have seen a car before, and therefore know what a car is. The Propositional Approach is an explicit way to explain mental representation.

Definitions of propositions differ between fields of research and are still under discussion. One possibility is the following: ”Traditionally in philosophy a distinction is made between sentences and the ideas underlying those sentences, called propositions. A single proposition may be expressed by an almost unlimited number of sentences. Propositions are not atomic, however; they may be broken down into atomic parts called concepts.”

In addition, mental propositions deal with the storage, retrieval and interconnection of information as knowledge in the human brain. There is an ongoing discussion about whether the brain really works with propositions, or whether it processes its information to and from knowledge in another way, or perhaps in more than one way.

#### Imagery Approach

One possible alternative to the Propositional Approach is the Imagery Approach. Since here the representation of knowledge is understood as the storage of images as we see them, it is also called the analogical or perceptual approach. In contrast to the Propositional Approach, it works with non-discrete symbols and is modality specific; it is an implicit approach to mental representation. The picture of the sports car implicitly includes seats of some kind; if it is additionally mentioned that they are off-white, the image changes to a more specific one. How two non-discrete symbols are combined is not as predetermined as it is for discrete symbols: the picture of the off-white seats may exist without the red car around it, just as the red car existed before without the off-white seats. The Imagery and the Propositional Approaches are also discussed in chapter 8.

## Computational Knowledge Representation

Computational knowledge representation is concerned with how knowledge can be represented symbolically and how it can be manipulated in automated ways. Almost all of the theories mentioned above evolved in symbiosis with computer science. On the one hand, computer science uses the human brain as an inspiration for computational systems, on the other hand, artificial models are used to further our understanding of the biological basis of knowledge representation.

Knowledge representation is connected to many other fields related to information processing, e.g. logic, linguistics, reasoning, and the philosophical aspects of these fields. In particular, it is one of the crucial topics of Artificial Intelligence, as it deals with information encoding, storing and usage for computational models of cognition.

There are three main points that need to be addressed with regard to computational knowledge representation: the process of knowledge engineering, the representation formalisms, and their applications.

### Knowledge Engineering

The process of developing computational knowledge-based systems is called knowledge engineering. This process involves assessing the problem, developing a structure for the knowledge base and implementing actual knowledge into the knowledge base. The main task for knowledge engineers is to identify an appropriate conceptual vocabulary.

There are different kinds of knowledge, for instance rules of games, attributes of objects and temporal relations, and each type is expressed best by its own specific vocabulary. Related conceptual vocabularies that are able to describe objects and their relationships are called ontologies. These conceptual vocabularies are highly formal, and each is able to express meaning in a specific field of knowledge. They are used for queries and assertions to knowledge bases and make sharing knowledge possible. In order to represent different kinds of knowledge in one framework, Jerry Hobbs (1985) proposed the principle of ontological promiscuity, whereby several ontologies are mixed together to cover a range of different knowledge types.

A query to a system that represents knowledge about a world made of everyday items, and that can perform actions in this world, may look like this: “Take the cube from the table!”. This query could be processed as follows: first, since we live in a temporal world, the action needs to be represented as a process that can be broken down into successive steps. Secondly, we make general statements about the rules of our system, for example that gravitational forces have a certain effect. Finally, we work out the chain of tasks that have to be done to take the cube from the table: 1) reach out for the cube with the hand, 2) grab it, 3) raise the hand with the cube, etc. Logical reasoning is a perfect tool for this task, because a logical system can also recognise whether the task is possible at all.

There is a problem with the procedure described above, called the frame problem. The system in the example deals with changing states: the actions that take place change the environment, that is, the cube changes its place. Yet so far the system makes no propositions about the table. We need to make sure that after the cube has been picked up from the table, the table does not change its state; it should not disappear or break down just because it is no longer needed. The system only states that the cube is in the hand and omits any information about the table. In order to tackle the frame problem, special axioms or similar devices have to be stated. The frame problem has not been solved completely, but there are different approaches to a resolution. Some add spatial and temporal boundaries of objects to the system's world (Hayes 1985). Others try more direct modelling and perform transformations on state descriptions: before the transformation the cube is on the table; after the transformation the table still exists, but independent of the cube.
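The direct-modelling idea, transforming state descriptions so that untouched facts persist, can be sketched as follows; the fact and action names are invented for illustration.

```python
# Sketch of a state transformation addressing the frame problem
# (illustrative): a state is a set of facts, and an action replaces
# only the facts it explicitly changes; every other fact, the
# "frame", is carried over unchanged.

def pick_up(state, obj, source):
    """Move obj from source into the hand; all other facts persist."""
    removed = {("on", obj, source)}
    added = {("in_hand", obj)}
    return (state - removed) | added

state = {("on", "cube", "table"), ("exists", "table"), ("exists", "cube")}
state = pick_up(state, "cube", "table")

print(("in_hand", "cube") in state)   # True: the cube moved
print(("exists", "table") in state)   # True: the table did not vanish
```

Because the table's existence is simply carried over, the system needs no extra axiom stating that picking up the cube leaves the table intact, which is precisely what the frame problem is about.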

### Knowledge Representation Formalisms

The type of knowledge representation formalism determines how information is stored. Most knowledge representation applications are developed for a specific purpose, for example a digital map for robot navigation or a graph-like account of events for visualising stories.

Each knowledge representation formalism needs a strict syntax, semantics and inference procedure in order to be clear and computable. To express information more clearly, most formalisms provide features such as semantic networks, hierarchies of concepts (e.g. vehicle -> car -> truck) and property inheritance (e.g. red cars have four wheels since cars have four wheels). Some features make it possible to add new information to the system without creating inconsistencies, or to adopt a "closed-world" assumption, under which everything not stated in the knowledge base is taken to be false. For example, if the information that we have gravitation on earth is omitted, a closed-world system would wrongly conclude that there is none.

A problem for knowledge representation formalisms is that expressive power and efficient deductive reasoning pull in opposite directions. If a formalism has great expressive power, it can describe a wide range of information, but it cannot draw inferences efficiently from the given data; an example is second-order logic. If a formalism is restricted, it can draw inferences very well but has a poor range of what it can describe. Logic restricted to Horn clauses, disjunctions of literals with at most one positive literal, has a very good decision procedure for inference but cannot express all generalisations; the logic programming language Prolog is based on it. So the formalism has to be tailored to the application of the KR system by a compromise between expressiveness and deductive complexity: in order to gain deductive power, expressiveness is sacrificed, and vice versa.
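The restricted Horn-clause format can be sketched with a simple forward-chaining procedure; the rules are invented for illustration, and real Prolog systems answer queries by backward chaining, so this sketch only demonstrates why the one-positive-literal restriction makes inference easy.

```python
# Sketch of inference over Horn clauses (illustrative rules): each rule
# has a body of conditions and a single positive head; forward chaining
# keeps adding heads whose bodies are satisfied until nothing new
# can be derived.

rules = [
    (["car"], "vehicle"),                  # car -> vehicle
    (["vehicle"], "moves"),                # vehicle -> moves
    (["vehicle", "moves"], "needs_fuel"),  # vehicle & moves -> needs_fuel
]

def forward_chain(facts, rules):
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for body, head in rules:
            if head not in facts and all(b in facts for b in body):
                facts.add(head)
                changed = True
    return facts

print(forward_chain({"car"}, rules))
# derives "vehicle", "moves" and "needs_fuel" in addition to "car"
```

Because every rule has exactly one positive conclusion, each pass either adds a fact or terminates, which is the kind of well-behaved decision procedure the text attributes to Horn-clause systems.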

With the growth of the field of knowledge bases, many different standards have been developed. They all have different syntactic restrictions. To allow intertranslation, different "interchange" formalisms have been created. One example is the Knowledge Interchange Format which is basically first-order set theory plus LISP (Genesereth et al. 1992).

### Applications of Knowledge Representation

Computational knowledge representation is mostly used not as a model of cognition but to make pools of information accessible, i.e. as an extension of database technology. In these cases general rules and models are not needed: with growing storage media, it has become feasible to use "compute-intensive" representations that simply list all the particular facts rather than stating general rules. The information is stored as sentential knowledge, that is, knowledge saved in the form of sentences comparable to propositions and program code. Knowledge is here seen as a reservoir of useful information rather than as supporting a model of cognitive activity. Listing all the particular facts allows the use of statistical techniques such as Markov simulation, but seems to abandon any claim to psychological plausibility.

### Artificial Intelligence

Artificial Intelligence (AI) is the intelligence of an artificial entity, generally a computer system: intelligence is created and incorporated into a machine so that it can perform tasks the way human beings can. Fields that use artificial intelligence include expert systems, computer games, fuzzy logic, artificial neural networks and robotics. Many things that seem difficult for human intelligence are relatively unproblematic for computers, for example transforming equations, solving integrals, or playing chess or backgammon. On the other hand, things that seem to demand little intelligence from humans are still difficult to realise computationally, for example object and face recognition or playing football.

Although AI has a strong connotation of science fiction, it forms a very important branch of computer science, dealing with behavior, learning and intelligent adaptation in machines. Research in AI involves building machines to automate tasks that require intelligent behavior. Examples include control, planning and scheduling, answering customer questions and making diagnoses, as well as recognizing handwriting, speech and faces. These have become separate disciplines focused on providing solutions to real-life problems. AI systems are now often used in economics, medicine, engineering and the military, and are built into many home computer and video game applications. The field does not only want to understand what an intelligent system is, but also to construct one. There is no fully satisfactory definition of 'intelligence'; two common ones are:

1. intelligence is the ability to acquire knowledge and use it;
2. intelligence is what is measured by an intelligence test.

Broadly speaking, AI is divided into two schools: Conventional AI and Computational Intelligence (CI). Conventional AI mostly involves methods now classified as machine learning, characterized by formalism and statistical analysis. It is also known as symbolic AI, logical AI, pure AI and GOFAI (Good Old-Fashioned Artificial Intelligence). Its methods include:

1. Expert systems: apply reasoning capabilities to reach conclusions. An expert system can process large amounts of known information and draw conclusions from it.
2. Case-based reasoning
3. Bayesian networks
4. Behavior-based AI: a modular method for building AI systems by hand

Computational Intelligence involves iterative development or learning (e.g. parameter tuning as in connectionist systems). Learning is based on empirical data and is associated with non-symbolic AI and soft computing. Its main methods include:

1. Neural networks: systems with very strong pattern recognition capabilities
2. Fuzzy systems: techniques for reasoning under uncertainty, used extensively in modern industrial and consumer product control systems
3. Evolutionary computing: applies biologically inspired concepts such as populations, mutation and "survival of the fittest" to produce better solutions to a problem; its methods are mainly divided into evolutionary algorithms (e.g. genetic algorithms) and swarm intelligence (e.g. ant algorithms)

Hybrid intelligent systems attempt to combine these two groups. Expert inference rules can be generated through neural networks, or production rules obtained from statistical learning as in ACT-R. A promising new approach, intelligence amplification, tries to achieve artificial intelligence in an evolutionary development process as a side effect of amplifying human intelligence through technology.
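The expert-system idea from point 1 above can be illustrated with a minimal forward-chaining sketch. The rules and facts are invented for illustration and vastly simpler than a real expert system's knowledge base:

```python
# Forward chaining: repeatedly apply if-then rules to known facts
# until no new conclusions can be derived.
RULES = [({"has_fever", "has_cough"}, "flu_suspected"),
         ({"flu_suspected", "short_of_breath"}, "see_doctor")]

def forward_chain(facts):
    """Return the set of all facts derivable from the initial facts."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

forward_chain({"has_fever", "has_cough", "short_of_breath"})
```

Note how the second rule only fires after the first has added its conclusion; this chaining of intermediate conclusions is what lets an expert system "process a large amount of information" into a final judgment.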

#### Justification in decision making

Decision making often includes the need to assign a reason for the decision and thereby justify it. This factor is illustrated by an experiment by A. Tversky and E. Shafir (1992): A very attractive vacation package was offered to a group of students who had just passed an exam and to another group of students who had just failed the exam and had the chance to rewrite it after the coming holidays. All students had the options to buy the ticket straight away, to stay at home, or to pay $5 to keep open the option of buying it later. At this point, there was no difference between the two groups: the number of students who passed the exam and decided to book the flight (with the justification of deserving a reward) was the same as the number of students who failed and booked the flight (justified as consolation and as time to recover). A third group of students, who were told they would receive their results two days later, was confronted with the same problem. The majority decided to pay $5 and keep the option open until they got their results. The conclusion is that even though the actual exam result does not influence the decision, it is required in order to provide a rationale.

### Executive functions

Figure 9, Left frontal lobe

Subsequently, the question arises how this cognitive ability of making decisions is realized in the human brain. As we already know that there are a couple of different tasks involved in the whole process, there has to be something that coordinates and controls those brain activities – namely the executive functions. They are the brain's conductor, instructing other brain regions to perform, or be silenced, and generally coordinating their synchronized activity (Goldberg, 2001). Thus, they are responsible for optimizing the performance of all “multi-threaded” cognitive tasks.

Locating those executive functions is rather difficult, as they cannot be assigned to a single brain region. Traditionally, they have been equated with the frontal lobes, or rather the prefrontal regions of the frontal lobes; but it is still an open question whether all of their aspects can be associated with these regions.

Nevertheless, we will concentrate on the prefrontal regions of the frontal lobes, to get an impression of the important role of the executive functions within cognition. Moreover, it is possible to subdivide these regions into functional parts. But it is to be noted that not all researchers regard the prefrontal cortex as containing functionally different regions.

#### Executive functions in practice

According to Norman and Shallice, there are five types of situations in which executive functions may be needed in order to optimize performance, as the automatic activation of behaviour would be insufficient. These are situations involving...

1. ...planning or decision making.

2. ...error correction or trouble shooting.

3. ...responses containing novel sequences of actions.

4. ...technical difficulties or dangerous circumstances.

5. ...the control of action or the overcoming of strong habitual responses.

The following sections will take a closer look at each of these points, mainly referring to brain-damaged individuals.

Surprisingly, intelligence in general is not affected in cases of frontal lobe injuries (Warrington, James & Maciejewski, 1986). However, dividing intelligence into crystallised intelligence (based on previously acquired knowledge) and fluid intelligence (meant to rely on the current ability of solving problems), emphasizes the executive power of the frontal lobes, as patients with lesions in these regions performed significantly worse in tests of fluid intelligence (Duncan, Burgess & Emslie, 1995).

1. Planning or decision making

Impairments in abstract and conceptual thinking

To solve many tasks it is important to be able to use given information. In many cases, this means that material has to be processed in an abstract rather than a concrete manner. Patients with executive dysfunction have difficulties with abstraction, as demonstrated by a card sorting experiment (Delis et al., 1992):

The cards show names of animals and black or white triangles placed above or below the word. Again, the cards can be sorted with attention to different attributes of the animals (living on land or in water, domestic or dangerous, large or small) or of the triangles (black or white, above or below the word). People with frontal lobe damage fail to solve the task because they cannot even conceptualize the properties of the animals or the triangles, and thus are not able to deduce a sorting rule for the cards (in contrast, some individuals only perseverate: they find a sorting criterion but are unable to switch to a new one).

These problems might be due to a general difficulty in strategy formation.

Goal directed behavior

Let us again take Knut into account to get an insight into the field of goal directed behaviour. In principle, this is nothing but problem solving, since it is about organizing behavior towards a goal. Thus, when Knut is packing his bag for his holiday, he obviously has a goal in mind (in other words, he wants to solve a problem): getting ready before his plane departs. Several steps are necessary during the process of reaching a goal:

Goal must be kept in mind

Knut should never forget that he has to pack his bag in time.

The task must be divided into subtasks and sequenced

Knut packs his bag in a structured way: he starts by packing the crucial things and then goes on with the rest.

Completed portions must be kept in mind

If Knut has already packed enough underwear into his bag, he does not need to search for more.

Flexibility and adaptability

Imagine that Knut wants to pack his favourite T-shirt, but realizes that it is dirty. In this case, Knut has to adapt to the situation and pick another T-shirt that was not in his original plan.

Evaluation of actions

Along the way to his ultimate goal Knut constantly has to evaluate his performance in terms of 'How am I doing, considering that my goal is to pack my bag?'.

Executive dysfunction and goal directed behavior

The breakdown of executive functions impairs goal directed behavior to a large extent. In which way cannot be stated in general; it depends on the specific brain regions that are damaged. It is thus quite possible that an individual with a particular lesion has problems with two or three of the five points described above and performs within the average range when the other abilities are tested. However, if only one link is missing from the chain, the whole plan may become very hard or even impossible to master. Furthermore, the particular hemisphere affected plays a role as well.

Another interesting result is that lesions in the frontal lobes of the left and right hemisphere impair different abilities. While a lesion in the right hemisphere causes trouble in making recency judgements, a lesion in the left hemisphere impairs performance only when the presented material is verbal, or in a variation of the experiment that requires self-ordered sequencing. From this we know that the ability to sequence behaviour is not supported uniformly across the frontal lobes; the left hemisphere in particular is involved when it comes to motor action.

Problems in sequencing

In an experiment by Milner (1982), people were shown a sequence of cards with pictures. The experiment included two different tasks: recognition trials and recency trials. In the former, the participants were shown two different pictures, one of which had appeared in the sequence before, and they had to decide which one it was. In the latter, they were shown two different pictures, both of which had appeared before, and they had to name the picture that had been shown more recently. The results of this experiment showed that people with lesions in temporal regions have more trouble with the recognition trials, while patients with frontal lesions have difficulties with the recency trials, since anterior regions are important for sequencing. This is because the recognition trials demand a properly functioning recognition memory, and the recency trials a properly functioning memory for item order. These two are dissociable and seem to be processed in different areas of the brain.

The frontal lobe is not only important for sequencing but is also thought to play a major role in working memory. This idea is supported by the fact that lesions in the lateral regions of the frontal lobe are much more likely to impair the ability of 'keeping things in mind' than damage to other areas of the frontal cortex does.

But this is not all there is to sequencing. To reach a goal in the best possible way, it is important that a person is able to figure out which sequence of actions, i.e. which strategy, best suits the purpose, in addition to being able to develop a correct sequence at all. This is demonstrated by the 'Tower of London' task (Shallice, 1982), which is similar to the famous 'Tower of Hanoi' task, with the difference that three balls must be placed on three poles of different lengths, such that one pole can hold three balls, the second two, and the third only one, and a changeable goal position must be attained from a fixed initial position in as few moves as possible. Patients with damage to the left frontal lobe in particular proved to work inefficiently and ineffectively on this task: they needed many moves and engaged in actions that did not lead toward the goal.
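Since the Tower of London is a well-defined search problem, the minimal-move solution that patients struggle to find can be computed mechanically. The sketch below encodes the three pole capacities and searches breadth-first; the tuple-of-tuples state encoding is our own illustrative choice, not part of the original task description:

```python
from collections import deque

CAPACITY = (3, 2, 1)  # pole capacities in the Tower of London task

def moves(state):
    """Yield all states reachable by moving one top ball to another pole."""
    for src in range(3):
        if not state[src]:
            continue
        for dst in range(3):
            if dst != src and len(state[dst]) < CAPACITY[dst]:
                new = [list(p) for p in state]
                new[dst].append(new[src].pop())
                yield tuple(tuple(p) for p in new)

def min_moves(start, goal):
    """Breadth-first search returns the minimum number of moves."""
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, d = frontier.popleft()
        if state == goal:
            return d
        for nxt in moves(state):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append((nxt, d + 1))
    return None

start = (("red", "green", "blue"), (), ())   # all balls on the long pole
goal = ((), ("green", "red"), ("blue",))
min_moves(start, goal)
```

Comparing a patient's actual move count against this optimum is, in effect, what the experimenter does when scoring efficiency on the task.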

Problems with the interpretation of available information

Quite often, if we want to reach a goal, we get hints on how to do it best. This means we have to be able to interpret the available information in terms of what the appropriate strategy would be. For many patients with executive dysfunction, this is not an easy thing to do either. They have trouble using such information and engage in inefficient actions, so it takes them much longer to solve a task than healthy people, who use the extra information to develop an effective strategy.

Problems with self-criticism and -monitoring

The last problem for people with frontal lobe damage presented here corresponds to the last point in the above list of abilities important for proper goal directed behavior: the ability to evaluate one's own actions, which is missing in most patients. These people are therefore very likely to 'wander off task' and engage in behavior that does not help them attain their goal. In addition, they are not able to determine whether their task has been completed at all. Possible reasons are a lack of motivation or a lack of concern about one's performance (frontal lobe damage is usually accompanied by changes in emotional processing), but these are probably not the only explanations for these problems.

Another important brain region in this context, the medial portion of the frontal lobe, is responsible for detecting behavioral errors made while working towards a goal. This has been shown in ERP experiments in which an error-related negativity appeared 100 ms after an error had been made. If this area is damaged, the mechanism no longer works properly and the patient loses the ability to detect errors and thus to monitor his own behavior.

However, it must be added that although executive dysfunction causes an enormous number of problems in working towards a goal, most patients, when assigned a task, are indeed eager to solve it but are simply unable to do so.

2. Error correction and trouble shooting

Figure 10, Example for the WCST: Cards sorted according to shape (a), number (b) or color (c) of the objects

The most famous experiment to investigate error correction and trouble shooting is the Wisconsin Card Sorting Test (WCST). A participant is presented with cards that show certain objects. These cards are defined by the shape, color and number of the objects on them. The cards now have to be sorted according to a rule based on one of these three criteria. The participant does not know which rule is the right one but has to work it out from the positive or negative feedback of the experimenter. Then at some point, after the participant has found the correct rule, the experimenter changes it, and the previously correct sorting now leads to negative feedback. The participant has to realize the change and adapt by sorting the cards according to the new rule.

Patients with executive dysfunction have problems identifying the rule in the first place. It takes them noticeably longer because they have trouble using the given feedback to draw a conclusion. But once they have started sorting correctly and the rule changes, they keep sorting the cards according to the old rule, even though many of them notice the negative feedback. They are simply not able to switch to another sorting principle, or at least they need many tries to learn the new one: they perseverate.
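The rule switches and the perseverative errors they provoke can be illustrated with a toy simulation. This abstracts the actual card-matching away: a "sorter" simply guesses which attribute is the current rule, and a perseverating sorter ignores negative feedback. The trial counts and strategies are illustrative assumptions, not the clinical procedure:

```python
import random

ATTRIBUTES = ("color", "shape", "number")

class Perseverator:
    """Sticks to its first hypothesis regardless of feedback,
    like the frontal patients described above."""
    def __init__(self):
        self.hypothesis = "color"
    def sort(self):
        return self.hypothesis
    def feedback(self, correct):
        pass  # negative feedback is noticed but never acted upon

class FlexibleSorter:
    """Abandons the current hypothesis after negative feedback."""
    def __init__(self):
        self.hypothesis = "color"
    def sort(self):
        return self.hypothesis
    def feedback(self, correct):
        if not correct:
            self.hypothesis = random.choice(
                [a for a in ATTRIBUTES if a != self.hypothesis])

def run_wcst(sorter, trials=60, switch_every=10):
    """Return the number of sorting errors; the hidden rule silently
    changes every `switch_every` trials, as in the WCST."""
    rule, errors = ATTRIBUTES[0], 0
    for t in range(trials):
        if t and t % switch_every == 0:   # experimenter changes the rule
            rule = random.choice([a for a in ATTRIBUTES if a != rule])
        correct = sorter.sort() == rule
        errors += not correct
        sorter.feedback(correct)
    return errors
```

Running both sorters shows the characteristic pattern: the perseverator is error-free until the first switch and then accumulates a full block of errors, while the flexible sorter recovers within a few trials of each switch.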

Problems in shifting and modifying strategies

Intact neuronal tissue in the frontal lobe is also crucial for another executive function connected with goal directed behavior that we described above: Flexibility and adaptability. This means that persons with frontal lobe damage will have difficulties in shifting their way of thinking – meaning creating a new plan after recognizing that the original one cannot be carried out for some reason. Thus, they are not able to modify their strategy according to this new problem. Even when it is clear that one hypothesis cannot be the right one to solve a task, patients will stick to it nevertheless and are unable to abandon it (called 'tunnel vision').

Moreover, such persons do not use as many appropriate hypotheses for creating a strategy as people with damage to other brain regions do. In what particular way this can be observed in patients can again not be stated in general but depends on the nature of the shift that has to be made.

These problems of 'redirecting' one's strategies stand in contrast to the actual 'act of switching' between tasks, which is yet another problem for patients with frontal lobe damage. Since the control system that governs task switching as such is independent of the parts that actually perform the tasks, task switching is particularly impaired in patients with lesions to the dorsolateral prefrontal cortex, while at the same time they have no trouble performing the single tasks alone. This, of course, causes many problems in goal directed behavior because, as said before, most tasks consist of smaller subtasks that have to be completed.

3. Responses containing novel sequences of actions

Many clinical tests have been done requiring patients to develop strategies for dealing with novel situations. In the Cognitive Estimation Task (Shallice & Evans, 1978) patients are presented with questions whose answers are unlikely to be known. People with damage to the prefrontal cortex have major difficulties producing estimates for questions like: "How many camels are in Holland?".

In the FAS Test (Miller, 1984) subjects have to generate sequences of words (not proper names) beginning with a certain letter ("F", "A" or "S") within a one-minute period. This test involves developing new strategies, selecting between alternatives and avoiding repetition of previously given answers. Patients with left lateral prefrontal lesions are often impaired (Stuss et al., 1998).

4. Technical difficulties or dangerous circumstances

A single mistake in a dangerous situation may easily lead to serious injury, while a mistake in a technically difficult situation (e.g. building a house of cards) obviously leads to failure. Thus, in such situations, automatic activation of responses would clearly be insufficient, and executive functions seem to be the only solution to such problems.

Wilkins, Shallice and McCarthy (1987) were able to prove a connection between dangerous or difficult situations and the prefrontal cortex, as patients with lesions to this area were impaired during experiments concerning dangerous or difficult situations. The ventromedial and orbitofrontal cortex may be particularly important for these aspects of executive functions.

5. Control of action or the overcoming of strong habitual responses

Deficits in initiation, cessation and control of action

We start by describing the effects of losing the ability to start something, to initiate an action. A person with executive dysfunction is likely to have trouble beginning to work on a task without strong help from the outside; people with left frontal lobe damage often show impaired spontaneous speech, while people with right frontal lobe damage rather show poor nonverbal fluency. One reason is, of course, that such a person will have no intention, desire or concern of his or her own to solve the task, since this is yet another characteristic of executive dysfunction. But it is also due to a psychological effect often connected with the loss of proper executive functioning: psychological inertia. As in physics, inertia here means that an action is very hard to initiate, but once started, it is just as hard to shift or stop. This engagement in repetitive behavior is called perseveration (cp. WCST).

Another problem caused by executive dysfunction can be observed in patients suffering from the so called environmental dependency syndrome. Their actions are impelled or obligated by their physical or social environment. This manifests itself in many different ways and depends to a large extent on the individual’s personal history. Examples are patients who begin to type when they see a computer key board, who start washing the dishes upon seeing a dirty kitchen or who hang up pictures on the walls when finding hammer, nails and pictures on the floor. This makes these people appear as if they were acting impulsively or as if they have lost their ‘free will’. It shows a lack of control for their actions. This is due to the fact that an impairment in their executive functions causes a disconnection between thought and action. These patients know that their actions are inappropriate but like in the WCST, they cannot control what they are doing. Even if they are told by which attribute to sort the cards, they will still keep sorting them sticking to the old rule due to major difficulties in the translation of these directions into action.

What is needed to avoid problems like these are the abilities to start, stop or change an action but very likely also the ability to use information to direct behavior.

#### Deficits in cognitive estimation

In addition to the difficulties in producing estimates for questions whose answers are unlikely to be known, patients with lesions to the frontal lobes have problems with cognitive estimation in general.

Cognitive estimation is the ability to use known information to make reasonable judgments or deductions about the world. The inability to perform cognitive estimation is the third type of deficit often observed in individuals with executive dysfunction. People with executive dysfunction typically have a relatively unaffected knowledge base; their problem is not retaining information but making inferences based on it. For example, patients with frontal lobe damage have difficulty estimating the length of the spine of an average woman. Making such a realistic estimate requires inference from other knowledge: knowing that the height of the average woman is about 5 ft 6 in (168 cm), considering that the spine runs about one third to one half the length of the body, and so on. Patients with such a dysfunction have difficulties not only in their estimates of cognitive information but also in estimates of their own capacities (such as their ability to direct activity in a goal-oriented manner or to control their emotions). Prigatano, Altman and O'Brien (1990) reported that when patients with anterior lesions associated with diffuse axonal injury to other brain areas are asked how capable they are of performing tasks such as scheduling their daily activities or preventing their emotions from affecting daily activities, they grossly overestimate their abilities. In several experiments, Smith and Milner (1988) found that individuals with frontal lobe damage have no difficulty determining whether an item appeared in a specific inspection series, but find it difficult to estimate how frequently an item occurred. This may reflect difficulties not only in cognitive estimation but also in memory tasks that place a premium on remembering temporal information.
Thus both difficulties (in cognitive estimation and in temporal sequencing) may contribute to a reduced ability to estimate frequency of occurrence.

Despite these impairments, estimation abilities in some domains are preserved in patients with frontal lobe damage. For example, they are as good as patients with temporal lobe damage or neurologically intact people at judging how many clues they will need to solve a puzzle.

#### Theories of frontal lobe function in executive control

To explain why patients with frontal lobe damage have difficulties in performing executive functions, four major approaches have been developed. Each of them improves our understanding of the role of frontal regions in executive functions, but none of these theories covers all of the observed deficits.

Role of working memory

The most anatomically specific approach assumes the dorsolateral prefrontal area of the frontal lobe to be critical for working memory. Working memory, which has to be clearly distinguished from long-term memory, keeps information on-line for use in performing a task. The approach was not designed to account for the broad array of dysfunctions; it focuses on the three following deficits:

1. Sequencing information and directing behavior toward a goal
2. Understanding of temporal relations between items and events
3. Some aspects of environmental dependency and perseveration

Research on monkeys has been helpful to develop this approach (the delayed-response paradigm, Goldman-Rakic, 1987, serves as a classical example).

Role of Controlled Versus Automatic Processes

There are two theories based on the underlying assumption that the frontal lobes are especially important for controlling behavior in non-experienced situations and for overriding stimulus-response associations, but contribute little to automatic and effortless behavior (Banich, 1997).

Stuss and Benson (1986) consider control over behavior to occur in a hierarchical manner. They distinguish between three different levels, of which each is associated with a particular brain region. In the first level sensory information is processed automatically by posterior regions, in the next level (associated with the executive functions of the frontal lobe) conscious control is needed to direct behavior toward a goal and at the highest level controlled self-reflection takes place in the prefrontal cortex.

This model is appropriate for explaining deficits in goal-oriented behavior, in dealing with novelty, the lack of cognitive flexibility and the environmental dependency syndrome. Furthermore, it can explain the inability to control action consciously and to criticise oneself. The second model, developed by Shallice (1982), proposes a system consisting of two parts that influence the choice of behavior. The first part, a cognitive system called contention scheduling, is in charge of more automatic processing: various links and processing schemes cause a single stimulus to result in an automatic string of actions, and once an action is initiated, it remains active until inhibited. The second cognitive system is the supervisory attentional system, which directs attention and guides action through decision processes and is only active "when no processing schemes are available, when the task is technically difficult, when problem solving is required and when certain response tendencies must be overcome" (Banich, 1997).

This theory accords with the observation of few deficits in routine situations but significant problems in dealing with novel tasks (e.g. the Tower of London task, Shallice, 1982), since no schemes in contention scheduling exist for dealing with them. Impulsive action is another characteristic of patients with frontal lobe damage which can be explained by this theory: even if asked not to do certain things, such patients stick to their routines and cannot control their automatic behavior.
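The two-part model above can be sketched as a toy program. The stimulus-to-action schemas and the override condition are illustrative assumptions, not Norman and Shallice's actual formalization:

```python
# Toy sketch of contention scheduling plus a supervisory attentional
# system (SAS). The stimulus-action schemas below are illustrative.
ROUTINE_SCHEMAS = {
    "computer keyboard": "start typing",
    "dirty dishes": "wash dishes",
}

def act(stimulus, goal=None, sas_intact=True):
    """Contention scheduling proposes the routine action; an intact SAS
    inhibits it whenever it conflicts with the current goal."""
    routine = ROUTINE_SCHEMAS.get(stimulus)
    if sas_intact and goal is not None and routine != goal:
        return goal       # controlled, goal-directed behavior
    return routine        # the automatic response wins

# With an intact SAS, the goal overrides the habitual response:
act("computer keyboard", goal="leave the room")
# With an impaired SAS, the stimulus drives the action, as in the
# environmental dependency syndrome:
act("computer keyboard", goal="leave the room", sas_intact=False)
```

Disabling the `sas_intact` flag reproduces, in miniature, the pattern of intact routines but environmentally driven behavior seen after frontal damage.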

Use of Scripts

The approach based on scripts, which are sets of events, actions and ideas that are linked to form a unit of knowledge, was developed by Schank (1982) amongst others.

A script contains information about the setting in which an event occurs, the set of events needed to achieve the goal, and the end event terminating the action. Such managerial knowledge units (MKUs) are supposed to be stored in the prefrontal cortex. They are organized in a hierarchical manner, being abstract at the top and getting more specific at the bottom.

Damage to the scripts leads to an inability to behave in a goal-directed manner, to greater ease with usual than with novel situations (due to the difficulty of retrieving an MKU for a novel event), and to deficits in the initiation and cessation of action (because MKUs specify the beginning and ending of an action).

Role of a goal list

The perspective of artificial intelligence and machine learning introduced an approach which assumes that each person has a goal list containing the task's requirements or goals. This list is fundamental to guiding behavior, and since frontal lobe damage disrupts the ability to form a goal list, the theory helps to explain difficulties in abstract thinking, perceptual analysis, verbal output and staying on task. It can also account for the strong environmental influence on patients with frontal lobe damage, due to the lack of internal goals and the difficulty of organizing actions toward a goal.
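The goal-list idea can be sketched minimally using Knut's packing example from earlier: behavior is guided by expanding goals into subgoals until primitive actions remain. The goal names and the decomposition are illustrative assumptions:

```python
# A minimal sketch of the "goal list" idea: an explicit goal is
# recursively expanded into subgoals, and only primitive actions
# are executed. Losing this structure leaves behavior to be driven
# by the environment instead of by internal goals.
def achieve(goal, subgoals, primitive_actions, log):
    """Depth-first expansion of a goal into executed primitive actions."""
    if goal in primitive_actions:
        log.append(goal)          # execute a primitive action
        return
    for sub in subgoals.get(goal, []):
        achieve(sub, subgoals, primitive_actions, log)

subgoals = {"pack bag": ["pack crucial things", "pack the rest"],
            "pack crucial things": ["pack passport", "pack tickets"]}
primitives = {"pack passport", "pack tickets", "pack the rest"}
log = []
achieve("pack bag", subgoals, primitives, log)
# log now lists the primitive actions in the order they were performed
```

The ordered `log` corresponds to staying on task; deleting the `subgoals` table (the "goal list") leaves the agent with no way to act toward "pack bag" at all.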

| Brain region | Possible function (left hemisphere) | Possible function (right hemisphere) | Brodmann's areas involved |
|---|---|---|---|
| ventrolateral prefrontal cortex (VLPFC) | Retrieval and maintenance of semantic and/or linguistic information | Retrieval and maintenance of visuospatial information | 44, 45, 47 (44 & 45 = Broca's area) |
| dorsolateral prefrontal cortex (DLPFC) | Selecting a range of responses and suppressing inappropriate ones; manipulating the contents of working memory | Monitoring and checking of information held in mind, particularly in conditions of uncertainty; vigilance and sustained attention | 9, 46 |
| anterior prefrontal cortex; frontal pole; rostral prefrontal cortex | Multitasking; maintaining future intentions and goals while currently performing other tasks or subgoals | same | 10 |
| anterior cingulate cortex (dorsal) | Monitoring in situations of response conflict and error detection | same | 24 (dorsal) & 32 (dorsal) |

## Summary

It is important to keep in mind that reasoning and decision making are closely connected to each other: decision making is in many cases preceded by a process of reasoning. People's everyday life is decisively shaped by the synchronized operation of these two cognitive faculties, and this synchronization, in turn, is realized by the executive functions, which seem to be mainly located in the frontal lobes of the brain.

## References

• Krawczyk, Daniel (2018). Reasoning: The Neuroscience of How We Think. Elsevier.

• Goldstein, E. Bruce (2005). Cognitive Psychology: Connecting Mind, Research, and Everyday Experience. Thomson Wadsworth.

• Banich, Marie T. (1997). Neuropsychology: The Neural Bases of Mental Function. Houghton Mifflin.

• Wilson, Robert A. & Keil, Frank C. (1999). The MIT Encyclopedia of the Cognitive Sciences. Massachusetts: Bradford Books.

• Ward, Jamie (2006). The Student's Guide to Cognitive Neuroscience. Psychology Press.

• Levitin, D. J. (2002). Foundations of Cognitive Psychology.

• Schmalhofer, Franz. Slides from the course Cognitive Psychology and Neuropsychology, Summer Term 2006/2007, University of Osnabrück.


# Present and Future of Research

"It's hard to make predictions - especially about the future." Robert Storm Petersen

## Introduction / Until now

Developing from the information processing approach, present cognitive psychology differs from classical psychological approaches in the methods used as well as in its interdisciplinary connections to other sciences. Apart from rejecting introspection as a valid method to analyse mental phenomena, cognitive psychology introduces further, mainly computer-based, techniques which have not previously been within the range of classical psychology.

By using brain-imaging techniques like fMRI, cognitive psychology is able to analyse the relation between the physiology of the brain and mental processes. In the future, cognitive psychology is likely to focus even more on computer-based methods and will thus profit from improvements in information technology. For example, contemporary fMRI scans suffer from many possible sources of error; resolving these should improve the power and precision of the technique. In addition, computational approaches can be combined with classical behavioural approaches, in which a participant's mental states are inferred from exhibited behaviour.

Cognitive psychology, however, does not only rely on methods developed by other branches of science. It also collaborates with closely related fields, including artificial intelligence, neuroscience, linguistics and the philosophy of mind. The advantage of this multidisciplinary approach is clear: different perspectives on the topic make it possible to test hypotheses using different techniques and to eventually develop new conceptual frameworks for thinking about the mind. Often, modern studies of cognitive psychology criticise classical information processing approaches, which opens the door for other approaches to acquire additional importance. For example, the classical approach has been modified to a parallel information processing approach, which is thought to be closer to the actual functioning of the brain.

## Today's approaches

### The momentary usage of brain imaging

How are the known brain imaging methods used? What kind of information can be derived using these methods?

#### fMRI

fMRI is a non-invasive imaging method that pictures active structures of the brain at a high spatial resolution. The participant lies in the scanner tube while his or her brain is imaged; brain structures that are active while the participant performs a task can then be recognised in the recordings.

How does it work?
If parts of the brain are active, their metabolism is stimulated as well. Blood, which plays an important role in metabolic transport, flows to the active nerve cells. The haemoglobin in the red blood cells carries oxygen (oxyhaemoglobin) on its way to the active region, which needs oxygen in order to work. Upon consumption the haemoglobin "delivers" its oxygen (deoxyhaemoglobin). This leads to local changes in the relative concentrations of oxyhaemoglobin and deoxyhaemoglobin and to changes in local blood volume and blood flow. Oxygenated haemoglobin is diamagnetic (the material tends to leave the magnetic field), whereas deoxygenated haemoglobin is paramagnetic (the material tends to migrate into the magnetic field). The magnetic resonance signal of blood is therefore slightly different depending on its level of oxygenation.

By detecting the magnetic properties mentioned above, the fMRI scanner can determine alterations in blood flow and blood volume and construct a picture showing the brain and its activated parts. From what the participant is doing during the task, the researcher can derive which brain regions are involved. This is an indirect measure, since it is the metabolism that is measured and not the neuronal activity itself. Furthermore, this imaging method has good spatial resolution (where the activity occurs) but low temporal resolution (when the activity occurs), as the measured changes lag behind the neuronal activity.
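The indirect, delayed nature of the measurement can be illustrated with a small simulation. The following Python sketch (all parameter values are illustrative assumptions, not taken from this text) convolves a brief neural event with a double-gamma haemodynamic response function of the kind commonly used in fMRI analysis; the modelled signal peaks several seconds after the event, which is why the temporal resolution is low:

```python
import numpy as np
from math import factorial

# Illustrative double-gamma haemodynamic response function (HRF):
# a positive response peaking around 5 s plus a delayed undershoot.
def hrf(t):
    peak = t**5 * np.exp(-t) / factorial(5)
    undershoot = t**15 * np.exp(-t) / factorial(15)
    return peak - 0.35 * undershoot

dt = 0.5                                  # sampling step in seconds
t = np.arange(0, 30, dt)                  # HRF sampled over 30 s
stimulus = np.zeros(120)                  # a 60 s recording
stimulus[20:22] = 1                       # brief neural event at t = 10 s

# The modelled BOLD signal is the neural event convolved with the HRF:
bold = np.convolve(stimulus, hrf(t))[:len(stimulus)]

print("BOLD peak at t =", np.argmax(bold) * dt, "s")  # several seconds late
```

The peak of the modelled signal arrives roughly five seconds after the neural event itself, which is why fMRI, despite its excellent spatial resolution, cannot resolve the precise timing of neuronal activity.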

#### EEG

The electroencephalogram (EEG) is another non-invasive brain imaging method. Electrical signals from the human brain are recorded while the participant performs a task. What is measured is the summed electrical activity of large populations of neuronal cells.

The electrical activity is measured by attaching electrodes to the scalp. In most cases the electrodes are mounted on a cap that the participant wears. Installing the cap correctly on the participant's head is time-consuming, but it is essential for the outcome that every electrode is in the right place. For the signals to sum properly, the electrodes have to be placed in a standardised geometric configuration. This technique is applied to measure event-related potentials (ERPs): potential changes that are time-locked to an emotional, sensory, cognitive or motor event. In an experiment the event of interest has to be repeated again and again; the typical ERP can then be extracted by averaging. The method is not only time-consuming; many disrupting factors also complicate the measurement. It has a very high temporal resolution but a very low spatial resolution: it is hardly possible to measure activity in deeper brain regions or to detect the source of the activity from the recordings alone.
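The averaging idea behind ERP extraction can be sketched in a few lines of Python (the "P300-like" wave, the noise level and the trial count below are made-up illustration values):

```python
import numpy as np

# Each trial is the event-locked potential buried in much larger
# ongoing EEG background activity.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 0.6, 300)                    # one 600 ms epoch

# Hypothetical ERP component: a positive wave peaking around 300 ms.
true_erp = 5.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))

n_trials = 400
trials = true_erp + rng.normal(0.0, 20.0, size=(n_trials, len(t)))

average = trials.mean(axis=0)   # the noise cancels out, the ERP remains

single_trial_error = np.abs(trials[0] - true_erp).mean()
average_error = np.abs(average - true_erp).mean()
print(average_error < single_trial_error / 10)    # True
```

Because uncorrelated noise shrinks roughly with one over the square root of the number of trials, the event has to be repeated many times, which is one reason the method is so time-consuming.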

### Interdisciplinary Approaches

#### Cognitive Science

Cognitive science is a multidisciplinary science. It comprises areas of cognitive psychology, linguistics, neuroscience, artificial intelligence, cognitive anthropology, computer science and philosophy. Cognitive science concentrates on studying the intelligent behaviour of humans, which includes perception, learning, memory, thought and language. Research in the cognitive sciences is based on naturalistic research methods such as cognitive neuropsychology, introspection, psychological experimentation, mathematical modelling and philosophical argumentation.

In the beginning of the cognitive sciences the most common method was introspection, in which the test subjects evaluated their own cognitive processes. In these experiments the researchers used experienced subjects, because the subjects had to analyse and report their own thinking. Problems occur in interpretation when a subject gives different reports of the same action. Obviously a clear separation is needed between the matters that can be studied by introspection and those that are not adequate for this method.

Computational modelling in cognitive science treats the mind as a machine: this approach seeks to express theoretical ideas through computational models that generate behaviour similar to that of humans. Mathematical modelling, in turn, is often based on flow charts. The quality of the model is crucial for ensuring that its inputs and outputs correspond to the phenomenon being modelled.

Nowadays researchers in the cognitive sciences often use theoretical and computational models, though this does not exclude their primary method of experimentation with human participants. It is also important in the cognitive sciences to bring theory and experiment together: because the field comprises so many disciplines, the most appropriate methods from all of them have to be combined, and psychological experiments should be interpreted through a theory that expresses mental representations and procedures. The most productive and revealing way to perform research in the cognitive sciences is to combine different approaches and methods; this ensures an overall picture of the research area and comprises the viewpoints of all the different fields (cf. Thagard, Cognitive Science).

Nevertheless, cognitive science has not yet managed to bring the different areas together. Nowadays it is criticised for not having established a science of its own; rather few scientists really describe themselves as cognitive scientists. Furthermore, the basic metaphor of the brain functioning like a computer is challenged, as are the distinctions between its models and nature (cf. Eysenck & Keane, Cognitive Psychology, pp. 519-520). This of course leaves a lot of work for the future. Cognitive science has to work on better models that explain natural processes and that are reliably able to make predictions, and these models have to combine multiple mental phenomena. In addition, a general "methodology for relating a computational model's behaviour to human behaviour" has to be worked out, whereby the strength of such models can be increased. Apart from that, cognitive science needs to establish an identity, with prominent researchers who avow themselves to the field. And finally its biggest goal, the creation of a general unifying theory of human cognition (see the section on unifying theories below), has to be reached (cf. ibid, p. 520).

##### Experimental Cognitive Psychology

Psychological experimentation studies mental functions indirectly: the researcher observes visible actions and draws conclusions about the underlying processes from these observations. Such studies are performed to find causal relations and the factors influencing behaviour. Variables are changed one at a time and the effect of each change is observed. The benefit of experimental research is that the manipulated factors can be altered in nearly any way the researcher wants, which makes it possible to establish causal relations.

As the classical approach within the field, experimental studies have been the basis for the development of numerous modern approaches within contemporary cognitive psychology. Its empirical methods have been developed and verified over time, and the results gained have been a foundation for many advances in psychology.

Given the established character of experimental cognitive psychology, one might think that methodological changes are rather negligible. Recent years, however, have brought a discussion of whether the results of experimental cognitive psychology remain valid in the "real world" at all. A major objection is that the artificial environment of an experiment may cause certain facts and connections to be unintentionally ignored, because for reasons of clarity numerous factors are suppressed (cf. Eysenck & Keane, Cognitive Psychology, pp. 514-515). A possible example is research on attention: since the attention of the participant is mainly governed by the experimenter's instructions, its focus is essentially predetermined, and therefore "relatively little is known of the factors that normally influence the focus of attention" (ibid, p. 514). Furthermore, it is problematic that mental phenomena are often examined in isolation. In trying to make the experimental setup as concise as possible (in order to get clearly interpretable results), one decouples the aspect at issue from adjacent and interacting mental processes. This leads to results that are valid only in the idealised experimental setting but not in "real life", where multiple mental phenomena interact with each other and numerous outer stimuli influence mental processes. The validity gained by such studies can only be characterised as internal validity (the results are valid under the special circumstances created by the experimenter) but not as external validity (the results stay valid under changed and more realistic circumstances) (cf. ibid, p. 514). These objections have led to experiments designed to relate more closely to "real life", in which "real-world" phenomena like absent-mindedness, everyday memory or reading gain importance. Nevertheless, the discussion remains whether such experiments really deliver new information about mental processes, and whether these everyday-phenomenon studies become broadly accepted greatly depends on the results that current experiments deliver.

Another issue concerning experimental setups in cognitive psychology is the way individual differences are handled. In general, the results of an experiment are evaluated by an analysis of variance, which means that effects due to individual differences are averaged out and not taken into further consideration. Such a procedure seems highly questionable, especially in the light of an investigation by Bowers in 1973, which showed that over 30% of the variance in such studies is due to individual differences or their interaction with the current situation (cf. ibid, p. 515). One challenge for future experimental cognitive psychology is therefore to analyse individual differences and find ways to include knowledge about them in general studies.
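How an analysis that only looks at the group mean can average out systematic individual differences is easy to demonstrate (the numbers below are invented purely for illustration):

```python
import numpy as np

# Hypothetical experiment: half of the participants show a +50 ms effect
# of some manipulation, the other half a -50 ms effect.  The grand mean
# then suggests "no effect" although every individual shows a large one.
rng = np.random.default_rng(1)
group_a = rng.normal(+50.0, 10.0, size=20)   # per-participant effects
group_b = rng.normal(-50.0, 10.0, size=20)
all_effects = np.concatenate([group_a, group_b])

print("mean effect:", round(all_effects.mean(), 1))            # near 0
print("mean |effect|:", round(np.abs(all_effects).mean(), 1))  # near 50
```

A group-level analysis that reported only the first number would conclude the manipulation does nothing, which is exactly the kind of loss of information the objection above describes.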

##### Cognitive Neuroscience

Another approach towards a better understanding of human cognition is cognitive neuroscience, which lies at the interface between traditional cognitive psychology and the brain sciences. Its approach is characterised by attempts to derive cognitive-level theories from various types of information, such as the computational properties of neural circuits, patterns of behavioural damage as a result of brain injury, or measurements of brain activity during the execution of cognitive tasks (cf. www.psy.cmu.edu). Cognitive neuroscience helps to understand how the human brain supports thought, perception, affect, action, social processes and other aspects of cognition and behaviour, including how such processes develop and change in the brain over time (cf. www.nsf.gov).

Cognitive neuroscience has emerged in the last decade as an intensely active and influential discipline, forged from interactions among the cognitive sciences, neurology, neuroimaging, physiology, neuroscience, psychiatry and other fields. New methods for non-invasive functional neuroimaging of subjects performing psychological tasks have been of particular importance for this discipline. Non-invasive functional neuroimaging includes positron emission tomography (PET), functional magnetic resonance imaging (fMRI), magnetoencephalography (MEG), optical imaging (near infra-red spectroscopy, NIRS), anatomical MRI and diffusion tensor imaging (DTI). The findings of cognitive neuroscience are directed towards enabling a basic scientific understanding of a broad range of issues involving the brain, cognition and behaviour (cf. www.nsf.gov).

Cognitive neuroscience has become a very important approach to understanding human cognition, since its results can clarify functional brain organisation, such as the operations performed by a particular brain area and the system of distributed, discrete neural areas supporting a specific cognitive representation. These findings can also reveal the effect of individual differences (including even genetic variation) on brain organisation (cf. www.psy.cmu.edu, www.nsf.gov). Another strength is that cognitive neuroscience provides ways to "obtain detailed information about the brain structures involved in different kinds of cognitive processing" (Eysenck & Keane, Cognitive Psychology, p. 521). Techniques such as MRI and CAT scans have proved of particular value when used on patients to discover which brain areas are damaged; before such non-invasive methods were developed, the localisation of "brain damage could only be established by post mortem examination" (ibid). Knowing which brain areas are related to which cognitive processes gives a clearer view of functional brain regions and hence, in the end, a better understanding of human cognitive processes. Cognitive neuroscience also serves as a tool to demonstrate the reality of theoretical distinctions. For example, many theorists have argued that implicit memory can be divided into perceptual and conceptual implicit memory; support for that view has come from PET studies showing that perceptual and conceptual priming tasks affect different areas of the brain (cf. ibid, pp. 521-522). However, cognitive neuroscience cannot stand alone and answer all questions about human cognition; it has limitations concerning data collection and data validity. In most neuroimaging studies, data is collected from several individuals and then averaged.
Some concern has arisen about such averaging because of the existence of significant individual differences. Raichle (1998) responded that while differences between individual brains should be appreciated, general organising principles emerge that transcend these differences; still, a broadly accepted solution to the problem has yet to be found (cf. ibid, p. 522).

##### Cognitive Neuropsychology

Cognitive neuropsychology maps the connection between brain functions and cognitive behaviour. Patients with brain damage have been the most important source of research in neuropsychology. Neuropsychology also examines dissociations, double dissociations and associations (connections between two things formed by cognition). It uses technological research methods to create images of the functioning brain. There are many different techniques for scanning the brain; the most common ones are EEG (electroencephalography), MRI and fMRI (functional magnetic resonance imaging) and PET (positron emission tomography).

Cognitive neuropsychology has become very popular since it delivers good evidence: theories developed for normal individuals can be tested against patients with brain damage, and new theories have been established on the basis of neuropsychological experiments. Nevertheless, certain limitations of the approach as it stands today cannot be ignored. First of all, people with the same mental disability often do not have the same lesion (cf. ibid, pp. 516-517). In such cases the researchers have to be careful with their interpretation: in general it can only be concluded that all the areas injured in these patients could play a role in the mental phenomenon, not which part really is decisive. Future experiments in this area therefore tend to use rather small numbers of people with very similar lesions, or to compare the results of groups with similar syndromes but different lesions. In addition, the situation often turns out to be the reverse: some patients have very similar lesions but show rather different behaviour (cf. ibid, p. 517). One probable reason is that the patients differ in age and lifestyle (cf. Banich, Neuropsychology, p. 55). With better technologies it will become easier to distinguish the cases in which personal differences really make the difference from those in which the lesions are not entirely equal; the individual brain structures which may cause the different reactions to lesions will also become a focus of research. Another problem for cognitive neuropsychology is that suitable patients are rare. The patients of interest for such research typically acquired their lesions in accidents or in war, and the lesions differ in their nature.
Often multiple brain regions are damaged, which makes it very hard to determine which of them is responsible for the examined phenomenon. The dependency on chance as to whether suitable patients are available will remain in the future, so predictions concerning this aspect of the research are not very reliable. Apart from that, it is not yet possible to localise some mental processes in the brain; creative thought and organisational planning are examples (cf. Eysenck & Keane, Cognitive Psychology, p. 517). A possible outcome of the research is that those activities rely on parallel processing, which would support the modification of the information processing theory discussed later on. But if it turns out that many mental processes depend on such parallel processing, it would be a big drawback for cognitive neuropsychology, whose core assumption is the modularisation of the brain and the corresponding phenomena. In this context the risks of overestimation and underestimation have to be mentioned. Underestimation occurs because cognitive neuropsychology often identifies only the most important brain region for a mental task; other regions related to it may be ignored, which could prove fundamental if parallel processing really is crucial to many mental activities. Overestimation occurs when fibres that only pass through the damaged brain region are lesioned too: the researcher then concludes that the respective brain region plays an important role in the phenomenon under analysis, even though the information merely passed through that region (cf. ibid). Modern technologies and experiments have to be developed here in order to provide valid and precise results.

#### Unifying Theories

A unified theory of cognitive science serves to bring together all the vantage points one can take toward the brain and mind. If a theory could be formed that incorporates all the discoveries of the disciplines mentioned above, a full understanding would be within reach.

##### ACT-R

ACT-R is a cognitive architecture, an acronym for Adaptive Control of Thought–Rational. It provides tools that enable us to model human cognition. It consists mainly of five components: perceptual-motor modules, declarative memory, procedural memory, chunks and buffers. The declarative memory stores facts in "knowledge units", the chunks. These are transmitted through the modules' respective buffers, each of which contains one chunk at a time. The procedural memory is the only module without a buffer of its own, but it is able to access the contents of the other buffers, for example those of the perceptual-motor modules, which are the interface with the (simulated) outer world. Production is accomplished by predefined rules, written in LISP. The main figure behind ACT-R is John R. Anderson, who attributes the inspiration to Allen Newell.
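ACT-R itself is implemented in LISP; as a rough, hypothetical sketch of the underlying idea, the following Python fragment shows chunks in a declarative memory, a goal buffer, and two production rules that together count from one number to another. The names and structure are simplifications for illustration, not the real ACT-R API:

```python
# Declarative memory: chunks encoding counting facts ("after 1 comes 2", ...).
declarative = {("count-order", i): i + 1 for i in range(1, 9)}

def count_from_to(start, goal):
    # The goal buffer holds the current state of the task as one chunk.
    buffer = {"current": start, "goal": goal, "said": []}
    while True:
        # Production 1: if the goal number is reached, say it and stop.
        if buffer["current"] == buffer["goal"]:
            buffer["said"].append(buffer["current"])
            return buffer["said"]
        # Production 2: say the current number and retrieve the successor
        # chunk from declarative memory into the buffer.
        buffer["said"].append(buffer["current"])
        buffer["current"] = declarative[("count-order", buffer["current"])]

print(count_from_to(2, 5))   # [2, 3, 4, 5]
```

The essential point is the division of labour: facts live in declarative memory, the current task state lives in a buffer, and condition-action rules (productions) fire whenever their conditions match the buffer contents.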

##### SOAR

SOAR is another cognitive architecture, an acronym for State, Operator And Result. It enables one to model complex human capabilities, with the goal of creating an agent with human-like behaviour. Its working principles are the following: problem solving is a search in a problem space; permanent knowledge is represented by production rules in the production memory; temporary knowledge is represented by objects in the working memory; new goals are created only if a dead end is reached; and the learning mechanism is chunking. Chunking works as follows: if SOAR encounters an impasse and is unable to resolve it with the usual technique, it uses "weaker" strategies to circumvent the dead end. If one of these attempts leads to success, the respective route is saved as a new rule, a chunk, preventing the impasse from occurring again. SOAR was created by John Laird, Allen Newell and Paul Rosenbloom.

#### Neural Networks

There are two types of neural networks: biological and artificial.

A biological neural network consists of neurons which are physically or functionally connected with each other. Since each neuron can connect to multiple other neurons, the number of possible connections is enormous. The connections between neurons are called synapses. Signalling along these synapses happens via electrical signalling or via chemical signalling, which in turn induces electrical signals; the chemical signalling works by means of various neurotransmitters.

Artificial neural networks are divided according to their goals: one goal is artificial intelligence, the other cognitive modelling. Cognitive-modelling networks try to simulate biological neural networks, for example the brain, in order to gain a better understanding of them. Until now the complexity of the brain and similar structures has prevented a complete model from being devised, so cognitive modelling focuses on smaller parts like specific brain regions. Networks in artificial intelligence are instead used to solve distinct problems. But though their goals differ, the methods applied are very similar. An artificial neural network consists of artificial neurons (nodes) which are connected by mathematical functions; these functions can be functions of other functions, which in turn can be functions of yet other functions, and so on. The actual work is done by following the connections according to their weights. Weights are properties of the connections defining how strongly one node influences the nodes it is connected to; they can be changed by the program, thus optimising the main function. Hereby it is possible to solve problems for which it is impossible to write a function "by hand".
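As a minimal, self-contained illustration of weighted connections being adjusted by a program, the sketch below trains a single artificial neuron (a perceptron) rather than a full network; it learns the logical OR function from examples:

```python
import numpy as np

# Training data: the four input patterns and the OR of each pair.
inputs = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
targets = np.array([0, 1, 1, 1])

weights = np.zeros(2)   # connection weights, adjusted during learning
bias = 0.0
rate = 0.1              # learning rate

for _ in range(20):     # repeated passes over the examples
    for x, t in zip(inputs, targets):
        output = 1 if x @ weights + bias > 0 else 0
        error = t - output
        weights += rate * error * x   # strengthen/weaken connections
        bias += rate * error

predictions = [1 if x @ weights + bias > 0 else 0 for x in inputs]
print(predictions)   # [0, 1, 1, 1]
```

No one wrote the OR function "by hand" here: it emerged from repeatedly nudging the weights in the direction that reduces the error, which is the core idea behind all artificial neural networks.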

## Future Research

#### Brain imaging/activity measuring

As described above, the brain imaging methods have complementary disadvantages: fMRI has a low temporal resolution, while EEG has a low spatial resolution. An interdisciplinary attempt is to combine both methods in order to reach both a high spatial and a high temporal resolution. This technique (simultaneous EEG measurement in the fMRI scanner) is used, for instance, in studying children with extratemporal epilepsy, where it is important to assign the temporal progress of an epileptic seizure to the region in which it has its roots. A conference in Munich in December 2006 discussed another use of this combination of methods: the study of Alzheimer's disease. It could become possible to recognise this disease very early, which could lead to new therapies that reduce the speed and the amount of cell death. Brain imaging methods are not only useful in medical contexts; other disciplines could benefit from them and derive new conclusions. For instance, brain imaging methods are interesting for social psychologists: experiments with psychopathic personalities are only one possibility for exploring human behaviour. For literary scholars there could be a possibility to study stylistic devices and their effect on humans while reading a poem. Another attempt in future research is to synchronise the direction of gaze with the stimulus that triggered the change of direction; this complex project needs data from eye-tracking experiments as well as from fMRI studies.

#### Making unifying theories more unifying

Since the mind is a single system, it should be possible to explain it as such without having to take a different perspective for every approach (neurological, psychological, computational). Having such a theory would enable us to understand our brain far more thoroughly than we do now, and might eventually lead to everyday applications. But until now there is no working unified theory of cognition that fulfils the requirements stated by Allen Newell in his book Unified Theories of Cognition. According to Newell, a unified theory has to explain how intelligent organisms respond flexibly to the environment; how they exhibit goal-directed behaviour and choose goals rationally (also in response to interrupts; see the previous point); how they use symbols; and how they learn from experience. Even Newell's own implementation, SOAR, does not reach these goals.

### Promising experiments

Below are the abstracts of a few recent findings.

>>Unintentional language switching. Kho, K.H., Duffau, H., Gatignol, P., Leijten, F.S.S., Ramsey, N.F., van Rijen, P.C. & Rutten, G-J.M. (2007), Utrecht. Abstract:

We present two bilingual patients without language disorders in whom involuntary language switching was induced. The first patient switched from Dutch to English during a left-sided amobarbital Wada-test. Functional magnetic resonance imaging yielded a predominantly left-sided language distribution similar for both languages. The second patient switched from French to Chinese during intraoperative electrocortical stimulation of the left inferior frontal gyrus. We conclude that the observed language switching in both cases was not likely the result of a selective inhibition of one language, but the result of a temporary disruption of brain areas that are involved in language switching. These data complement the few lesion studies on (involuntary or unintentional) language switching, and add to the functional neuroimaging studies of switching, monitoring, and controlling the language in use.

>>Bilateral eye movements and memory. Parker, A. & Dagnall, N. (2007), Manchester Metropolitan University. One hundred and two participants listened to 150 words, organised into ten themes (e.g. types of vehicle), read by a male voice. Next, 34 of these participants moved their eyes left and right in time with a horizontal target for thirty seconds (saccadic eye movements); 34 participants moved their eyes up and down in time with a vertical target; the remaining participants stared straight ahead, focussed on a stationary target. After the eye movements, all the participants listened to a mixture of words: 40 they'd heard before, 40 completely unrelated new words, and 10 words that were new but which matched one of the original themes. In each case the participants had to say which words they'd heard before, and which were new. The participants who'd performed sideways eye movements performed better in all respects than the others: they correctly recognised more of the old words as old, and more of the new words as new. Crucially, they were fooled less often by the new words whose meaning matched one of the original themes; that is, they correctly recognised more of them as new. This is important because mistakenly identifying one of these 'lures' as an old word is taken as a laboratory measure of false memory. The performance of the participants who moved their eyes vertically, or who stared ahead, did not differ from each other. Episodic memory improvement induced by bilateral eye movements is hypothesised to reflect enhanced interhemispheric interaction, which is associated with superior episodic memory (S. D. Christman & R. E. Propper, 2001). Implications for the neuropsychological mechanisms underlying eye movement desensitisation and reprocessing (F. Shapiro, 1989, 2001), a therapeutic technique for post-traumatic stress disorder, are discussed.

>>Is the job satisfaction–job performance relationship spurious? A meta-analytic examination. Nathan A. Bowling (Department of Psychology, Wright State University). Abstract:

The job satisfaction–job performance relationship has attracted much attention throughout the history of industrial and organizational psychology. Many researchers and most lay people believe that a causal relationship exists between satisfaction and performance. In the current study, however, analyses using meta-analytic data suggested that the satisfaction–performance relationship is largely spurious. More specifically, the satisfaction–performance relationship was partially eliminated after controlling for either general personality traits (e.g., Five Factor Model traits and core self-evaluations) or for work locus of control and was almost completely eliminated after controlling for organization-based self-esteem. The practical and theoretical implications of these findings are discussed.

>>Mirror-touch synesthesia is linked with empathy

Michael J Banissy & Jamie Ward (Department of Psychology, University College London)

Abstract: Watching another person being touched activates a similar neural circuit to actual touch and, for some people with 'mirror-touch' synesthesia, can produce a felt tactile sensation on their own body. In this study, we provide evidence for the existence of this type of synesthesia and show that it correlates with heightened empathic ability. This is consistent with the notion that we empathize with others through a process of simulation.

### Discussion points

Where are the limitations of research? Can we rely on our intuitive idea of our mind? What impact could a complete understanding of the brain have on everyday life?

#### Brain activity as a false friend

In several experiments the outcome is ambiguous, which hinders a direct interpretation of the data. In experiments with psychopathic personalities, researchers had to weaken their thesis that persons with missing activity in the frontal lobe are predetermined to become violent psychopaths or unethical murderers. Missing activity in the frontal lobe leads to a deregulation of the threshold for emotional, impulsive or violent actions. But this can also be an advantage, for example for fire fighters or policemen, who have to withstand strong pressure and who need a higher threshold. So missing activity is not a sufficient criterion for a psychopathic personality.

## Conclusion

Today's work in the field of cognitive psychology gives several hints at how future work in this area may look. In practical applications, improvements will probably be driven mainly by the limitations one faces today; here in particular the newer subfields of cognitive psychology will develop quickly. What such changes look like depends heavily on the character of future developments in technology: improvements in cognitive neuropsychology and cognitive neuroscience especially depend on advances in imaging techniques. The theoretical framework of the field will be influenced by such developments as well; the parallel processing theory may yet be modified according to new insights from computer science. Thereby, or eventually through the acceptance of one of the already existing overarching theories, the theoretical basis of current research could be reunified. Whether it takes another thirty years to fulfil Newell's dream of such a theory, or whether it will happen rather quickly, remains open. As a rather young science, cognitive psychology is still subject to elementary changes, and all its practical and theoretical domains are steadily being modified. Whether the trends mentioned in this chapter are dead ends or will cause a revolution of the field is hard to predict.

## References

Anderson, John R. & Lebiere, Christian (1998). The Atomic Components of Thought. Lawrence Erlbaum Associates.
Banich, Marie T. (1997). Neuropsychology: The Neural Bases of Mental Function. Houghton Mifflin Company.
Goldstein, E. Bruce (2004). Cognitive Psychology. Wadsworth.
Lyon, G. Reid & Rumsey, Judith M. (1996). Neuroimaging: A Window to the Neurological Foundations of Learning and Behaviour in Children. Baltimore.
Eysenck, M. W. & Keane, M. T. (2000). Cognitive Psychology: A Student's Handbook. Psychology Press Ltd.
Thagard, Paul (2004). Cognitive Science. In Edward N. Zalta (ed.), The Stanford Encyclopedia of Philosophy.