Précis of epistemology/Instinct, learning and memory

What is learning?

An agent has a skill, or know-how, when it is able to adapt to its environment to achieve its ends. Know-how is intelligent behavior: the ability to behave intelligently.

Know-how is instinctive when it is common to all individuals of the same species and is part of their phylogenetic traits, i.e. it is transmitted by a common biological heredity (Lorenz 1981, Tinbergen 1951). Such skills appear naturally during the normal development of individuals of the species. They are innate knowledge, even if they only appear long after birth.

For knowledge to be learned, its acquisition must pass through the memorization of experiences. For animals to be able to learn, their nervous systems must be able to retain traces of what they have experienced. This criterion is not sufficient to distinguish the learned from the instinctive, because virtually all instinctive behaviors appear as a result of a period of cerebral maturation, during which experience determines the constitution of the neuronal circuits. The regulation of the heartbeat, for example, is instinctive, but the experience of the first beats is crucial for the subsequent development of the neural networks which will regulate it. In general, the development of the nervous system is epigenetic, i.e. it is determined not only by genes but also, and especially, by experience. In particular, synapses can be modified by the signals they transmit. In this way, an experience stimulating a network can be decisive for its further development. Just as one becomes a blacksmith by forging, it is by living that we learn to live.

To understand the difference between the innate and the acquired, one must consider differences in behavior. Sometimes they have a genetic explanation, because there are small genetic differences between individuals of the same species. But most often differences in behavior are caused only, or above all, by differences in experience. We then say that they are acquired, or learned. A behavior is learned when its peculiarities depend on the peculiarities of previous experience and not on a genetic inheritance. For us the learned behaviors are the most important, because our natural faculties and particular talents are nothing if we do not learn to develop them.

The instinct to learn

The animal faculties of learning are themselves of instinctive origin. The ability to learn is a skill and for learning to take place there must be a prior instinctive ability to learn. We can learn how to learn and thus acquire a specific ability to learn, but we could not learn anything if we were not naturally able to learn. This instinct to learn is based on the capacity of the nervous systems to take advantage of their experience to guide their development.

Neural plasticity

Memorization requires a plastic material which is able to preserve traces of its experience (plastic is opposed to elastic: an elastic material does not retain traces of the deformations it undergoes). It seems that the plasticity of neurons is mainly that of their synapses. The experience of signal transmission can strengthen or weaken a synapse (Kandel 1999). It can also lead to the formation of new neighboring synapses which connect the same neurons. In this way the experience of neurons modifies their connectivity. New networks can be formed and new functions may appear. At the same time many neurons disappear, presumably because they have not proved their usefulness, because their synapses have not been reinforced by experience.

Donald Hebb proposed a simple rule which explains many forms of neuronal learning: two connected neurons reinforce their connection when they are excited together. It is a kind of reinforcement by success: when a neuron A transmits an excitation signal to another neuron B, it is not certain to succeed. The excitation of A by itself is not necessarily sufficient to trigger the excitation of B. Often several excitation signals from neurons other than A are required for B to be excited. The Hebb rule states that a synapse of an excitatory neuron is rewarded by success: it is reinforced when the targeted neuron is actually excited.
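The Hebb rule can be illustrated with a minimal sketch, in Python, of a single synapse between a neuron A and a neuron B. The numerical values (initial weight, learning rate, threshold, other inputs) are invented for the example; the point is only that the A→B weight is increased when, and only when, B is actually excited together with A.

```python
# A minimal sketch of the Hebb rule for one synapse, with toy values.
# The weight of the A -> B synapse is reinforced only when A fires and
# B is actually excited at the same time.

learning_rate = 0.1

def hebbian_update(w, a_active, b_active):
    """Return the new synaptic weight after one trial."""
    if a_active and b_active:        # joint excitation: the synapse is "rewarded"
        w += learning_rate
    return w

def neuron_b_fires(inputs, threshold=1.0):
    """B fires only if enough excitatory input arrives at the same time."""
    return sum(inputs) >= threshold

w_ab = 0.4                            # initial strength of the A -> B synapse
other_inputs = [0.3, 0.5]             # signals from neurons other than A

a_active = True
b_active = neuron_b_fires([w_ab] + other_inputs)   # 1.2 >= 1.0, so B is excited
w_ab = hebbian_update(w_ab, a_active, b_active)
print(w_ab)   # 0.5: the synapse was reinforced because B was really excited
```

If A had fired without B reaching its threshold, the weight would have stayed at 0.4: success, not mere activity of A, is what is rewarded.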

The development of instincts

The existence of a skill requires a functional neural network which is able to make use of perception signals in order to give appropriate action signals. An instinctive skill is not learned, but it is nevertheless acquired, in the sense that it appears during the natural development of the individual. How can genes control the development of a functioning neural network?

The mystery of the genetic control of the development of the organism and its nervous system is partially elucidated: genes control the metabolism (the synthesis and degradation of the molecules of the organism) through the synthesis of RNA and proteins. Cell differentiation depends on the activation of particular genes which synthesize proteins specific to the cell type. Genes control cell differentiation by controlling the synthesis of RNAs or proteins which activate or inhibit genes. The properties of cells and their interactions depend on their cell type. Genes can thus control the proliferation, differentiation and migration of all the cells of the organism during its development (Wolpert, Tickle & Martinez 2015). For nerve cells, they can also determine the migration of axon terminals and thus build networks of neurons. But they control only the overall plan of the system. The fine structure of connections between neurons is epigenetic: it depends on experience. Here too, genes can exert an influence on development, because the plasticity of synapses, the way in which they respond to the various signals they receive, may vary depending on the cell type.

Procedural memory

Procedural memory is the memory of learned skills. Learning a skill consists in building a functional neural network. As long as the network is preserved and remains functional, the skill is retained. Procedural memory is therefore the conservation of functional neural networks built by learning.

A neural model for episodic memory: the convergence-divergence zones

Episodic memory is the memory of recollections, the memory of memories. When we remember an experience, we simulate it in imagination. How can a neural network accomplish such a feat: record an experience, preserve it, and reproduce it in imagination?

A convergence-divergence zone (CDZ) is a neural network which receives convergent projections from the sites whose activity is to be memorized, and which returns divergent projections to these same sites (Damasio 1989, 2009). When an experience is memorized, the signals converging on the CDZ excite neurons which then reinforce their reciprocal connections, following the Hebb rule, and thus form a self-excitatory network. The excitation of this network is then enough to reproduce the combination of signals initially received. In a self-excitatory network the excitation of one part spreads to all the others. In the same way, a fragment of a memory is enough to awaken the entirety of a memorized experience (Proust 1927). A CDZ can thus be a place of recording and reproduction of memories.
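The idea of a self-excitatory network completing a fragment can be made concrete with a toy auto-associative network of the Hopfield type. This is not Damasio's model, only an illustrative sketch under simplifying assumptions (one stored pattern, binary +1/−1 activity): Hebbian learning stores a pattern of co-active sites, and a fragment of that pattern is later enough to recover the whole of it.

```python
import numpy as np

# Toy auto-associative network in the spirit of a CDZ (illustrative only).
# A pattern of co-active sites is stored by Hebbian learning, and a fragment
# of the pattern is enough to reawaken the whole (pattern completion).

pattern = np.array([1, 1, -1, 1, -1, -1, 1, -1])   # +1 = active site, -1 = silent

# Hebbian storage: connections between co-active sites are reinforced.
weights = np.outer(pattern, pattern).astype(float)
np.fill_diagonal(weights, 0.0)        # no self-connections

def recall(cue, steps=5):
    """Let excitation spread through the network until it settles."""
    state = cue.copy().astype(float)
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state.astype(int)

# A fragment of the memory: only the first three sites are given, the rest unknown.
fragment = np.array([1, 1, -1, 0, 0, 0, 0, 0])
print(recall(fragment))   # [ 1  1 -1  1 -1 -1  1 -1]: the full stored pattern
```

The excitation of a part spreads to the whole: that is the pattern-completion property the CDZ model appeals to.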

In addition to convergent-divergent pathways, a CDZ can be connected to the rest of the brain in any conceivable way, by input signals which activate or inhibit it, and output signals with which it affects the rest of the system. In particular, the CDZs can be organized into a tree structure. A CDZ can recruit convergent channels from many other CDZs. It can thus synthesize the detection and production capacities of all the CDZs thus recruited.

To model the system of CDZs, one distinguishes in the nervous system a peripheral part and a central part. The periphery brings together the regions dedicated to perception, emotion and action. The tree of CDZs is organized hierarchically, from the periphery to the center. The most peripheral CDZs have convergent paths coming directly from the periphery. We get closer to the center by going up the tree of CDZs. One can think of roots which plunge into the earth, the periphery, and which approach the base of the trunk, the center. But in the brain there are many centers. The most central CDZs have convergent channels from other CDZs and are not recruited by more central CDZs. The memory of an episode of our life could be retained by such a central CDZ. When we revive the perceptions, emotions and actions of a past experience, the excitation of this central CDZ would activate all the subordinate CDZs, down to the peripheral areas, and thus simulate the previous experience.
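A very rough way to picture this hierarchy is a tree in which reactivating a central node re-excites every node it recruits, down to the peripheral leaves. The sketch below is purely illustrative: the zone names and the tree itself are invented, and real CDZ connectivity is of course far richer.

```python
# Illustrative sketch of a hierarchy of CDZs: reactivating a central zone
# re-excites all the zones it recruits, down to the peripheral regions of
# perception, emotion and action. All names are invented for the example.

cdz_tree = {
    "episode":      ["visual_scene", "emotion", "action"],   # a central CDZ
    "visual_scene": ["shapes", "colors"],                     # intermediate CDZs
    "emotion":      ["bodily_feeling"],
    "action":       ["motor_sequence"],
    # leaves (peripheral areas) recruit no further CDZs
}

def reactivate(zone, tree):
    """Return every region re-excited when a CDZ is reactivated."""
    activated = [zone]
    for child in tree.get(zone, []):      # follow the divergent projections
        activated.extend(reactivate(child, tree))
    return activated

print(reactivate("episode", cdz_tree))
# ['episode', 'visual_scene', 'shapes', 'colors', 'emotion',
#  'bodily_feeling', 'action', 'motor_sequence']
```

Reviving the memory of an episode would amount, in this picture, to exciting the root and letting the excitation descend to the peripheral areas, thereby simulating the past experience.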

Learning to perceive

Perception is obviously necessary to act on the present. But its effect does not stop with actions on the perceived environment, because we learn continuously from what we perceive or imagine. Every experience, real or imaginary, can change our ways of perceiving and imagining.

Neural networks dedicated to low-level perception, close to the sensory organs, are probably no longer modifiable by experience once they have finished their initial maturation period. Once they are functional, they no longer need to be changed, or only a little, because they have become necessary for performing higher-level functions. If one changes a low-level network, one risks disrupting all the higher-level networks which use it.

Internal agitations sometimes resemble the movements of a fluid, as if there were pressure forces which urge us to perceive or to imagine. To explain how our experiences transform us, we can then consider the way a river digs its bed, the modeling of dunes by the wind, and more generally the ways in which air, water, or any other fluid can modify the solids along which it flows. Nerve impulses are like fluid currents; neural networks are like channels in which they flow and which they can dig, widen or clog. Of course this is just an analogy. Nerve impulses are electrical currents in neurons and through their membranes. They "dig their beds" in networks primarily by acting on their synapses.

This fluid model of memory, in which nerve impulses can permanently alter the channels in which they flow, cannot be enough to explain how we are transformed by our experiences, because it gives too much importance to forgetting. Each new experience could erase the traces left by the old ones. Memories would be like traces in the sand of a beach swept by the waves.

Our memory often works in a cumulative way. Memories, skills and all memorized information are acquired and preserved independently of each other. In general, newly memorized items do not erase the oldest ones. How brains develop such memorization skills is quite mysterious. The CDZs, which require at least the constitution of a new network, with previously unused neurons, for each new memorized item, are probably part of the explanation, but only a part.

We learn to perceive and imagine by learning to make silent inferences from the information provided by the senses. When a silent inference is memorized, a combination of a condition and a consequence is retained. To do this, it is sufficient in principle to maintain an excitatory connection between the network representing the condition and the one representing the consequence. Since our faculties of inference develop cumulatively, we must assume that our brains know how to build such links without altering the old ones, and that they have a memory which sometimes resembles that of computers, where the links between conditions (the addresses in memory) and consequences (the contents kept at these addresses) are learned in a cumulative way.
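The comparison with computer memory can be made concrete with a small sketch of a cumulative associative store: each condition is an address, each consequence the content kept at that address, and learning a new link does not erase the old ones. The example conditions and consequences are of course invented.

```python
# Illustrative sketch of cumulative memory for silent inferences:
# each learned link from a condition to a consequence is stored at its own
# "address", so new links do not erase old ones. Examples are invented.

silent_inferences = {}   # condition -> consequence

def learn(condition, consequence):
    """Memorize a new condition-consequence link cumulatively."""
    silent_inferences[condition] = consequence

def infer(condition):
    """Silently infer the consequence of a perceived condition, if one is known."""
    return silent_inferences.get(condition)

learn("dark clouds", "rain is coming")
learn("ate spoiled food", "discomfort will follow")

print(infer("dark clouds"))        # rain is coming
print(infer("ate spoiled food"))   # discomfort will follow
print(infer("clear sky"))          # None: no inference has been learned
```

The sketch leaves entirely open the hard question discussed next: how the brain decides which condition-consequence pairs deserve to be stored at all.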

A lived experience always brings together many elements, in a way which can sometimes seem very disorderly. For the inference from a condition to a consequence to be legitimate, it is not sufficient that they have been united in a single experience, because their association might be fortuitous. How do we recognize legitimate inferences, those which truly increase our knowledge? For example, many animals know how to identify the cause of their discomfort after they have ingested bad food. That they avoid eating it again shows that they have correctly identified the source of their suffering. But how do they do it? Many other perceptions preceded their discomfort. Why do they select the food as the cause and not the other perceptions which were also part of the same experience?

Perception does not stop with sensation. It builds models of reality which go beyond the knowledge provided directly by the senses and which guide the identification of relations of condition to consequence. For example, we recognize solid objects and spontaneously attribute to them qualities of permanence. We know that they do not disappear and that their form remains unchanged, as long as there is no cause capable of making them disappear or deforming them. This knowledge of solidity is an inexhaustible source of silent inferences, with which we know the future, the present which is not perceived by the senses, and the past which we have not experienced. In general, we naturally know how to perceive qualities of permanence, causal relations, or other qualities and relations which lead to legitimate inferences. We naturally know how to identify causes and effects, we know how to recognize what is acting and what is undergoing, we perceive traces and warning signs... Such faculties of perception, combined with episodic memory, enable us to develop our deductive imagination.

We instinctively know how to perceive causality, or other qualities and relations which lead to legitimate inferences, only in simple cases, such as solidity, action by contact, or food as a cause of discomfort. In general, the correct identification of legitimate inferences is a very difficult problem which our instinctive knowledge is not able to solve on its own. In fact, we are naturally inclined to perceive causal relations where there are none. All forms of superstition and wild speculation show that our natural faculties of perception of causality are of very limited reliability.