10.3. Molecular basis of neuronal plasticity

Piotr Wozniak, 1990

This text was taken from P.A.Wozniak, Optimization of learning, Master's Thesis, University of Technology in Poznan, 1990 and adapted for publishing as an independent article on the web. (P.A.Wozniak, June 4, 1998).

For more up-to-date texts on the same subject see:

Here are some notes on terminology and facts that are no longer used or valid:

  1. the terms deterministic and stochastic learning correspond with well-documented learning paradigms: declarative and procedural learning respectively
  2. the concept of E-Factor has been replaced with A-Factor in Algorithm-SM8 used in SuperMemo 8
  3. retrograde messengers were not discovered until 1991

Various mechanisms have been proposed to account for the formation of long-term memory. They include increased release of the synaptic transmitter, an increased number of synaptic receptors, a decreased Km of receptors, synthesis of new memory factors in either the presynaptic or the postsynaptic element, sprouting of new synaptic connections, an increase of the active area in the presynaptic membrane, and many others. Let us first consider sprouting, the only one of the mentioned mechanisms that involves changes in the structure of the neuronal circuitry. That neuronal sprouting indeed takes place is best illustrated by a series of experiments by Rosenzweig [Rosenzweig, 1984]. He reared rats in enriched and impoverished environments, studying the impact of the increased complexity of behavioral activities on the development of the brain. In individuals reared in the enriched environment he observed increased thickness of the cerebral cortex, better capillary supply, more glial cells, increased content of proteins, increased activity of acetylcholinesterase, more dendritic spines, increased dendritic branching and greater contact areas of synapses. This experiment provides excellent proof that the brain, very much like the muscles of a body builder, may undergo training which, apart from engraving new memories, has a general trophic effect on the nervous tissue. Nonetheless, studies of the brains of geniuses did not reveal an increased number of nervous connections [Restak, 1984]. All in all, it seems unlikely that sprouting is indeed responsible for engraving new, specific memory traces. Let us notice that a deficit of visual stimulation in kittens may result in irreversible underdevelopment of neural structures of the visual cortex [Siekievitz, 1988]. Similar deficits were not confirmed in adult cats. This indicates that sprouting is rather specific to young individuals and therefore cannot account for memories.
Furthermore, even if the above reasoning is not correct, it is easier to conceive a chemical mechanism of memory that capitalizes on existing interneuronal connections than a mechanism requiring fast and specific growth of appendices in a desired direction. The book edited by Gispen [1976] also advocates the chemical nature of memory, and presents an extensive list of hypothetical phenomena that might participate in short- and long-term memory. In the following paragraphs I will assume that sprouting does not provide the basis for specific memories. As was shown earlier, heterosynaptic facilitation seems to be a better candidate to account for deterministic learning at the level of the synapse. In deterministic learning, during the initial facilitation of a questioning synapse (see the terminology introduced earlier), both the pre- and postsynaptic membranes must be depolarized. However, the depolarization of the postsynaptic membrane is caused not by the questioning but by the template synapse. Because the changes in the postsynaptic element must be local (i.e. they cannot affect conductivity at other synaptic junctions of the same neuron), they have to be somehow related to the membrane. Thus the postsynaptic membrane seems to be the most suitable candidate for changes coding for both short- and long-term memories. Another problem in studying the nature of molecular memory is the great number of neurotransmitters that could possibly be involved in memory. Glutamate, acetylcholine, opiates, vasopressin, dopamine and catecholamines are often mentioned, although the available data are full of contradictions. Ketty (1976) argues that many neurotransmitters are involved in memory formation because the central nervous system, in the course of evolution, would snatch every opportunity to incorporate new mechanisms to enrich its flexibility.
However, he does not elaborate on what sort of advantages diverse neurohormones could bring to the brain, while at least one disadvantage is obvious: the energetic expense of maintaining a diversified metabolic machinery. To me, only the existence of state-dependent memory suggests a plausible answer to the puzzle of the multiple neurotransmitters implicated in memory. The phenomenon of state-dependent memory consists in better recall of memories when the brain is in the same drug-induced or hormonal state as at the moment of learning. Perhaps individuals that were able to engrave memories closely related to their emotional and hormonal state at the moment of conditioning had a survival advantage by not recalling them in irrelevant situations. For example, the crack of a wooden stick would not spark an escape response unless the nervous system were in the state of hormonal alertness caused by previously noticed, disturbing indications of the presence of a predator. The influence of alertness on recall is a well-known phenomenon and is partially related to an increased level of catecholamines. Different neurotransmitters might be sensitive to different hormonal modulation and provide memory stores for different emotional or hormonal states. However, as far as the intellectual activity of the human brain is concerned, less and less space seems to be left for a diversified set of neuronal transmitters, because this sort of activity evolved relatively late and was much less related to the struggle for survival in the classical sense. Therefore I believe that, despite the multitude of mechanisms of neural transmission, deterministic learning is based on a limited number of neurotransmitters, or perhaps even on a single one. By far the most often mentioned neurotransmitters involved in learning are glutamate and acetylcholine. A high intake of glutamate in food was reported to improve memory.
Glutamate is also known to be the primary excitatory neurotransmitter in the hippocampus. It was therefore, for a time, a subject of special interest, which led to the formulation of one of the most consistent molecular models of memory [Lynch, 1984]. This model is presented in detail later in the chapter. Regarding acetylcholine, it was observed that drugs affecting the action of this transmitter affect memory. Drugs blocking the muscarinic receptor (e.g. scopolamine) cause significant memory impairment. On the other hand, arecoline (a muscarinic agonist) was reported to allow subjects to learn lists of words faster. Similarly, physostigmine, which inhibits acetylcholinesterase, was found to improve learning. Finally, caffeine, which increases the number of acetylcholine receptors in synaptic connections, also has a positive influence on learning. A positive correlation was also found between the ability of rats to learn and the level of acetylcholinesterase in their brains. It is also known that Alzheimer's disease entails the loss of acetylcholine-secreting cells, especially in the hippocampus and neocortex (temporal and frontal lobes). The study of inhibitors and stimulants of memory contributes significantly to the understanding of the nature of memory. This truth is nicely illustrated by the evidence provided above for the involvement of acetylcholine in memory. Below I list the most prominent substances reported to affect memory. Inhibitors: colchicine (perhaps evidence for sprouting), ouabain, marijuana, inhibitors of protein synthesis and alcohol. Stimulants: morphine, Met-enkephaline, strychnine (in low doses), ACTH, MSH, catecholamines, vasopressin, orotic acid (in food!), caffeine, amphetamine and pentylenetetrazole (Metrazole - powerful influence). The emerging picture is further complicated by the fact that particular factors influence various forms of learning differently, at various stages and in various doses.

Concerning the way in which memory traces are encoded chemically, phosphorylation of proteins and an increase in the number of receptors are most often mentioned. Studies of Aplysia californica revealed that phosphorylation changes the state of potassium channels, decreasing the conductivity of membranes for potassium (see Kandel's model below). Decreased permeability for potassium was also involved in the Gibbs model mentioned earlier. Other research indicates that activation of muscarinic receptors may result in the closing of potassium channels (this would once again involve acetylcholine). All this evidence indicates that phosphorylation of potassium channels is an interesting candidate for a memory factor. Eric Kandel and his colleagues studied sensitization of the gill-withdrawal response in Aplysia californica and came up with an interesting model of presynaptic facilitation [Kandel, 1982]. Although deterministic learning involves heterosynaptic rather than presynaptic facilitation, Kandel's model may be considered as illustrating more universal mechanisms that may possibly be involved in human memory as well. In the model, the presynaptic terminal undergoes facilitation as a result of stimulation of the axoaxonic synapse. Interestingly, simultaneous stimulation of the facilitated synapse is not necessary for the process to occur. In Kandel's model, the serotonin discharged in the axoaxonic synapse activates adenyl cyclase in the presynaptic element. The resulting increase in the level of cAMP activates protein kinases that phosphorylate potassium channels. The phosphorylation decreases the efflux of potassium during depolarization and slows down the return of the membrane potential to the baseline. This allows for an increased entry of calcium into the cell, since voltage-sensitive calcium channels remain open for a longer time. The increased influx of calcium is responsible for an increased discharge of the neurotransmitter, hence the facilitation.

Another interesting model was proposed by Lynch and Baudry [Lynch, 1984]. They studied glutamate receptors in the hippocampus and based their model, later called the calpain model, on the following observations:

  1. calcium increases the number of glutamate receptors in the postsynaptic membranes,
  2. inhibitors of proteinases block the effect of calcium,
  3. calpain is a calcium-activated proteinase occurring in the hippocampus,
  4. calcium effect is not caused by the synthesis of glutamate receptors de novo,
  5. fodrin is a protein that lines neural membranes, esp. in the postsynaptic region.

The calpain model assumes that calcium entering the terminal button activates calpain, which in turn degrades fodrin, resulting in the exposure of new glutamate receptors. Thus successive excitations of the synapse will cause a greater and greater increase in the number of glutamate receptors in the postsynaptic membrane.
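As a toy illustration of this loop — calcium activates calpain, calpain degrades fodrin, and degraded fodrin exposes additional glutamate receptors — one might sketch the following. All quantities, rates and the linear proportionality are invented for the sketch; they are not taken from Lynch and Baudry:

```python
# Toy sketch of the calpain model's loop: each excitation admits calcium,
# calcium-activated calpain degrades part of the fodrin lining, and the
# degraded fodrin exposes additional glutamate receptors.  All numbers
# and the proportionality assumption are invented for illustration.

def excite(receptors: float, fodrin: float, gain: float = 0.1):
    """One excitation of the synapse: returns (receptors, fodrin) after it."""
    calcium = gain * receptors           # assumed: influx scales with receptors
    degraded = min(fodrin, calcium)      # calpain degrades that much fodrin
    return receptors + degraded, fodrin - degraded

receptors, fodrin = 10.0, 100.0
for _ in range(5):
    receptors, fodrin = excite(receptors, fodrin)
# successive excitations expose more and more receptors,
# until the fodrin lining runs out
```

The sketch only reproduces the qualitative point of the paragraph above: exposure compounds from excitation to excitation as long as fodrin remains available.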

The Kandel and calpain models share yet another element worth noticing: an increased inflow of calcium ions into the cell causes an increased release of the neurotransmitter or an increased exposure of postsynaptic receptors. I will use the presented observations in the construction of a model of molecular memory that is consistent with the SuperMemo theory.

Interim summary

  1. Sprouting of new neuronal appendices is involved in nonspecific development of nervous tissue, but does not seem to be involved in formation of specific memories,
  2. Postsynaptic membrane in a heterosynaptic configuration seems to be the best location for molecular changes responsible for formation of memory in deterministic learning,
  3. Only a few, or perhaps just one, neurotransmitters are likely to be involved in deterministic learning in humans. Glutamate and acetylcholine are the most prominent candidates,
  4. Phosphorylation of proteins, and phosphorylation of potassium channels in particular, as well as increase of the number of receptors in the postsynaptic membrane are very promising candidates for molecular factors involved in formation of memory.
  5. Increased influx of calcium into the cell is likely to be a link in the chain of events leading to facilitation.

10.4. Molecular model of memory comprising elements of the SuperMemo theory

10.4.1. Biological interpretation of E-Factors

Let us recall one of the elements of the SuperMemo theory which states that the relation between subsequent optimal inter-repetition intervals is roughly described by the formula:

I(n+1) = I(n) * EF

where:

I(n) - the n-th interval,

EF - E-Factor of the considered item.
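A minimal numeric sketch of this relation follows; the first interval of one day and the E-Factor values used here are illustrative assumptions, not figures from the thesis:

```python
# Sketch of the optimal-interval progression I(n+1) = I(n) * EF.
# The first interval (1 day) and the E-Factor values are assumed
# here purely for illustration.

def optimal_intervals(first_interval: float, ef: float, count: int) -> list:
    """Return the first `count` optimal inter-repetition intervals, in days."""
    intervals = [first_interval]
    for _ in range(count - 1):
        intervals.append(intervals[-1] * ef)
    return intervals

easy_item = optimal_intervals(1.0, 2.5, 5)   # high E-Factor: intervals grow fast
hard_item = optimal_intervals(1.0, 1.3, 5)   # low E-Factor: intervals grow slowly
```

The geometric growth means an easy item quickly reaches intervals of weeks, while a difficult one must be reviewed far more often.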

Let us consider what the interpretation of the E-Factor is in the biological sense. According to conclusions drawn earlier in the chapter, the memory of an item is stored in postsynaptic membranes (the SuperMemo theory refers primarily to deterministic learning). There are two parameters of a set of synapses that might account for differences in difficulty between items: either the elements of the set are variable or the number of elements in the set is variable. It seems possible that different synapses have different facilitation parameters; after all, they are not cut out to a single standard and show the variability typical of all biological systems. However, if we consider the fact that the difficulty of an item reflects its intricacy (or its interference with other items), how could it happen that more intricate items always have the bad luck to pick up sets of bad synapses? On the other hand, it is unimaginable that SuperMemo items are coded by a fixed, always identical number of synapses. The conclusion is that the difficulty of an item reflects the number of synapses involved, although individual synaptic properties may interfere with the picture. Let us now try to answer why a greater number of synapses involved should make an item more difficult. To this end we have to consider the individual properties of synapses more closely. If synapses differ significantly among themselves, then a greater number of connections involved in a memory will increase the probability that a bad synapse crops up in the set. But this seems unlikely for two reasons. First, it never happens that an easy-looking item has the bad luck of being coded by a bad synapse and therefore becomes difficult, nor does it happen that a complicated item, by a stroke of luck, is characterized by a high E-Factor. Second, the likely number of synapses involved in remembering items seems high enough to level out statistical differences between their memory parameters.

In consequence, there must be a factor that accounts for differences between the difficulty of items as a result of the variable number of synapses involved, even if all synapses are assumed to work in exactly the same way. Temporarily, for the sake of abstract reasoning, let us therefore assume that synapses are indeed identical. Let us consider at which moments, and how, synapses are stimulated in order to sustain memory traces. There is only one valid form of specific stimulation of a synapse in the optimal process of learning: stimulation during a repetition. No other phenomena have a significant importance in the retention of memory. Occasional repetitions, e.g. during dreams, can be discarded as statistically insignificant. We can assume, therefore, that during an optimal repetitory process all synapses of a given item are stimulated synchronously at the moment of repetition and at no other time. If all of them are identical and are stimulated in exactly the same way, then all of them should undergo exactly the same changes regardless of their number; thus all of them would have the same E-Factor. The conclusion must be that the stimulation of synapses during repetitions is not always the same, i.e. certain synapses are stimulated more strongly than others.

The only plausible answer to the question of why synapses are stimulated with variable intensity is that during a repetition the state of the nervous system is never exactly the same. Thus it is impossible to send exactly the same series of questioning impulses to the memory circuitry. This has to result in difficulties in managing memories comprising larger numbers of synapses, because more complicated signalization has to be sent, and variations in the final stimulatory effect are more probable. This conclusion can be confirmed by the observation that easy items evoke an instant response, while difficult ones tend to cause a stepwise process of reconstructing the correct answer. For example, items that require the enumeration of the elements of a set are usually quite difficult and have to be numbered among the ill-structured ones (unless the set has only 2-3 elements). It can be observed that in different repetitions, the elements of the set are recalled in a different order. This seems to confirm that difficult items generate less coherent impulsation in the questioning circuitry, which results in insufficient stimulation of some of the synapses involved in remembering the item. If we drop the assumption that stimulation of synapses occurs during repetitions and at no other time, we can add yet another factor that contributes to the difficulty of items coded by many synapses. This factor is interference, whereby particular synapses can occasionally be facilitated or suppressed by repetitions of other items (the greater the number of synapses, the greater the probability that one of them will be interfered with). At this moment we need only notice the obvious fact that the validity of our conclusion is by no means changed by the fact that synapses are not all identical, as was assumed earlier. In further considerations we will focus our interest on a single synapse which, in an optimal process of learning, should be stimulated at intervals related to one another by the formula:

I(n+1) = I(n) * C
where C is a constant greater than one (according to the above reasoning, C should be greater than the maximum known E-Factor, i.e. it should be at least greater than three). Although it seems certain that differences between E-Factors are not caused by differences between synapses, significant variability of the C constant among individual synapses cannot be excluded. Finally, I should remark that in the process of relearning that might follow forgetting, the first optimal interval is not much longer than the one used in the case of first memorization. It may seem that learning the same item again should capitalize on previously stimulated synapses and, in consequence, that the first optimal interval should be relatively longer. The mere fact that it is not so seems to prove that, usually, a different set of synapses is used during relearning of a forgotten item. This might be caused by the fact that the template and questioning stimulation are unlikely to meet at exactly the same locations in relearning (unlike in the exemplary network presented earlier). However, this last claim could be refuted by the existence of a special memory-cleaning mechanism that would allow for removal of all memory traces immediately after forgetting. The biological usefulness of such a mechanism is discussed at the end of the chapter.

Interim summary

  1. E-Factors reflect the number of synapses involved in remembering items
  2. A large number of synapses coding for an item is undesirable because of the difficulty of uniform synaptic stimulation during repetitions
  3. In the process of relearning a forgotten item, usually a different set of synapses is used
  4. In later analysis we will consider a single synapse as undergoing the same learning process as that applied to memorize SuperMemo items

10.4.2. Two variables of memory: stability and retrievability

[for a more coherent text on the same subject see: Two components of long-term memory]

An important conclusion comes directly from the SuperMemo theory: there are two, and not one, as is commonly believed, independent variables that describe the conductivity of a synapse, and memory in general. To illustrate the case, let us again consider the calpain model of synaptic memory (compare earlier in the chapter). It is obvious from the model that its authors assume that only one independent variable is necessary to describe the conductivity of a synapse. The influx of calcium, the activity of calpain, the degradation of fodrin and the number of glutamate receptors are all examples of such a variable. Note that all the mentioned parameters are dependent, i.e. knowing one of them we could calculate all the others (provided, obviously, that we were able to construct the relevant formulae). The dependence of the parameters is a direct consequence of the causal links between all of them.

However, the process of optimal learning requires exactly two independent variables to describe the state of a synapse at a given moment:

  • A variable that plays the role of a clock that measures time between repetitions. Exemplary parameters that can be used here are:
    • Te - time that has elapsed since the last repetition (it belongs to the range <0,optimal-interval>),
    • Tl - time that has to elapse before the next repetition will take place (Tl=optimal-interval-Te),
    • Pf - probability that the synapse will lose the trace of memory during the day in question (it belongs to the range <0,1>).

Obviously, one can conceive a countless number of parameters that could be used in representing the clock variable. All these parameters are dependent, i.e. one of them is sufficient to compute all the others.

  • A variable that measures the durability of memory. Exemplary parameters that can be used here are:
    • I(n+1) - optimal interval that should be used after the next repetition (I(n+1)=I(n)*C, where C is a constant greater than three),
    • I(n) - current optimal interval,
    • n - number of repetitions preceding the moment in question, etc.

Again the parameters are dependent and only one of them is needed to characterize the durability of memory.
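The dependence of the parameters within each variable can be made concrete with a small sketch. The first interval I(1) and the value of C below are illustrative assumptions:

```python
# Sketch of the dependence between parameters of the two memory variables:
# one parameter per variable suffices to compute the others.  The first
# interval I(1) and the constant C are assumed values for illustration.

I1 = 1.0  # I(1), in days (assumed)
C = 3.5   # interval quotient for a single synapse, C > 3 (assumed value)

def interval(n: int) -> float:
    """Durability parameter I(n) = I(1) * C**(n-1), from the repetition count n."""
    return I1 * C ** (n - 1)

def time_left(te: float, n: int) -> float:
    """Clock parameter Tl = I(n) - Te, computed from the other clock parameter Te."""
    return interval(n) - te
```

Knowing n one can recover I(n) and I(n+1); knowing Te one can recover Tl. What no formula of this kind can do is compute a clock parameter from a durability parameter, which is the independence argument developed below.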

Let us now see if the above variables are necessary and sufficient to characterize the state of synapses in the process of time-optimal learning. To show that the variables are independent we will show that neither of them can be calculated from the other. Let us notice that the I(n) parameter remains constant during a given inter-repetition interval, while the Te parameter changes from zero to I(n). This shows that there is no function f that satisfies the condition:

Te = f(I(n))
On the other hand, at the moment of each subsequent repetition Te always equals zero, while I(n) always has a different, increasing value. Therefore there is no function g that satisfies the condition:

I(n) = g(Te)
Hence the independence of I(n) and Te.

To show that no other variables are necessary in the process of optimal learning let us notice that at any given time we can compute all the moments of future repetitions using the following algorithm:

  1. Let there elapse I(n)-Te days.
  2. Let there be a repetition.
  3. Let Te be zero and I(n) increase C times.
  4. Go to 1.
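The four steps above can be sketched directly as a loop; the numeric inputs in the example call are assumptions for illustration:

```python
# Direct sketch of the algorithm above.  Given the synapse's constant C,
# its current optimal interval I(n) and the time Te already elapsed,
# it lists the days (counted from now) of the next repetitions.
# The numeric inputs in the example are assumptions for illustration.

def repetition_days(interval: float, c: float, te: float, count: int) -> list:
    days, day = [], 0.0
    for _ in range(count):
        day += interval - te   # 1. let there elapse I(n) - Te days
        days.append(day)       # 2. let there be a repetition
        te = 0.0               # 3. let Te be zero...
        interval *= c          #    ...and I(n) increase C times
    return days                # 4. go to 1

schedule = repetition_days(interval=2.0, c=3.5, te=1.0, count=4)
```

Note that the two state variables (interval, te) are exactly the stability and retrievability parameters named below: nothing else is consulted in computing the whole future schedule.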

Note that the value of C is a constant characteristic of a given synapse and as such does not change in the process of learning. I will later use the term retrievability to refer to the first of the variables, and the term stability to refer to the second one. To justify the choice of the first term, let me notice that we are used to thinking that memories are strong right after a learning task and that they fade away afterwards until they are no longer retrievable. It is retrievability that determines the moment at which memories are no longer there. It is also worth mentioning that retrievability was the variable that was tacitly assumed to be the only one needed to describe memory (as in the calpain model). The invisibility of the stability variable resulted from the fact that researchers concentrated their efforts on a single learning task and on observing the follow-up changes in synapses, while the importance of stability can be visualized only in the process of repeating the same task many times. To conclude the analysis of memory variables, let us ask the standard question that must be posed in the development of any biological model: what is the possible evolutionary advantage that arises from the existence of two variables of memory?

Retrievability and stability are both necessary to code for a process of learning that allows subsequent inter-repetition intervals to increase in length without forgetting. It can easily be demonstrated that such a model of learning is best with respect to the survival rate of an individual, if we acknowledge the fact that remembering without forgetting would in a short time clog up the memory system, which is finite. If memory is to be forgetful, it must have a means of retaining those traces that seem important for survival. Repetition as a memory-strengthening factor is such a means. Let us now consider the most suitable timing of the repetitory process. If a given phenomenon is encountered for the n-th time, the probability that it will be encountered for the (n+1)-th time increases, and therefore a longer memory retention time seems advantageous. The exact function that describes the best repetitory process depends on the size of the memory storage, the number of possible phenomena encountered by an individual, and many other factors. However, the usefulness of increasing intervals required to sustain memory by repetitions is indisputable, and so is the evolutionary value of retrievability and stability of memory. One can imagine many situations interfering with this simple picture of the development of memory in the course of evolution. For example, events associated with intense stress should be remembered better. Indeed, this fact was confirmed in research on the influence of catecholamines on learning. Perhaps, using hormonal stimulation, one could improve the performance of a student applying the SuperMemo method.

Interim summary

  1. Existence of two independent variables necessary to describe the process of optimal learning was postulated. These variables were named retrievability and stability of memory
  2. Retrievability of memory reflects the lapse of time between repetitions and indicates to what extent memory traces can successfully be used in the process of recall
  3. Stability of memory reflects the history of repetitions in the process of learning and increases with each stimulation of the synapse. It determines the length of the maximum inter-repetition interval that does not result in forgetting

10.4.3. Molecular changes in synapses that correspond to retrievability and stability of memory

The following conclusions drawn earlier in the chapter will be used in the construction of the molecular model of memory that is consistent with the SuperMemo theory:

  1. deterministic memory is located in the postsynaptic membrane (heterosynaptic configuration),
  2. deterministic memory comprises two components: retrievability and stability,
  3. glutamate and acetylcholine are the most promising candidates for the neurotransmitter participating in memory,
  4. increased number of receptors is reported in many cases of facilitation,
  5. phosphorylation of proteins, particularly potassium channels, is reported in many cases of facilitation,
  6. increased influx of calcium into the cell is reported in many cases of facilitation.

If we assume that only the above facts can be used in construction of the model of memory then we will have only two candidates to represent retrievability and stability:

  1. number of receptors,
  2. level of phosphorylation of membrane proteins.

Concentrations of neither calcium nor the neurotransmitter can be considered variables of memory, because they are not confined to the membrane and therefore cannot account for lasting memories (see earlier). Retrievability, as the time-measuring variable, changes steadily during the inter-repetition period. Of the two proposed parameters, phosphorylation of proteins is the less lasting one. The degree of its transience might even call into question its validity as a factor of long-term memory. Phosphorylation is therefore the more suitable candidate to code for retrievability. By exclusion, the number of receptors in the postsynaptic membrane has to be used as the parameter coding for stability. The decrease of retrievability during an inter-repetition period could be accounted for by spontaneous, gradual dephosphorylation of proteins. A decrease of phosphorylation below a certain level would cause forgetting. There must, however, be a link between retrievability and stability that would allow the former to decrease to the forgetting level at, and only at, the end of the current optimal interval (do not confuse this link with dependence between the parameters!).

If we agree that the decrease of phosphorylation should have an exponential nature then we have two basic possibilities:
  1. the level of phosphorylation attained at each repetition increases from repetition to repetition, while the rate of dephosphorylation remains constant, or
  2. the rate of dephosphorylation decreases with each repetition, while the level of phosphorylation attained at a repetition remains constant.
In our model, it is hard to conceive how an increased number of receptors in the postsynaptic membrane could slow down dephosphorylation. Moreover, it is obvious that the number of receptors could speed up phosphorylation at the moment of a repetition (e.g. through increased activation of protein kinase; compare Kandel's model). However, the first formula implies a dramatic increase in phosphorylation levels during repetitions (the order of the increase being an exp(exp(n)) function of the number of the repetition!). This certainly exceeds the physical availability of residues that could undergo phosphorylation, and in consequence only the variant with a decreasing dephosphorylation rate can be accepted. At this point all the ambiguities of the model are resolved and the sequence of processes involved in memory can be outlined. During a repetition the following phenomena take place:

  1. Particles of the synaptic neurotransmitter (possibly glutamate or acetylcholine) are bound by postsynaptic receptors.
  2. Fast phosphorylation of potassium channels follows, causing a rapid increase of retrievability. This process might proceed in the same way as in Kandel's model, i.e. through activation of adenyl cyclase, an increase in cAMP and activation of the relevant membrane protein kinase.
  3. As a result of the phosphorylation and the lower conductivity of the membrane for potassium, a large amount of calcium enters the cell during depolarization.
  4. Calcium induces the exposure of new receptors, thereby causing an increase of stability. This process might proceed as in the calpain model, i.e. through activation of calpain and degradation of fodrin.

Initialization of the conductive abilities of the synapse proceeds along the same lines as the process of a repetition, except for the depolarization, which is caused by the template synapse of the heterosynaptic configuration. Between repetitions, potassium channels are slowly dephosphorylated, resulting in a reduction of retrievability. Similarly, the number of receptors should undergo slow reduction, causing a loss of stability. However, the decrease in stability should be slow enough that at the end of a given optimal interval stability is higher than it was at the end of the previous optimal interval. The decrease of stability is necessary if we want to explain why synapses are not permanently potentiated after a large increase of retrievability. If stability were not reduced, there would be a constant, disadvantageous increase of the brain's ability to store memories. The disappearance of stability after an instance of memory lapse could also be accounted for by a process of constant rebuilding of the network of neuronal connections in the brain. Such a process would consist in constant, nonspecific sprouting of new neuronal appendices with the concomitant removal of synapses characterized by low retrievability and high stability. This would be an example of the memory-cleaning system mentioned earlier to explain why relearning an item is not much different from learning it for the first time (note that if we did not obey the minimum information principle, relearning would always have to be easier, as is shown in standard textbooks on psychology).
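The repetition sequence can be caricatured as a two-variable state machine, with the phosphorylation level standing for retrievability and the receptor-driven stability folded into a dephosphorylation time constant. Every constant below is an assumption made for this sketch, not a measured value:

```python
import math

# Caricature of the proposed synapse.  Phosphorylation stands for
# retrievability; stability (driven by the receptor count) sets the
# dephosphorylation time constant, following the accepted variant in
# which dephosphorylation slows down with repetitions.  All constants
# are assumptions made for this sketch.

P_FULL, P_FORGET = 1.0, 0.1   # full and forgetting-threshold phosphorylation
C = 3.5                        # growth factor of stability per repetition

class Synapse:
    def __init__(self):
        self.phosphorylation = P_FULL   # retrievability
        self.stability = 1.0            # decay time constant, in days

    def elapse(self, days: float):
        """Between repetitions: spontaneous, gradual dephosphorylation,
        slower in more stable synapses."""
        self.phosphorylation *= math.exp(-days / self.stability)

    def repeat(self):
        """A repetition: rephosphorylation plus exposure of new receptors."""
        self.phosphorylation = P_FULL
        self.stability *= C

    def forgotten(self) -> bool:
        return self.phosphorylation < P_FORGET
```

With these numbers a fresh synapse crosses the forgetting threshold after a few days, while each repetition multiplies the tolerable interval by C, which is the qualitative behavior the model requires.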

The presented model does not answer all the questions related to the process of learning. The most pressing problems are listed below:

  1. The model explains well the phenomena pertinent to optimal learning but does not account for one of my early observations concerning intermittent learning. I mentioned earlier in the thesis that intervals shorter than optimal are less effective in their impact on the stability of memory. This fact is not implied by the model, in which nothing prevents pumping up the phosphorylation levels in a short time by repeated stimulation. In learning, it is certainly not possible to make intensive repetitions during a short period and thus remember till the end of one's life. This has the practical importance of eliminating the impact of nine-day wonders on memory. There must be a mechanism that prevents further phosphorylation if the potassium channels are already highly phosphorylated at the moment of stimulation. Thus the potential for an increase of phosphorylation would reach its maximum at the moment when the optimal inter-repetition interval comes to a close.
  2. During the repetitive stimulation of a synapse, both retrievability and stability have to increase sharply in a very short time. The length of the period for the build-up of both variables is limited by the diffusion barrier, which requires that local, synaptic phenomena that inseparably involve cytoplasmic processes must be fast enough if they are to be specific. Otherwise, other synapses of the same neuron might undergo facilitation because of the cytoplasmic connection. Consolidation of long-term memory may last longer, but it has to be based on changes that have already occurred in the synapse during depolarization. Synthesis, incorporation or release of receptors in the postsynaptic membrane seems to be too slow a process to account for the immediate increase in stability. Therefore it cannot be excluded that the memory variables are coded by different molecular factors before and after consolidation.
  3. The presented model does not require the appearance of new protein molecules for the formation and maintenance of memory, while many experiments show increased synthesis of RNA and proteins during and after learning. It was also shown that inhibitors of protein synthesis impair the formation of long-term memory. Although it was shown that calcium increases the number of glutamate receptors even in isolated synaptosomes [Lynch, 1984], it seems unlikely that consolidation of long-term memory does not involve the synthesis of proteins at any of its stages.
  4. The function that describes the rate of dephosphorylation between repetitions implies that retrievability should decrease more slowly in subsequent inter-repetition intervals. The decrease should somehow be related to the stability of memory; however, the molecular mechanism of such a dependence is not clear. Perhaps the increased number of receptors would result in a change in the properties of the membrane and, in consequence, protect potassium channels against dephosphorylation.

Because of the foregoing problems, the presented model has to be considered only as an illustration of how retrievability and stability of memory might be interpreted at the cellular level. Certainly a lot of research on molecular changes in synapses during facilitation is required to yield data that could constitute the basis of a truly adequate model of memory.
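One simple way to capture the dependence postulated in point 4 is to make the decay of retrievability slower as stability grows. The decay law below is a hypothetical form chosen for illustration; it is not taken from the thesis.

```python
import math

def retrievability(t, stability):
    """Hypothetical decay law: higher stability (more receptors, and
    hence a membrane better protected against dephosphorylation)
    means a slower loss of retrievability over the interval t."""
    return math.exp(-t / stability)

# With each repetition stability grows, so the same interval
# leaves more retrievability behind:
# retrievability(10, 1.0)  -> ~0.000045
# retrievability(10, 5.0)  -> ~0.135
# retrievability(10, 20.0) -> ~0.607
```

Under this form, the stability variable plays the role of a time constant: doubling the number of receptors doubles the interval over which retrievability falls by a given fraction.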


  1. An illustrative model of a possible molecular mechanism involved in the process of learning based on repetition spacing was presented.
  2. In the model, the retrievability of memory is represented as the level of phosphorylation of potassium channels. Retrievability is supposed to decrease exponentially during the inter-repetition period.
  3. The model represents stability of memory as the number of receptors in the postsynaptic membrane.
  4. The most important questions that cannot be answered on the basis of the model are:
    • Why is the increase in memory stability lower when intervals shorter than the optimal ones are applied?
    • Why do inhibitors of protein synthesis prevent formation of long-term memory?