SuperMemo pioneered the use of
spaced repetition
in computer software in 1987 (Wozniak, 1990). Since then,
the SuperMemo spaced repetition algorithm has been perfected by collecting
memory data and investigating its performance. The theoretical foundations of
the present version are described in
Building Memory Stability.
Below you will find a general outline of the 8th major formulation of the
repetition spacing algorithm used in SuperMemo. It is referred to as
Algorithm SM-15 since it was first implemented in SuperMemo 15.0.
Although the increase in complexity of Algorithm SM-15 as compared with its
predecessors (e.g. Algorithm SM-6) is incomparably greater than the expected
benefit for the user, there is substantial theoretical and practical evidence
that the increase in the speed of learning resulting from the upgrade may fall
into the range from 30 to 50%. In addition, Algorithm SM-15 eliminates various
problems with repetition delay and repetition advancement evident in
Algorithm SM-8.
Historic note: earlier releases of the algorithm

Although the presented algorithm may seem complex, you should find it easier
to follow its steps if you understand the evolution of individual concepts such as
E-Factor, matrix of optimum intervals, optimum factors, forgetting curves,
or mid-interval repetition:
- 1985 - Paper-and-pencil version of SuperMemo is
formulated (Algorithm SM-0). Repetitions of whole
pages of material proceed along a fixed table of intervals. See also: Using SuperMemo without a computer
- 1987 - First computer implementation makes it
possible to divide material into individual items. Items are classified into difficulty
categories by means of E-Factor. Each difficulty
category has its own optimum spacing of repetitions (Algorithm SM-2)
- 1989 - SuperMemo 4 was
able to modify the function of optimum intervals depending on the student's performance (Algorithm SM-4)
- 1989 - SuperMemo 5
replaced the matrix of optimum intervals with the matrix of optimal factors (an optimum
factor is the ratio between successive intervals). This approach accelerated the
modification of the function of optimum intervals (Algorithm SM-5)
- 1991 - SuperMemo 6
derived optimal factors from forgetting curves plotted for each entry of the matrix of
optimum factors. This could dramatically speed up the convergence of the function of
optimum intervals to its ultimate value (Algorithm SM-6)
- 1995 - SuperMemo 8
capitalized on data collected by users of SuperMemo 6 and
SuperMemo 7 and added a number
of improvements that strengthened the theoretical validity of the function of optimum
intervals and made it possible to accelerate its modification, esp. in the early
stages of learning (Algorithm SM-8)
- 2002 - SuperMemo
11
introduced the first SuperMemo algorithm that is entirely resistant to
interference from delay or advancement of repetitions. This makes it
possible to safely delay repetitions (Postpone) or advance
repetitions (Review).
- 2005 - SuperMemo
12 introduced boundaries on the A and B parameters computed from the
Grade-vs.-Forgetting Index data. This plugged a weakness in the
algorithm that showed up when importing repetitions from other applications
(e.g. the open-source MemAid). If a large number of easy repetitions
occurred at unnaturally long intervals (as after pre-training with
another application), the graph might show reversed monotonicity that
would temporarily affect the estimation of A-Factors (the speed of
self-correction would be inversely proportional to the amount of flawed
data). With boundaries imposed, self-correction is instant, and the
accuracy of A-Factor estimation increases with each repetition executed
in SuperMemo
- 2011 -
SuperMemo 15
eliminated two weaknesses of Algorithm SM-11 that would show up in
heavily overloaded collections with very large item delays:
- U-Factors now make it possible to correctly
interpret repetition delays of up to 15 years
- forgetting curves are now corrected for
repetition delays beyond the maximum registered U-Factor value
(preventing failed grades in delayed repetitions from decreasing the
estimates of the optimum interval for standardly scheduled items in
the same category)
Algorithm SM-15
SuperMemo begins the effort to compute the optimum inter-repetition intervals by storing
the recall record of individual items (i.e. grades scored in learning). This record is used to estimate the
current strength of a given memory trace (memory
stability), and the difficulty of the underlying
piece of knowledge (item). The item difficulty expresses the complexity of memories, and
reflects the effort needed to
produce unambiguous and stable memory traces. SuperMemo takes the requested
recall rate as the optimization criterion (e.g. 95%), and computes the
intervals that satisfy this criterion. The function of optimum intervals is
represented in a matrix form (OF matrix) and is subject to modification based on
the results of the learning process. Although satisfying the optimization
criterion is relatively easy, the complexity of the algorithm comes from the
need to obtain the maximum possible speed of convergence in light of the known
memory models.
Important! Algorithm SM-15 is used only to compute the
intervals between repetitions of items. Topics
are reviewed at intervals computed with an entirely different algorithm (not
described here).
Here is a more detailed description of Algorithm SM-15:
- Optimum interval: Inter-repetition intervals are computed using the following formula:
I(1)=OF[1,L+1]
I(n)=I(n-1)*OF[n,AF]
where:
- OF - matrix of optimal factors, which is modified in the course of
repetitions
- OF[1,L+1] - value of the OF matrix entry taken from the first row and
the (L+1)-th column
- OF[n,AF] - value of the OF matrix entry that corresponds with the
n-th repetition, and with item difficulty AF
- L - number of times a given item has been forgotten (from
"memory Lapses")
- AF - number that reflects absolute difficulty of a given item (from
"Absolute difficulty Factor")
- I(n) - n-th inter-repetition interval for a given item
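For illustration, here is a minimal sketch of this computation in Python. The
`of_matrix` structure, its toy values, and the function name are illustrative
stand-ins, not SuperMemo's internal representation:

```python
# Toy OF matrix: of_matrix[row][col] plays the role of OF[row,col].
# Row 1 is indexed by L+1 (memory lapses); later rows by the A-Factor column.
of_matrix = {
    1: {1: 4.0, 2: 2.5},
    2: {3: 3.1},
    3: {3: 2.8},
}

def optimum_interval(n, lapses=0, af_col=None, prev_interval=None):
    """I(1) = OF[1,L+1]; I(n) = I(n-1) * OF[n,AF]."""
    if n == 1:
        return of_matrix[1][lapses + 1]
    return prev_interval * of_matrix[n][af_col]

i1 = optimum_interval(1, lapses=0)                    # I(1) = 4.0 days
i2 = optimum_interval(2, af_col=3, prev_interval=i1)  # I(2) = 4.0 * 3.1
i3 = optimum_interval(3, af_col=3, prev_interval=i2)  # I(3) = I(2) * 2.8
```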
- Advanced repetitions: Because of possible advancement in executing
repetitions (e.g. forced review before an exam), the actual
optimum factor (OF)
used to compute the optimum interval is decremented by dOF using formulas that
account for the spacing effect in learning:
dOF=dOFmax*a/(thalf+a)
dOFmax=(OF-1)*(OI+thalf-1)/(OI-1)
where:
- dOF - decrement to OF resulting from the spacing effect
- a - advancement of the repetition in days as compared with
the optimum schedule (note that there is no change to OF if a=0,
i.e. the repetition takes place at the optimum time)
- dOFmax - asymptotic limit on dOF for infinite a (note that
for a=OI-1 the decrement will be OF-1 which corresponds to no increase
in inter-repetition interval)
- thalf - advancement at which there is half the expected
increase in synaptic stability as a result of a repetition (presently
this value corresponds roughly to 60% of the length of the optimum
interval for well-structured material)
- OF - optimum factor (i.e. OF[n,AF] for the n-th interval and a given
value of AF)
- OI - optimum interval (as derived from the matrix OF)
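The two formulas above transcribe directly into code; a sketch (the variable
names are illustrative):

```python
def advanced_of(of, oi, a, thalf):
    """Spacing-effect correction of OF for a repetition advanced by a days.

    of    - OF[n,AF] for the current repetition
    oi    - optimum interval in days (derived from the OF matrix)
    a     - advancement in days; a=0 means the repetition is on time
    thalf - advancement giving half the expected stability increase
    """
    dOFmax = (of - 1.0) * (oi + thalf - 1.0) / (oi - 1.0)
    dOF = dOFmax * a / (thalf + a)
    return of - dOF

# Sanity checks matching the notes above: no correction when on time,
# and no interval increase when the whole interval is skipped (a = OI-1).
assert advanced_of(2.0, 10.0, 0.0, 6.0) == 2.0
assert abs(advanced_of(2.0, 10.0, 9.0, 6.0) - 1.0) < 1e-9
```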
- Delayed repetitions: Because of possible delays in executing repetitions, matrix OF is not
actually indexed with repetitions but with repetition categories. For example, if the 5th
repetition is delayed, the OF matrix is used to compute the repetition category, i.e. the
theoretical value of the repetition number that corresponds with the interval used before
the repetition. The repetition category may, for example, assume the value 5.3, and we will
arrive at I(5)=I(4)*OF[5.3,AF], where OF[5.3,AF] has an intermediate value derived from
OF[5,AF] and OF[6,AF] (see the sketch below)
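The text does not specify how the intermediate value is derived; a sketch
assuming simple linear interpolation between adjacent OF rows:

```python
import math

def of_for_category(of_matrix, rep_category, af_col):
    """Blend OF[5,AF] and OF[6,AF] into OF[5.3,AF]; linear
    interpolation is an assumption, not a documented rule."""
    lo = math.floor(rep_category)
    frac = rep_category - lo
    if frac == 0:
        return of_matrix[lo][af_col]
    return (1 - frac) * of_matrix[lo][af_col] + frac * of_matrix[lo + 1][af_col]
```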
- Matrix of optimum intervals: SuperMemo does not store the matrix of
optimum intervals as in some earlier versions. Instead it keeps a matrix of
optimal factors that can be converted to the matrix of optimum intervals (as
in the formula from Point 1). The matrix of optimal factors OF used in Point 1 has been derived
from the mathematical model of forgetting and from similar matrices built on data
collected in years of repetitions in collections created by a number of users. Its initial
setting corresponds with values found for a less-than-average student. During repetitions,
upon collecting more and more data about the student's memory, the matrix is
gradually modified to bring it closer to the actual student's memory
properties. After years of repetitions, new data can be fed back to generate a more accurate
initial matrix OF. In SuperMemo, this matrix can be viewed in 3D with Tools
: Statistics : Analysis : 3-D Graphs :
O-Factor Matrix
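The conversion mentioned above simply unrolls the formula from Point 1; a
sketch reusing the toy representation from the first example:

```python
def intervals_from_factors(of_matrix, af_col, lapses=0, n_max=5):
    """Turn a column of optimum factors into cumulative optimum
    intervals: I(1)=OF[1,L+1], then I(n)=I(n-1)*OF[n,AF]."""
    intervals = [of_matrix[1][lapses + 1]]
    for n in range(2, n_max + 1):
        intervals.append(intervals[-1] * of_matrix[n][af_col])
    return intervals
```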
- Item difficulty: The absolute item difficulty factor (A-Factor),
denoted AF in Point 1, expresses the difficulty of an item (the higher it is,
the easier the item). It is worth noting that AF=OF[2,AF]. In other words, AF denotes the
optimum interval increase factor after the second repetition. This is also equivalent
to the highest interval increase factor for a given item. Unlike E-Factors in Algorithm SM-6 (employed in SuperMemo 6 and SuperMemo 7), A-Factors
express absolute item difficulty and do not depend on the difficulty of other items in the
same collection of study material (see FAQs for explanation)
- Deriving OF matrix from RF matrix: Optimum values of the entries of the OF matrix are derived through a
sequence of approximation procedures from the RF matrix which is defined in the same way
as the OF matrix (see Point 1), with the exception that its values are taken from the real
learning process of the student for whom the optimization is run. Initially, matrices OF and RF are identical;
however, entries of the RF matrix are modified with each repetition, and a new value of
the OF matrix is computed from the RF matrix by using approximation procedures. This
effectively produces the OF matrix as a smoothed form of the RF matrix. In simple
terms, the RF matrix at any given moment corresponds to its best-fit value derived from
the learning process; however, each entry is considered a best-fit entry on its own,
i.e. in abstraction from the values of other RF entries. At the same time, the OF matrix
is considered a best-fit as a whole. In other words, the RF matrix is computed entry by
entry during repetitions, while the OF matrix is a smoothed copy of the RF matrix
- Forgetting curves: Individual entries of the RF matrix are computed from forgetting
curves approximated for each entry individually. Each forgetting curve corresponds with a
different value of the repetition number and a different value of A-Factor (or memory
lapses in the case of the first repetition). The value of the RF matrix entry corresponds
to the moment in time where the forgetting curve passes the knowledge retention point
derived from the requested forgetting index. For example, for
the first repetition of a new item, if the forgetting index equals 10%, and after four
days the knowledge retention indicated by the forgetting curve drops below 90% value, the
value of RF[1,1] is taken as four. This means that all items entering the learning process
will be repeated after four days (assuming that the matrices OF and RF do not differ at
the entry in the first row and first column). This satisfies the main premise of SuperMemo:
that the repetition should take place at the moment when knowledge retention drops to 100%
minus the forgetting index stated as a percentage. In SuperMemo,
forgetting curves can be viewed with Tools
: Statistics : Analysis : Curves (or
in 3-D with Tools
: Statistics : Analysis : 3-D Curves):
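To illustrate how an RF entry is read off a forgetting curve, here is a sketch
that fits the exponential model mentioned for the first matrix row,
R(t) = exp(-t/S), and solves for the day on which retention falls to 100%
minus the forgetting index. SuperMemo's actual curve-fitting procedure is not
specified here:

```python
import math
from collections import defaultdict

def rf_entry_from_curve(observations, requested_fi=0.10):
    """observations: (interval_days, recalled_bool) pairs gathered for
    one repetition-number/A-Factor category."""
    by_day = defaultdict(list)
    for days, recalled in observations:
        by_day[days].append(1.0 if recalled else 0.0)
    # Least-squares fit of ln R(t) = -t/S through the origin.
    num = den = 0.0
    for days, outcomes in by_day.items():
        r = sum(outcomes) / len(outcomes)
        if 0.0 < r < 1.0:              # skip degenerate recall rates
            num += days * math.log(r)
            den += days * days
    stability = -den / num             # S in R(t) = exp(-t/S)
    # RF entry: the day on which retention reaches 1 - FI.
    return -stability * math.log(1.0 - requested_fi)
```

With data resembling the example above (retention dropping below 90% around
day four), this returns a value close to four.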
- Deriving OF matrix from the forgetting curves: The OF matrix is derived from the RF matrix by: (1) fixed-point power
approximation of the R-Factor decline along the RF matrix columns (the fixed point
corresponds to second repetition at which the approximation curve passes through the
A-Factor value), (2) for all columns, computing D-Factor which expresses the decay
constant of the power approximation, (3) linear regression of D-Factor change across the
RF matrix columns and (4) deriving the entire OF matrix from the slope and intercept of
the straight line that makes up the best fit in the D-Factor graph. The exact formulas
used in this final step go beyond the scope of this illustration.
Note that the first row of the OF matrix is computed in a different way. It corresponds to
the best-fit exponential curve obtained from the first row of the RF matrix.
All the above steps are performed after each repetition. In other words, the theoretically
optimum value of the OF matrix is updated as soon as new forgetting curve data is
collected, i.e. at the moment, during the repetition, when the student, by providing a
grade, signals correct or incorrect recall (i.e. forgetting). (In Algorithm SM-6, a separate procedure, Approximate,
had to be used to find the best-fit OF matrix, and the OF matrix used at repetitions might
differ substantially from its best-fit value.)
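The exact formulas are, as noted, out of scope; still, one plausible reading of
the four steps can be sketched, assuming the power curve is anchored at the
fixed point (2, AF), i.e. OF(n) = AF*(n/2)^(-D). Everything below illustrates
that assumption, not SuperMemo's actual procedure:

```python
import math

def fit_dfactor(rf_column, af):
    """Steps (1)-(2): least-squares fit of the decay constant D in
    OF(n) = AF*(n/2)**(-D), using ln(RF[n]/AF) = -D*ln(n/2).
    rf_column maps repetition number n to the RF value."""
    num = den = 0.0
    for n, rf in rf_column.items():
        if n == 2:
            continue                   # the fixed point pins the curve
        x = math.log(n / 2.0)
        num += x * math.log(rf / af)
        den += x * x
    return -num / den

def of_from_dfactors(af_values, dfactors, n_max=20):
    """Steps (3)-(4): regress D on A-Factor, then rebuild the whole
    OF matrix from the slope and intercept of that regression line."""
    k = len(af_values)
    mean_a = sum(af_values) / k
    mean_d = sum(dfactors) / k
    slope = (sum((a - mean_a) * (d - mean_d)
                 for a, d in zip(af_values, dfactors))
             / sum((a - mean_a) ** 2 for a in af_values))
    intercept = mean_d - slope * mean_a
    return {a: {n: a * (n / 2.0) ** (-(slope * a + intercept))
                for n in range(2, n_max + 1)}
            for a in af_values}
```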
- Item difficulty: The initial value of A-Factor is derived from the first grade
obtained by the item, and the correlation graph of the first grade and A-Factor (G-AF graph). This graph is updated after each repetition in which a
new A-Factor value is estimated and correlated with the item's first grade.
Subsequent approximations of the real A-Factor value are done after each repetition by
using grades, OF matrix, and a correlation graph that shows the correspondence of the
grade with the expected forgetting index (FI-G graph). The grade
used to compute the initial A-Factor is normalized, i.e. adjusted for the difference
between the actually used interval and the optimum interval for a forgetting index equal
to 10%
- Grades vs. expected forgetting index correlation: The FI-G
graph is updated after each
repetition by using the expected forgetting index and actual grade scores. The expected
forgetting index can easily be derived from the interval used between repetitions and the
optimum interval computed from the OF matrix. The higher the value of the expected
forgetting index, the lower the grade. From the grade and the FI-G graph (see: FI-G graph in Analysis),
we can compute the estimated forgetting index which corresponds to the post-repetition
estimation of the forgetting probability of the just-repeated item at the hypothetical
pre-repetition stage. Because of the stochastic nature of forgetting and recall, the same
item might or might not be recalled depending on the current overall cognitive status of
the brain, even if the strength and retrievability of the memories of all contributing
synapses were identical! This way we can speak about the pre-repetition recall
probability of an item that has just been recalled (or not). This probability is expressed
by the estimated forgetting index
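A sketch of one way to maintain such a correlation record; the representation
is an assumption, since the text only says that the graph relates grades to
forgetting-index estimates:

```python
class FIGGraph:
    """Running average of the expected forgetting index observed
    with each grade (0-5)."""

    def __init__(self):
        self.sums = [0.0] * 6
        self.counts = [0] * 6

    def update(self, grade, expected_fi):
        # Record the expected FI that accompanied this grade.
        self.sums[grade] += expected_fi
        self.counts[grade] += 1

    def estimated_fi(self, grade):
        # Post-repetition estimate of the pre-repetition probability
        # of forgetting for the just-repeated item.
        if self.counts[grade] == 0:
            return None
        return self.sums[grade] / self.counts[grade]
```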
- Computing A-Factors: From (1) the estimated forgetting index, (2) length of the interval
and (3) the OF matrix, we can easily compute the most accurate value of A-Factor. Note
that A-Factor serves as an index to the OF matrix, while the estimated forgetting index
allows one to find the column of the OF matrix for which the optimum interval corresponds
with the actually used interval corrected for the deviation of the estimated forgetting
index from the requested forgetting index. At each repetition, a weighted
average is taken of the old A-Factor and the new estimated value of the
A-Factor. The newly obtained A-Factor is used in indexing the OF matrix when
computing the new optimum inter-repetition interval
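The weighted-average update itself is a one-liner; the weight below is
illustrative, as the text does not state the weights SuperMemo uses:

```python
def update_afactor(old_af, estimated_af, new_weight=0.5):
    """Blend the previous A-Factor with the estimate from the latest
    repetition; new_weight is an assumed, illustrative constant."""
    return new_weight * estimated_af + (1.0 - new_weight) * old_af
```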
To sum up, repetitions result in computing a set of parameters
characterizing the memory of the student: RF matrix, G-AF graph, and FI-G
graph. They are also used to compute A-Factors of individual items that characterize the
difficulty of the learned material. The RF matrix is smoothed to produce the OF matrix,
which in turn is used in computing the optimum inter-repetition interval for items of
different difficulty (A-Factor) and different number of repetitions (or memory lapses in
the case of the first repetition). Initially, all of the student's memory parameters are
taken as those of a less-than-average student (less-than-average yields faster
convergence than average or more-than-average), while all A-Factors are assumed to be
equal (unknown).
Optimization solutions used in Algorithm SM-15 have been perfected
since 1987 while investigating the efficiency of the SuperMemo method (first
implementation: December 1987). This ensures that the starting memory parameters
converge to the actual parameters of the student
in a very short time. In addition, Algorithm SM-15 includes mechanisms
that make it insensitive to interference from the deviation from the optimum
repetition timing (e.g. delayed or advanced repetitions). It can also accept
an uninitialized learning process imported from other applications (e.g. other
SuperMemos). The introduction of A-Factors and the use of the
G-AF graph greatly enhanced the speed of estimating item difficulty.
Algorithm SM-15 is constantly being perfected in successive releases
of SuperMemo, esp. to account for newly collected repetition data, convergence data, input
parameters, instabilities, etc. The
adopted solutions are the result of constant research into new algorithmic variants. Theoretical
considerations and research implementations, including the employment of neural networks in
spaced repetition, indicate that it is
unlikely that new solutions could effectively compete with the presented algebraic
approach (see Algorithm FAQs). Further
improvement to the speed of learning is likely to be negligible, and even older
implementations of SuperMemo algorithms (esp. Algorithm
SM-2) are employed successfully (e.g. by competing spaced repetition
applications).
Frequently Asked Questions
For questions about the SuperMemo algorithm, its neural network implementation,
the interpretation of stability and retrievability, as well as the algorithms used in
various versions of SuperMemo and in other spaced repetition software, see:
FAQ: SuperMemo Algorithm