SuperMemo pioneered the use of spaced repetition in computer software in 1987 (Wozniak, 1990). Since then, the SuperMemo spaced repetition algorithm has been perfected by collecting memory data and investigating its performance. The theoretical foundations of the present version are described in Building Memory Stability.
Below you will find a general outline of the 8th major formulation of the repetition spacing algorithm used in SuperMemo. It is referred to as Algorithm SM-15 since it was first implemented in SuperMemo 15.0.
Although the increase in complexity of Algorithm SM-15 as compared with its predecessors (e.g. Algorithm SM-6) is incomparably greater than the expected benefit for the user, there is substantial theoretical and practical evidence that the resulting increase in the speed of learning may fall into the range of 30 to 50%. In addition, Algorithm SM-15 eliminates various problems with repetition delay and repetition advancement evident in Algorithm SM-8.
Historic note: earlier releases of the algorithm
Although the presented algorithm may seem complex, you should find it easier to follow its steps if you understand the evolution of individual concepts such as E-Factor, matrix of optimum intervals, optimum factors, forgetting curves, or mid-interval repetition:
1985: Paper-and-pencil version of SuperMemo is formulated (Algorithm SM-0). Repetitions of whole pages of material proceed along a fixed table of intervals. See also: Using SuperMemo without a computer
1987: First computer implementation makes it possible to divide material into individual items. Items are classified into difficulty categories by means of the E-Factor. Each difficulty category has its own optimum spacing of repetitions (Algorithm SM-2)
1989: SuperMemo 4 was able to modify the function of optimum intervals depending on the student's performance (Algorithm SM-4)
1989: SuperMemo 5 replaced the matrix of optimum intervals with the matrix of optimal factors (an optimum factor is the ratio between successive intervals). This approach accelerated the modification of the function of optimum intervals (Algorithm SM-5)
1991: SuperMemo 6 derived optimal factors from forgetting curves plotted for each entry of the matrix of optimum factors. This could dramatically speed up the convergence of the function of optimum intervals to its ultimate value (Algorithm SM-6)
1995: SuperMemo 8 capitalized on data collected by users of SuperMemo 6 and SuperMemo 7 and added a number of improvements that strengthened the theoretical validity of the function of optimum intervals and made it possible to accelerate its modification, esp. in early stages of learning (Algorithm SM-8)
2002: SuperMemo 11 introduced the first SuperMemo algorithm that is entirely resistant to interference from delay or advancement of repetitions. This makes it possible to safely delay repetitions (Postpone) or advance repetitions (Review)
2005: SuperMemo 12 introduced boundaries on the A and B parameters computed from the Grade vs. Forgetting Index data. This plugs a weakness in the algorithm that showed when importing repetitions from other applications (e.g. the open-source MemAid). If a large number of easy repetitions occurred at unnaturally long intervals (as after pre-training with another application), the graph might show reversed monotonicity that would temporarily affect the estimation of A-Factors (the speed of self-correction would be inversely proportional to the amount of flawed data). When boundaries are imposed, self-correction is instant, and the accuracy of A-Factor estimation increases with each repetition executed in SuperMemo
2011: SuperMemo 15 eliminated two weaknesses of Algorithm SM-11 that would show up in heavily overloaded collections with very large item delays:
 - U-Factors now allow correct interpretation of repetition delays of up to 15 years
 - forgetting curves are now corrected for repetition delays beyond the maximum registered U-Factor value (preventing failed grades in delayed repetitions from decreasing the estimates of the optimum interval for standardly scheduled items in the same category)

Algorithm SM-15
SuperMemo begins the effort to compute the optimum inter-repetition intervals by storing the recall record of individual items (i.e. grades scored in learning). This record is used to estimate the current strength of a given memory trace (memory stability), and the difficulty of the underlying piece of knowledge (item). The item difficulty expresses the complexity of memories, and reflects the effort needed to produce unambiguous and stable memory traces. SuperMemo takes the requested recall rate as the optimization criterion (e.g. 95%), and computes the intervals that satisfy this criterion. The function of optimum intervals is represented in matrix form (the OF matrix) and is subject to modification based on the results of the learning process. Although satisfying the optimization criterion is relatively easy, the complexity of the algorithm comes from the need to obtain the maximum speed of convergence possible in the light of the known memory models.
Important! Algorithm SM-15 is used only to compute the intervals between repetitions of items. Topics are reviewed at intervals computed with an entirely different algorithm (not described here).
Here is a more detailed description of Algorithm SM-15:
Optimum interval: Inter-repetition intervals are computed using the following formulas:
I(1) = OF[1, L+1]
I(n) = I(n-1) * OF[n, AF]
where:
 - OF - matrix of optimal factors, which is modified in the course of repetitions
 - OF[1, L+1] - value of the OF matrix entry taken from the first row and the (L+1)th column
 - OF[n, AF] - value of the OF matrix entry that corresponds with the n-th repetition and with item difficulty AF
 - L - number of times a given item has been forgotten (from "memory Lapses")
 - AF - number that reflects the absolute difficulty of a given item (from "Absolute difficulty Factor")
 - I(n) - n-th inter-repetition interval for a given item
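The recurrence above can be sketched in a few lines of Python. This is a minimal illustration, not SuperMemo's implementation: the toy OF matrix, the 0-based indexing, and the function name are all assumptions made for the example.

```python
# Hypothetical sketch of the optimum-interval formulas:
#   I(1) = OF[1, L+1]   and   I(n) = I(n-1) * OF[n, AF]
# The toy OF matrix below is illustrative only.

def optimum_intervals(of_matrix, af_column, lapses, n_repetitions):
    """Return intervals I(1)..I(n) in days for one item.

    of_matrix[r][c] -- optimum factor for repetition r+1, difficulty column c
    af_column       -- 0-based column matching the item's A-Factor
    lapses          -- L, number of memory lapses
    """
    intervals = []
    # I(1) = OF[1, L+1]: with 0-based lists, row 0 and column `lapses`
    interval = of_matrix[0][lapses]
    intervals.append(interval)
    # I(n) = I(n-1) * OF[n, AF]: later intervals grow by the optimum factor
    for rep in range(1, n_repetitions):
        interval *= of_matrix[rep][af_column]
        intervals.append(interval)
    return intervals

# toy 3x3 OF matrix
of = [
    [4.0, 3.0, 2.0],   # first row indexed by lapse count
    [3.0, 2.5, 2.0],   # later rows indexed by A-Factor column
    [2.2, 2.0, 1.8],
]
print(optimum_intervals(of, af_column=1, lapses=0, n_repetitions=3))
# -> [4.0, 10.0, 20.0]
```

With no lapses, the first interval is 4 days; each subsequent interval is the previous one multiplied by the matrix entry for that repetition number and difficulty.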
Advanced repetitions: Because of possible advancement in executing repetitions (e.g. forced review before an exam), the actual optimum factor (OF) used to compute the optimum interval is decremented by dOF using formulas that account for the spacing effect in learning:
dOF = dOF_max * a / (t_half + a)
dOF_max = (OF - 1) * (OI + t_half - 1) / (OI - 1)
where:
 - dOF - decrement to OF resulting from the spacing effect
 - a - advancement of the repetition in days as compared with the optimum schedule (note that there is no change to OF if a = 0, i.e. the repetition takes place at the optimum time)
 - dOF_max - asymptotic limit on dOF for infinite a (note that for a = OI - 1 the decrement will be OF - 1, which corresponds to no increase in the inter-repetition interval)
 - t_half - advancement at which there is half the expected increase in synaptic stability as a result of a repetition (presently this value corresponds roughly to 60% of the length of the optimum interval for well-structured material)
 - OF - optimum factor (i.e. OF[n, AF] for the n-th interval and a given value of AF)
 - OI - optimum interval (as derived from the OF matrix)
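These two formulas can be checked with a short sketch. The function name and the sample values (t_half as 60% of OI) are assumptions for illustration; only the two equations come from the text.

```python
# Sketch of the spacing-effect correction for advanced repetitions:
#   dOF     = dOF_max * a / (t_half + a)
#   dOF_max = (OF - 1) * (OI + t_half - 1) / (OI - 1)

def advanced_of(of, oi, a, t_half):
    """Return the optimum factor decremented for an advancement of `a` days.

    of     -- optimum factor OF[n, AF]
    oi     -- optimum interval in days
    a      -- advancement in days (a = 0 means an on-time repetition)
    t_half -- advancement giving half the stability increase (~0.6 * oi)
    """
    dof_max = (of - 1.0) * (oi + t_half - 1.0) / (oi - 1.0)
    dof = dof_max * a / (t_half + a)
    return of - dof

print(advanced_of(2.5, 10.0, 0.0, 6.0))  # on time: 2.5 (no change)
print(advanced_of(2.5, 10.0, 9.0, 6.0))  # a = OI - 1: 1.0 (no interval growth)
```

The two boundary cases match the notes above: at a = 0 the factor is unchanged, and at a = OI - 1 the decrement equals OF - 1, leaving a factor of 1 (no increase in the interval).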
Delayed repetitions: Because of possible delays in executing repetitions, the OF matrix is not actually indexed with repetitions but with repetition categories. For example, if the 5th repetition is delayed, the OF matrix is used to compute the repetition category, i.e. the theoretical value of the repetition number that corresponds with the interval used before the repetition. The repetition category may, for example, assume the value 5.3, and we will arrive at I(5) = I(4) * OF[5.3, AF], where OF[5.3, AF] has an intermediate value derived from OF[5, AF] and OF[6, AF]
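Deriving an intermediate value such as OF[5.3, AF] can be sketched as a simple interpolation. Note that the text does not specify how SuperMemo blends the two neighboring entries, so the linear interpolation below is an illustrative assumption:

```python
import math

def of_for_category(of_matrix, category, af_column):
    """Interpolate an OF value for a fractional repetition category.

    A linear blend between OF[floor(category), AF] and
    OF[floor(category)+1, AF]; the exact interpolation used by
    SuperMemo is not specified, so this is only an assumption.
    """
    lower = math.floor(category)
    frac = category - lower
    # rows are 1-indexed in the article; Python lists are 0-indexed
    lo = of_matrix[lower - 1][af_column]
    hi = of_matrix[lower][af_column]
    return lo + frac * (hi - lo)
```

For example, with OF[5, AF] = 2.0 and OF[6, AF] = 1.0, a repetition category of 5.3 would yield an intermediate factor of about 1.7.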
Matrix of optimum intervals: SuperMemo does not store the matrix of optimum intervals as in some earlier versions. Instead it keeps a matrix of optimal factors that can be converted to the matrix of optimum intervals (as in the formulas from Point 1). The matrix of optimal factors OF used in Point 1 has been derived from the mathematical model of forgetting and from similar matrices built on data collected in years of repetitions in collections created by a number of users. Its initial setting corresponds with values found for a less-than-average student. During repetitions, upon collecting more and more data about the student's memory, the matrix is gradually modified to make it closely approach the actual student's memory properties. After years of repetitions, new data can be fed back to generate a more accurate initial OF matrix. In SuperMemo, this matrix can be viewed in 3D with Tools : Statistics : Analysis : 3D Graphs : O-Factor Matrix
Item difficulty: The absolute item difficulty factor (A-Factor), denoted AF in Point 1, expresses the difficulty of an item (the higher it is, the easier the item). It is worth noting that AF = OF[2, AF]. In other words, AF denotes the optimum interval increase factor after the second repetition. This is also equivalent to the highest interval increase factor for a given item. Unlike E-Factors in Algorithm SM-6 employed in SuperMemo 6 and SuperMemo 7, A-Factors express absolute item difficulty and do not depend on the difficulty of other items in the same collection of study material (see FAQs for an explanation)
Deriving the OF matrix from the RF matrix: Optimum values of the entries of the OF matrix are derived through a sequence of approximation procedures from the RF matrix, which is defined in the same way as the OF matrix (see Point 1), with the exception that its values are taken from the real learning process of the student for whom the optimization is run. Initially, the matrices OF and RF are identical; however, entries of the RF matrix are modified with each repetition, and a new value of the OF matrix is computed from the RF matrix by using approximation procedures. This effectively produces the OF matrix as a smoothed form of the RF matrix. In simple terms, the RF matrix at any given moment corresponds to its best-fit value derived from the learning process; however, each entry is considered a best-fit entry on its own, i.e. in abstraction from the values of other RF entries. At the same time, the OF matrix is considered a best fit as a whole. In other words, the RF matrix is computed entry by entry during repetitions, while the OF matrix is a smoothed copy of the RF matrix
Forgetting curves: Individual entries of the RF matrix are computed from forgetting curves approximated for each entry individually. Each forgetting curve corresponds with a different value of the repetition number and a different value of A-Factor (or memory lapses in the case of the first repetition). The value of the RF matrix entry corresponds to the moment in time where the forgetting curve passes the knowledge retention point derived from the requested forgetting index. For example, for the first repetition of a new item, if the forgetting index equals 10%, and after four days the knowledge retention indicated by the forgetting curve drops below the 90% value, the value of RF[1,1] is taken as four. This means that all items entering the learning process will be repeated after four days (assuming that the matrices OF and RF do not differ at the intersection of the first row and the first column). This satisfies the main premise of SuperMemo: that the repetition should take place at the moment when the forgetting probability equals the requested forgetting index. In SuperMemo, forgetting curves can be viewed with Tools : Statistics : Analysis : Curves (or in 3D with Tools : Statistics : Analysis : 3D Curves)
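The RF[1,1] example can be made concrete with a small sketch. The exponential retention model below is an assumption chosen for simplicity; SuperMemo approximates each forgetting curve from the student's actual grade data, not from a fixed formula.

```python
# Illustrative sketch: reading an RF entry off a forgetting curve.
# Assumes retention follows R(t) = exp(-decay * t), which is only a
# convenient model for this example.
import math

def rf_from_curve(decay, forgetting_index):
    """Return the interval in days at which retention falls to
    1 - forgetting_index under R(t) = exp(-decay * t)."""
    target_retention = 1.0 - forgetting_index
    # solve exp(-decay * t) = target_retention for t
    return -math.log(target_retention) / decay

# choose a decay so that retention hits 90% near day 4;
# RF[1,1] would then be taken as roughly 4 days
decay = -math.log(0.90) / 4.0
print(rf_from_curve(decay, 0.10))  # -> 4.0
```

The RF entry is simply the time at which the curve crosses the retention level implied by the requested forgetting index (90% for a 10% forgetting index).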
Deriving the OF matrix from the forgetting curves: The OF matrix is derived from the RF matrix by: (1) fixed-point power approximation of the R-Factor decline along the RF matrix columns (the fixed point corresponds to the second repetition, at which the approximation curve passes through the A-Factor value), (2) for all columns, computing the D-Factor, which expresses the decay constant of the power approximation, (3) linear regression of the D-Factor change across the RF matrix columns, and (4) deriving the entire OF matrix from the slope and intercept of the straight line that makes up the best fit in the D-Factor graph. The exact formulas used in this final step go beyond the scope of this illustration.
Note that the first row of the OF matrix is computed in a different way. It corresponds to the best-fit exponential curve obtained from the first row of the RF matrix.
All the above steps are performed after each repetition. In other words, the theoretically optimum value of the OF matrix is updated as soon as new forgetting curve data is collected, i.e. at the moment, during the repetition, when the student, by providing a grade, states the correct recall or wrong recall (i.e. forgetting). (In Algorithm SM-6, a separate procedure, Approximate, had to be used to find the best-fit OF matrix, and the OF matrix used at repetitions might differ substantially from its best-fit value)
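Step (1) above, the fixed-point power approximation, can be sketched as follows. The least-squares form in log space, the function name, and the synthetic data are all assumptions for illustration; the text states that SuperMemo's exact formulas go beyond its scope.

```python
# Rough sketch of a power-law fit of R-Factor decline along one RF column,
# constrained to pass through the fixed point (n=2, AF), i.e. fitting
#   OF(n) = AF * (n/2)**d
# and returning the decay exponent d (a stand-in for the D-Factor).
import math

def fit_dfactor(rfactors, af):
    """Least-squares fit of d in log space for observed (n, RF) pairs.

    rfactors -- dict mapping repetition number n (n >= 2) to observed RF
    af       -- the A-Factor, fixing the curve's value at n = 2
    """
    num = 0.0
    den = 0.0
    for n, rf in rfactors.items():
        x = math.log(n / 2.0)      # regressor: log(n/2)
        y = math.log(rf / af)      # response: log(RF/AF)
        num += x * y               # no intercept: curve is pinned at n = 2
        den += x * x
    return num / den

# synthetic R-Factors generated from a known exponent d = -0.3
demo = {n: 2.5 * (n / 2.0) ** -0.3 for n in (3, 4, 6)}
print(round(fit_dfactor(demo, 2.5), 3))  # -> -0.3
```

Because the curve is pinned at the fixed point, the regression has no intercept; the recovered exponent matches the one used to generate the synthetic data.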
Item difficulty: The initial value of the A-Factor is derived from the first grade obtained by the item and from the correlation graph of the first grade and A-Factor (G-AF graph). This graph is updated after each repetition in which a new A-Factor value is estimated and correlated with the item's first grade. Subsequent approximations of the real A-Factor value are done after each repetition by using grades, the OF matrix, and a correlation graph that shows the correspondence of the grade with the expected forgetting index (FI-G graph). The grade used to compute the initial A-Factor is normalized, i.e. adjusted for the difference between the actually used interval and the optimum interval for a forgetting index equal to 10%
Grades vs. expected forgetting index correlation: The FI-G graph is updated after each repetition by using the expected forgetting index and the actual grade scores. The expected forgetting index can easily be derived from the interval used between repetitions and the optimum interval computed from the OF matrix. The higher the value of the expected forgetting index, the lower the grade. From the grade and the FI-G graph (see: FI-G graph in Analysis), we can compute the estimated forgetting index, which corresponds to the post-repetition estimation of the forgetting probability of the just-repeated item at the hypothetical pre-repetition stage. Because of the stochastic nature of forgetting and recall, the same item might or might not be recalled depending on the current overall cognitive status of the brain, even if the strength and retrievability of memories of all contributing synapses were identical. This way we can speak of the pre-repetition recall probability of an item that has just been recalled (or not). This probability is expressed by the estimated forgetting index
Computing A-Factors: From (1) the estimated forgetting index, (2) the length of the interval, and (3) the OF matrix, we can easily compute the most accurate value of the A-Factor. Note that the A-Factor serves as an index to the OF matrix, while the estimated forgetting index allows one to find the column of the OF matrix for which the optimum interval corresponds with the actually used interval corrected for the deviation of the estimated forgetting index from the requested forgetting index. At each repetition, a weighted average is taken of the old A-Factor and the new estimated value of the A-Factor. The newly obtained A-Factor is used in indexing the OF matrix when computing the new optimum inter-repetition interval
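The final weighted-average step is easy to sketch. The 0.7/0.3 weighting below is an illustrative assumption; the text does not specify SuperMemo's actual weights.

```python
# Minimal sketch of the A-Factor update: a weighted average of the old
# A-Factor and the newly estimated one. The weights are assumptions made
# for this example, not SuperMemo's actual values.

def update_afactor(old_af, estimated_af, old_weight=0.7):
    """Blend the previous A-Factor with the new per-repetition estimate."""
    return old_weight * old_af + (1.0 - old_weight) * estimated_af

print(update_afactor(2.5, 3.0))  # -> 2.65
```

Weighting the old value more heavily damps the noise in any single repetition's estimate while still letting the A-Factor converge over many repetitions.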
To sum up: repetitions result in computing a set of parameters characterizing the memory of the student: the RF matrix, the G-AF graph, and the FI-G graph. They are also used to compute the A-Factors of individual items, which characterize the difficulty of the learned material. The RF matrix is smoothed to produce the OF matrix, which in turn is used in computing the optimum inter-repetition interval for items of different difficulty (A-Factor) and different number of repetitions (or memory lapses in the case of the first repetition). Initially, all the student's memory parameters are taken as for a less-than-average student (less-than-average yields faster convergence than average or more-than-average), while all A-Factors are assumed to be equal (unknown).
Optimization solutions used in Algorithm SM-15 have been perfected since 1987 while investigating the efficiency of the SuperMemo method (first implementation: December 1987). This ensures that the convergence of the starting memory parameters with the actual parameters of the student proceeds in a very short time. In addition, Algorithm SM-15 includes mechanisms that make it insensitive to interference from deviations from the optimum repetition timing (e.g. delayed or advanced repetitions). It can also accept an uninitialized learning process imported from other applications (e.g. other SuperMemos). The introduction of A-Factors and the use of the G-AF graph greatly enhanced the speed of estimating item difficulty.
Algorithm SM-15 is constantly being perfected in successive releases of SuperMemo, esp. to account for newly collected repetition data, convergence data, input parameters, instabilities, etc. The adopted solutions are the result of constant research into new algorithmic variants. Theoretical considerations and research implementations, including the employment of neural networks in spaced repetition, indicate that it is not likely that new solutions could effectively compete with the presented algebraic approach (see Algorithm FAQs). Further improvement in the speed of learning is likely to be negligible, and even older implementations of SuperMemo algorithms (esp. Algorithm SM-2) are employed successfully (e.g. by competing spaced repetition applications).
Frequently Asked Questions
For questions about the SuperMemo algorithm, the neural network implementation, the interpretation of stability and retrievability, as well as algorithms used in various versions of SuperMemo and in other spaced repetition software, see:
FAQ: SuperMemo Algorithm