In 1966, Laszlo Belady wrote a seminal paper titled “A Study of Replacement Algorithms
for a Virtual-Storage Computer,” which appeared in the IBM Systems Journal, Vol. 5, No. 2. In this paper, he surveys several different approaches to efficient
virtual memory paging and provides evidence of the efficacy of the various methods. He
considers there to be three classes of page replacement algorithms:
- Class 1: These algorithms do not use any information about memory usage and
consider every block an equally likely candidate for removal
- Class 2: These algorithms use historical information to determine which page is the best
victim for swapping
- Class 3: These algorithms consider the history of the page’s usage so that memory
that is resident and used frequently is less likely to be removed
Evaluation of such algorithms uses a reference string to determine the level
of page faulting and swapping that would occur under each algorithm. The
string is a sequence of addresses that will be accessed by a sample program. This
sequence can be auto-generated using a random number generator or collected from a real program's execution. An example of a sequence is:
FYI, I cheated and grabbed this sequence from one of
the references below.
Again, standing on the shoulders of giants, I will use the
example of a system with 100 bytes per page. This allows us to distill the
above sequence down to:
At first glance, I could not see how the author arrived
at the simplified sequence. Once I put on my thinking cap, I noticed that it was
removing all of the runs where the same page was accessed repeatedly. For example,
the first few addresses in the sequence (0100, 0432, 0101, 0612, 0102, 0103) reference a
100-range address, then a 400-range address, followed by another 100-range address. The 600-range
address is then followed by multiple 100-range addresses. Each run of accesses to the same
page can be recorded as a single page reference. This format can be used to show that
these repeated accesses will not require paging, since the page is already resident in memory.
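The reduction above can be sketched in a few lines of code. This is my own illustration, not code from the paper; it assumes the 100-byte page size from the example and uses only the first few addresses of the sequence, since the rest would be processed the same way:

```python
# Reduce an address reference string to a page reference string,
# assuming 100-byte pages as in the example above.
addresses = [100, 432, 101, 612, 102, 103]  # first few addresses only

PAGE_SIZE = 100

# Map each address to its page number.
pages = [addr // PAGE_SIZE for addr in addresses]

# Collapse consecutive accesses to the same page into one reference,
# since repeated hits on a resident page cannot cause a fault.
reduced = [pages[0]]
for p in pages[1:]:
    if p != reduced[-1]:
        reduced.append(p)

print(pages)    # [1, 4, 1, 6, 1, 1]
print(reduced)  # [1, 4, 1, 6, 1]
```

The collapsed list is what the replacement algorithms are actually evaluated against, because only a change of page can possibly trigger a fault.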
A measure of an algorithm's efficiency is the number of
page faults that the algorithm will produce under normal conditions. It is easy
to reason that if you add more frames, you will have less need to evict a page to make room for
the next page demanded.
Belady observed that this theory does not always hold when
using a FIFO-based algorithm. Let me try to give an example of what a FIFO
replacement algorithm would look like. Let's use a larger sequence (1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5) and start with the assumption that we only have three frames to put pages into; the swapping would look like this:
The highlighted cells are the frames where a page fault occurred. If we now add a fourth frame for data, let's see how this affects the number of page faults.
A pattern is starting to form that contradicts the statements above. It is reasonable to believe that as you add more frames, you would have fewer page faults. To see the pattern further, let's add a new scenario where there are five frames that can store data.
You can see that the number of page faults for the given reference sequence falls off quickly once you pass four frames, yet going from three frames to four actually produced more faults. That counterintuitive increase is the anomalous condition that Belady observed in 1966. This is one small piece of the larger pie that is virtual memory storage.
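The three scenarios above can be checked with a short simulation. This is my own sketch of a FIFO replacement policy, not code from the paper; it replays the same reference string against different frame counts and tallies the faults:

```python
from collections import deque

def fifo_faults(reference_string, num_frames):
    """Count page faults under a FIFO replacement policy."""
    frames = deque()          # oldest resident page sits at the left
    faults = 0
    for page in reference_string:
        if page in frames:    # hit: FIFO does not reorder on access
            continue
        faults += 1           # miss: the page must be brought in
        if len(frames) == num_frames:
            frames.popleft()  # evict the page resident the longest
        frames.append(page)
    return faults

refs = [1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5]
for n in (3, 4, 5):
    print(f"{n} frames: {fifo_faults(refs, n)} faults")
# 3 frames: 9 faults
# 4 frames: 10 faults
# 5 frames: 5 faults
```

The jump from 9 faults at three frames to 10 at four is exactly the anomaly: the key is that FIFO evicts by arrival order, not by recency of use, so adding a frame changes which pages happen to be resident when the string loops back around.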
NOTE: This is from my reading and understanding. If I have something incorrect or a better explanation or more is needed please leave a comment. This is for my benefit and for the community as well.
Silberschatz, Abraham, Peter B. Galvin, and Greg Gagne. Operating System Concepts. Hoboken, NJ: J. Wiley & Sons, 2009. Print.
Belady, L. A. “A Study of Replacement Algorithms for a Virtual-Storage Computer.” IBM Systems Journal 5.2 (1966): 78-101. Web.