Virtual Memory, Paging, and Swapping

Original content by Gabriele Tolomei.
Source: Virtual Memory, Paging, and Swapping - Gabriele Tolomei


Overview

Virtual Memory is a memory management technique that is implemented using both hardware (MMU) and software (operating system). It abstracts from the real memory available on a system by introducing the concept of a virtual address space, which allows each process to view physical memory as a contiguous address space (or a collection of contiguous segments).

Goals and Implementation

The goal of virtual memory is to map virtual memory addresses generated by an executing program into physical addresses in computer memory. This concerns two main aspects:

  • Address translation (from virtual to physical addresses).
  • Management of virtual address spaces.

The former is implemented on the CPU chip by a specific hardware element called Memory Management Unit (MMU). The latter is provided by the operating system, which sets up virtual address spaces (i.e., either a single virtual space for all processes or one for each process) and actually assigns real memory to virtual memory.

Furthermore, software within the operating system may provide a virtual address space that can exceed the actual capacity of main memory (i.e., using also secondary memory) and thus reference more memory than is physically present in the system.

Primary Benefits

The primary benefits of virtual memory include:

  • Freeing applications (and programmers) from having to manage a shared memory space.
  • Increasing security due to memory isolation.
  • Being able to conceptually use more memory than might be physically available, using the technique of paging.

Almost every virtual memory implementation divides a virtual address space into blocks of contiguous virtual memory addresses, called pages, which are usually 4 KB in size.
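With fixed-size pages, a virtual address decomposes into a page number and an offset within that page. A minimal sketch of this arithmetic, assuming the 4 KB page size mentioned above:

```python
PAGE_SIZE = 4096  # 4 KB pages, as in most implementations

def split_address(vaddr: int) -> tuple[int, int]:
    """Split a virtual address into (page number, offset within the page)."""
    return vaddr // PAGE_SIZE, vaddr % PAGE_SIZE

# Example: 0x3A10 = 3 * 4096 + 0xA10, so it lies at offset 0xA10 of page 3.
page, offset = split_address(0x3A10)
```

Only the page number needs translating; the offset is carried over unchanged into the physical address.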

Memory Management Unit (MMU) and Page Tables

In order to translate the virtual addresses of a process into the physical memory addresses used by the hardware to actually process instructions, the MMU makes use of a so-called page table, a data structure managed by the OS that stores mappings between virtual and physical addresses.

Concretely, the MMU stores a cache of recently used mappings from the whole OS page table, called the Translation Lookaside Buffer (TLB).

The picture below describes the address translation task as discussed above.

MMU Address Translation

Address Translation Process

When a virtual address needs to be translated into a physical address:

  1. The MMU first searches for it in the TLB cache (step 1 in the picture above).
  2. If a match is found (TLB hit), then the physical address is returned and the computation goes on (2.a.).
  3. If there is no match (TLB miss), the MMU searches for a match on the whole page table, i.e., page walk (2.b.).
  4. If this match exists on the page table, it is written to the TLB cache (3.a.).
  5. The address translation is restarted so that the MMU is able to find a match on the updated TLB (1 & 2.a.).
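The lookup order above (TLB first, full page-table walk on a miss, then restart) can be sketched in Python. The dictionaries standing in for the TLB and page table, and the example mappings, are hypothetical:

```python
tlb = {}                          # small cache: virtual page -> physical frame
page_table = {0: 7, 1: 3, 2: 9}   # full OS-managed mapping (toy values)

def translate(vpage: int) -> int:
    if vpage in tlb:                        # step 1: TLB lookup, hit (2.a.)
        return tlb[vpage]
    if vpage in page_table:                 # step 2.b.: page walk on a miss
        tlb[vpage] = page_table[vpage]      # step 3.a.: cache the mapping
        return translate(vpage)             # restart: now guaranteed a TLB hit
    raise KeyError(f"no mapping for virtual page {vpage}")

translate(1)  # TLB miss, page walk, restart
translate(1)  # now served straight from the TLB
```

The restart may look redundant in software, but it mirrors how the hardware actually behaves: the MMU re-issues the translation once the TLB has been refilled.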

Page Table Lookup Failures

Page table lookup may fail due to two reasons:

  • Invalid Translation: The process tries to access an area of memory it is not allowed to access. The page supervisor typically raises a segmentation fault exception (3.b.).
  • Page Not Loaded: The requested page is not loaded in main memory (indicated by a flag on the page table entry). A page fault occurs (3.c.).

In the case of a page fault:

  1. The requested page must be retrieved from secondary storage (disk).
  2. The page supervisor accesses the disk and loads the page back into main memory (4.).
  3. It updates the page table and the TLB with the new mapping (3.a.).
  4. It tells the MMU to start the request again so that a TLB hit will take place (1 & 2.a.).
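The page-fault path above can be sketched as follows, with toy dictionaries standing in for RAM, disk, and a page table whose entries are (frame, present) pairs; the free-frame choice and the data are hypothetical:

```python
ram = {}                          # physical frame -> page contents
disk = {5: b"page five data"}     # pages swapped out to secondary storage
page_table = {5: (None, False)}   # page 5 is known, but not loaded (flag False)

def access(vpage: int) -> bytes:
    frame, present = page_table[vpage]   # KeyError here ~ segmentation fault
    if not present:                      # page fault (3.c.)
        frame = len(ram)                 # naive free-frame choice
        ram[frame] = disk.pop(vpage)     # 4.: retrieve the page from disk
        page_table[vpage] = (frame, True)  # 3.a.: update the page table
        # a real system would also update the TLB and restart the access
    return ram[page_table[vpage][0]]

access(5)  # first access triggers the fault and loads the page from "disk"
```

Note that an access to a page absent from the page table altogether raises an error, mirroring the invalid-translation case described earlier.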

Paging and Swapping

When all physical memory is exhausted, the page supervisor must free a page in main memory to allow an incoming page from disk to be stored.

  • To determine which page to move, the supervisor uses page replacement algorithms, such as Least Recently Used (LRU).
  • Moving pages between secondary storage and main memory is referred to as swapping (4.).
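A minimal LRU replacement sketch, assuming a hypothetical fixed-capacity frame cache (Python's OrderedDict keeps insertion order, which makes the least recently used entry easy to find):

```python
from collections import OrderedDict

class LRUFrames:
    """Toy set of physical frames with LRU eviction."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self.frames = OrderedDict()   # vpage -> contents, oldest first

    def touch(self, vpage, contents):
        if vpage in self.frames:
            self.frames.move_to_end(vpage)   # mark as most recently used
            return
        if len(self.frames) >= self.capacity:
            victim, _ = self.frames.popitem(last=False)  # evict the LRU page
            # a real supervisor would write the victim page out to disk here
        self.frames[vpage] = contents

mem = LRUFrames(capacity=2)
mem.touch(1, "A")
mem.touch(2, "B")
mem.touch(1, "A")   # page 1 becomes the most recently used
mem.touch(3, "C")   # capacity exceeded: page 2, the LRU, is evicted
```

Real kernels rarely implement exact LRU (tracking every access is too expensive); they approximate it, but the eviction principle is the same.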

Kernel vs. User Mode Virtual Memory

On a typical 32-bit Linux OS, the virtual address space is split between OS kernel and user mode.

Memory Layout Split

  • Kernel Space: Just because the kernel has a dedicated portion of the virtual address space (e.g., 1 GB) does not mean it uses that much physical memory. This is the portion available to map whatever physical memory the OS kernel wishes.
  • Consistency: In Linux, kernel space is constantly present and maps the same physical memory in all processes across context switches.
  • Mapping Example: If a system has 512 MB of physical memory, only those 512 MB out of the 1 GB virtual space will be mapped for the kernel.
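On the classic 32-bit Linux 3 GB / 1 GB split, kernel space occupies the top gigabyte of each process's virtual address space, starting at 0xC0000000 (this boundary is configurable; the constant here reflects the common default). A small sketch of the boundary check:

```python
KERNEL_BASE = 0xC0000000  # 3 GB mark in a 4 GB virtual address space

def is_kernel_address(vaddr: int) -> bool:
    """True if vaddr falls in the kernel's 1 GB portion of the split."""
    return vaddr >= KERNEL_BASE

is_kernel_address(0x08048000)  # a typical user-space text address
is_kernel_address(0xC0100000)  # inside the kernel's 1 GB region
```

A user-mode access to an address above KERNEL_BASE is exactly the invalid-translation case from before: the hardware refuses it and a segmentation fault is raised.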

Kernel Memory Persistence

In practice, the kernel does not reside on secondary storage. On Linux, the kernel needs approximately 70 MB, well within modern RAM capacities.

Crucially, kernel data and code must always be kept in main memory, both for efficiency and to avoid page faults that could not be handled:

  • The OS kernel handles page faults via an Interrupt Service Routine (ISR).
  • If the kernel itself generated a page fault on the code for the page fault handling routine, the whole system would block!
  • Therefore, kernel code and data are always addressable and never generate page faults.

All rights belong to the original author, Gabriele Tolomei.
