Vm


Q&A

  • When/where does a .text page get mapped?

References:

Overview

Reference: 2.5 Memory Management

a process can change the size of its text segment only when the segment’s contents are overlaid with data from the filesystem, or when debugging takes place.

The entire contents of a process address space do not need to be resident for a process to execute. If a process references a part of its address space that is not resident in main memory, the system pages the necessary information into memory. (Where?)

When system resources are scarce, the system uses a two-level approach to maintain available resources. If a modest amount of memory is available, the system will take memory resources away from processes if these resources have not been used recently. Should there be a severe resource shortage, the system will resort to swapping the entire context of a process to secondary storage. The demand paging and swapping done by the system are effectively transparent to processes. A process may, however, advise the system about expected future memory utilization as a performance aid.
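This advice is typically given through the madvise(2) interface. A minimal user-level sketch, assuming a POSIX-style madvise with the MADV_WILLNEED and MADV_DONTNEED advice values (the region size is arbitrary):

#include <sys/mman.h>
#include <stddef.h>

int
main(void)
{
    size_t len = 16 * 4096;    /* arbitrary region size */

    /* Map an anonymous, page-aligned region so madvise() can be
     * applied to whole pages. */
    char *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
        MAP_ANON | MAP_PRIVATE, -1, 0);
    if (buf == MAP_FAILED)
        return 1;

    /* Advise the system that the region will be needed soon. */
    madvise(buf, len, MADV_WILLNEED);

    /* ... use the region ... */

    /* Advise that the region is no longer needed, so its pages become
     * early candidates for reclamation. */
    madvise(buf, len, MADV_DONTNEED);

    munmap(buf, len);
    return 0;
}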

Inside the kernel

Both a stack and a heap (dynamically allocated memory) exist inside the kernel.

The kernel often does allocations of memory that are needed for only the duration of a single system call. In a user process, such short-term memory would be allocated on the run-time stack.

Because the kernel has a limited run-time stack, it is not feasible to allocate even moderate-sized blocks of memory on it. Consequently, such memory, as well as memory that must persist beyond a single system call, is allocated through a more dynamic mechanism. An example of the latter is the protocol control blocks that remain throughout the duration of a network connection.

A generalized memory allocator reduces the complexity of writing code inside the kernel. Thus, the 4.4BSD kernel has a single memory allocator that can be used by any part of the system.

[Kernel’s dynamic memory management] has an interface similar to the C library routines malloc and free that provide memory allocation to application programs. Like the C library interface, the allocation routine takes a parameter specifying the size of memory that is needed.

[Unlike C’s interface,] the range of sizes for memory requests is not constrained; however, physical memory is allocated and is not paged. The free routine takes a pointer to the storage being freed, but does not require the size of the piece of memory being freed.
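A minimal sketch of what this interface looks like in kernel code, written against the FreeBSD-style malloc(9) calls (the M_EXAMPLE tag and the function itself are hypothetical, and the exact 4.4BSD spelling differs slightly):

#include <sys/param.h>
#include <sys/kernel.h>
#include <sys/malloc.h>

/* Hypothetical allocation tag, used only for accounting/statistics. */
MALLOC_DEFINE(M_EXAMPLE, "example", "example short-lived buffers");

static void
example_handler(void)
{
    /* Any size may be requested; the returned memory is wired physical
     * memory and is not paged.  M_WAITOK lets the allocator sleep until
     * memory becomes available. */
    char *buf = malloc(1024, M_EXAMPLE, M_WAITOK);

    /* ... use buf for the duration of this operation ... */

    /* free() takes only the pointer (and the tag), never the size. */
    free(buf, M_EXAMPLE);
}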

Kernel-user data sharing

Another issue with the virtual-memory system is the way that information is passed into the kernel when a system call is made. 4.4BSD always copies data from the process address space into a buffer in the kernel.

For read or write operations that are transferring large quantities of data, doing the copy can be time consuming. An alternative to doing the copying is to remap the process memory into the kernel.
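As a sketch of the copying approach, a system call typically moves its user-supplied data into a kernel buffer with copyin() (and back out with copyout()). The helper below is hypothetical and simplified; copyin() itself and the M_TEMP tag are standard kernel facilities:

#include <sys/param.h>
#include <sys/systm.h>
#include <sys/malloc.h>

/* Hypothetical helper: copy a user buffer into freshly allocated
 * kernel memory. */
static int
example_copy_from_user(const void *uaddr, size_t len, void **kbufp)
{
    void *kbuf = malloc(len, M_TEMP, M_WAITOK);
    int error;

    /* copyin() returns 0 on success, or EFAULT if the user address
     * range is invalid; the data is copied, not remapped. */
    error = copyin(uaddr, kbuf, len);
    if (error != 0) {
        free(kbuf, M_TEMP);
        return (error);
    }
    *kbufp = kbuf;
    return (0);
}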

The biggest incentives for memory mapping are the needs for accessing big files and for passing large quantities of data between processes. The mmap interface provides a way for both of these tasks to be done without copying.
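For example, a large file can be mapped and accessed in place instead of being read()-copied through a kernel buffer into a user buffer. The file path below is a placeholder:

#include <sys/mman.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int
main(void)
{
    int fd = open("/path/to/bigfile", O_RDONLY);    /* placeholder path */
    struct stat st;

    if (fd < 0 || fstat(fd, &st) < 0)
        return 1;

    /* Map the whole file; pages are faulted in on demand rather than
     * copied into a separate user buffer. */
    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED)
        return 1;

    /* ... access the file's contents directly through p ... */

    munmap(p, st.st_size);
    close(fd);
    return 0;
}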

vm_page_t

Reference: Chapter 7 Virtual Memory System

Physical memory is managed on a page-by-page basis through the vm_page_t structure.

Pages of physical memory are categorized through the placement of their respective vm_page_t structures on one of several paging queues.

A page can be in a wired, active, inactive, cache, or free state. Except for the wired state, the page is typically placed on a doubly linked list (queue) representing the state that it is in. Wired pages are not placed on any queue.
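A highly simplified sketch of this arrangement (not the real struct vm_page layout; all names here are made up for illustration), using the same sys/queue.h macros the kernel uses for its lists:

#include <sys/queue.h>

/* Simplified page states; the real vm_page_t carries many more fields. */
enum toy_page_state {
    PQ_WIRED,       /* wired: kept off every paging queue */
    PQ_ACTIVE,
    PQ_INACTIVE,
    PQ_CACHE,
    PQ_FREE
};

struct toy_vm_page {
    TAILQ_ENTRY(toy_vm_page) pageq;     /* linkage on its paging queue */
    enum toy_page_state      state;
    int                      hold_count;    /* short-term references */
    int                      busy_count;    /* locked while busy */
};

/* One doubly linked queue per state; wired pages appear on none of them. */
TAILQ_HEAD(toy_pagequeue, toy_vm_page);
struct toy_pagequeue active_q   = TAILQ_HEAD_INITIALIZER(active_q);
struct toy_pagequeue inactive_q = TAILQ_HEAD_INITIALIZER(inactive_q);
struct toy_pagequeue cache_q    = TAILQ_HEAD_INITIALIZER(cache_q);
struct toy_pagequeue free_q     = TAILQ_HEAD_INITIALIZER(free_q);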

FreeBSD implements a more involved paging queue for cached and free pages in order to implement page coloring. Each of these states involves multiple queues arranged according to the size of the processor’s L1 and L2 caches. When a new page needs to be allocated, FreeBSD attempts to obtain one that is reasonably well aligned from the point of view of the L1 and L2 caches relative to the VM object the page is being allocated for.

Additionally, a page may be held with a reference count or locked with a busy count. The VM system also implements an “ultimate locked” state for a page using the PG_BUSY bit in the page’s flags.

In general terms, each of the paging queues operates in an LRU fashion. A page is typically placed in a wired or active state initially. When wired, the page is usually associated with a page table somewhere. The VM system ages the page by scanning pages in a more active paging queue (LRU) in order to move them to a less-active paging queue.

Pages that get moved into the cache are still associated with a VM object but are candidates for immediate reuse. Pages in the free queue are truly free. FreeBSD attempts to minimize the number of pages in the free queue, but a certain minimum number of truly free pages must be maintained in order to accommodate page allocation at interrupt time.

If a process attempts to access a page that does not exist in its page table but does exist in one of the paging queues (such as the inactive or cache queues), a relatively inexpensive page reactivation fault occurs which causes the page to be reactivated. If the page does not exist in system memory at all, the process must block while the page is brought in from disk.
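In rough pseudocode (all names here are hypothetical stubs; the real code path is vm_fault() and is far more involved), the decision looks something like this:

#include <stdbool.h>

/* Hypothetical helpers, shown as stubs purely for illustration. */
static void reactivate_page(void) { /* move page back to the active queue and map it */ }
static int  pagein_from_disk(void) { /* block while the pager reads the page in */ return 0; }

static int
toy_handle_fault(bool page_resident_on_a_paging_queue)
{
    if (page_resident_on_a_paging_queue) {
        /* Inexpensive reactivation fault: the data is already in
         * memory (inactive or cache queue), so just reactivate it. */
        reactivate_page();
        return 0;
    }
    /* Hard fault: the process must block while the page is brought
     * in from secondary storage. */
    return pagein_from_disk();
}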

More

Created Aug 10, 2020 // Last Updated Aug 10, 2020
