Stores both the operating system and user processes for execution. Efficient memory management ensures protection, fast access, and flexible allocation. Usually implemented using DRAM.
CPU can directly access registers and main memory. Register accesses are fastest (1 clock cycle). Main memory operations are slower (100+ cycles). Memory operations involve addresses, read/write requests, and data transfer. Cache sits between CPU and memory to reduce access delay.
Terminology
Logical Address
Also known as a virtual address. Generated by the CPU and used by programs to access memory.
The MMU maps logical addresses to physical addresses.
Protection
In multi-process systems, memory protection is crucial. One process must not access another’s memory.
Each process has:
- Base register: aka. relocation register; the smallest valid physical address.
- Limit register: the size of the process’s accessible address range.
Every user-mode access is checked against these bounds by the CPU. Prevents illegal memory references.
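The base/limit check described above can be sketched in a few lines. This is a minimal illustration, not real MMU hardware; the `BASE` and `LIMIT` values are hypothetical.

```python
# Sketch of the MMU's base/limit check. Values are made up for illustration.
BASE = 30000   # relocation register: start of the process in physical memory
LIMIT = 12000  # limit register: size of the process's address range

def translate(logical_addr: int) -> int:
    """Map a logical address to a physical one, trapping on a bad access."""
    if logical_addr < 0 or logical_addr >= LIMIT:
        # Real hardware raises a trap to the OS, which typically
        # terminates the offending process.
        raise MemoryError(f"trap: logical address {logical_addr} out of bounds")
    return BASE + logical_addr

print(translate(0))      # 30000 – first byte of the process
print(translate(11999))  # 41999 – last valid byte
# translate(12000) would raise MemoryError
```

Note that the check uses the logical address against the limit before adding the base, so a process can never name a physical address outside its own partition.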
Hardware Access Protection
The CPU provides special registers to hold the base and limit values. For each memory access by a user-mode process, the CPU checks whether the address is within bounds; if not, it raises an exception (trap) to the OS.
The instructions to set base and limit registers are privileged and can only be executed in kernel mode.
Address Binding
Refers to deciding where a program will reside in memory. Handled by the compiler (at compile time) or by the OS (at load time or execution time), depending on the type of memory management used by the OS.
Addresses can be bound at:
- Compile time – fixed absolute addresses.
- Load time – relocatable code loaded to available memory.
- Execution time – binding done at run time; allows relocation. Requires hardware support such as base and limit registers.
Contiguous Memory Allocation
An early method of allocating memory to kernel-mode and user-mode processes. Memory is divided into OS and user partitions. Each process occupies one contiguous block.
Variable Partitions
Variable partitions are created dynamically as required by processes. Allows more efficient use of memory.
Holes (free blocks) of varying sizes are created as processes load/unload. Leads to fragmentation.
Allocation strategies:
- First-fit
The first free hole large enough to accommodate the process is selected. Fast. Simple to implement. Causes external fragmentation because of small holes.
- Best-fit
The smallest free hole that is large enough to accommodate the process is chosen. Wastes less space compared to first-fit. Slower. Causes external fragmentation because of small holes.
- Worst-fit
The largest free hole is used. Avoids leaving many small holes. Slow. Wasteful.
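The three strategies above differ only in how they pick a hole from the free list. A minimal sketch, with a hypothetical free list of `(start, size)` holes:

```python
# Sketch of the three hole-selection strategies over a free list.
# Each hole is (start, size); the values are made up for illustration.

def first_fit(holes, size):
    """Return the first hole large enough, or None."""
    for hole in holes:
        if hole[1] >= size:
            return hole
    return None

def best_fit(holes, size):
    """Return the smallest hole that still fits, or None."""
    candidates = [h for h in holes if h[1] >= size]
    return min(candidates, key=lambda h: h[1]) if candidates else None

def worst_fit(holes, size):
    """Return the largest hole, if it fits, else None."""
    if not holes:
        return None
    largest = max(holes, key=lambda h: h[1])
    return largest if largest[1] >= size else None

holes = [(0, 300), (500, 100), (800, 250)]
print(first_fit(holes, 120))  # (0, 300) – first hole that fits
print(best_fit(holes, 120))   # (800, 250) – tightest fit among (0,300) and (800,250)
print(worst_fit(holes, 120))  # (0, 300) – largest hole overall
```

In a real allocator the chosen hole would then be split: the request is carved off its start, and the remainder (if any) goes back on the free list as a smaller hole, which is exactly how external fragmentation accumulates.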
Compaction
The technique of merging adjacent free partitions into a single larger free space. Decreases fragmentation.
The OS might also move process memory spaces. Moving processes around has an overhead. Only possible if relocation is dynamic.
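Compaction can be sketched as sliding every allocated block down to the bottom of user memory so the holes merge into one large free region at the top. The process list and sizes below are hypothetical; the key point is that with dynamic relocation, "moving" a process just means updating its relocation register to the new base.

```python
# Sketch of compaction: slide allocated blocks toward address 0,
# leaving one large hole at the top. Layout is hypothetical.

def compact(processes, start=0):
    """processes: list of (pid, base, size). Returns the relocated list.
    Legal only with dynamic (execution-time) relocation: after copying
    a block, the OS updates that process's relocation register."""
    relocated = []
    next_base = start
    for pid, _old_base, size in sorted(processes, key=lambda p: p[1]):
        relocated.append((pid, next_base, size))
        next_base += size
    return relocated

procs = [("A", 0, 100), ("B", 300, 50), ("C", 600, 200)]
print(compact(procs))
# [('A', 0, 100), ('B', 100, 50), ('C', 150, 200)]
# The holes at 100–300 and 350–600 merge into one free region from 350 up.
```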
Pending I/O
If a process is currently doing an I/O operation, its memory block cannot be moved for the purpose of compaction. Can be solved by double buffering.
Fragmentation
Reduced by compaction.
- External fragmentation
When enough total memory is free, but it is scattered in non-contiguous gaps. Makes it hard to allocate memory for large processes.
- Internal fragmentation
When more memory is allocated to a process than requested. Common in fixed-size partitioning.
50% Rule
Analysis of first fit shows that for every 2 usable blocks, about 1 block is lost to fragmentation. As a result, up to 1/3 of the total memory may be unusable.
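The 1/3 figure follows directly from the ratio stated by the rule: if 0.5N blocks are lost for every N allocated, the lost fraction of the whole is 0.5N / (N + 0.5N). A quick check of the arithmetic, with a hypothetical block count:

```python
# Worked arithmetic for the 50% rule: with first fit, roughly 0.5*N blocks
# are lost to fragmentation for every N allocated blocks.
n_allocated = 100            # hypothetical count of allocated blocks
n_lost = 0.5 * n_allocated   # blocks wasted as unusable holes
fraction_unusable = n_lost / (n_allocated + n_lost)
print(fraction_unusable)     # 0.333... – about one third of total memory
```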