Memory management (1)
- Memory Hierarchies
- Computers typically have memory hierarchies
- Registers, L1/L2/L3 cache
- Main memory
- Disks
- “Higher memory” is faster, more expensive and volatile, “lower memory” is slower, cheaper, and non-volatile
- Memory can be seen as one linear array of bytes/words
- The operating system provides a memory abstraction
- OS responsibility
- Allocate/deallocate memory when requested by processes, keep track of used/unused memory
- Transparently move data from memory to disc and vice versa
- Distribute memory between processes and simulate an “infinitely large” memory space
- Control access when multiprogramming is applied
Models
- Approaches: Contiguous vs. Non-Contiguous (L13 - p7)
- Contiguous memory management models allocate memory in one single block without any holes or gaps
- Non-contiguous memory management models are capable of allocating memory in multiple blocks, or segments, which may be placed anywhere in physical memory
Partitioning
- Contiguous approaches
- Mono-programming: one single partition for user processes
- Multi-programming with fixed partitions
- Fixed equal sized partitions
- Fixed non-equal partitions
- Multi-programming with dynamic partitions
- Mono-programming - No Memory Abstraction
- Only one single user process is in memory/executed at any point in time (no multi-programming)
- A fixed region of memory is allocated to the OS/kernel, the remaining memory is reserved for a single process (MS-DOS worked this way)
- This process has direct access to physical memory (i.e. no address translation takes place)
- Properties:
- Every process is allocated a contiguous block of memory, i.e. it contains no “holes” or “gaps” (in contrast to non-contiguous allocation)
- One process is allocated the entire memory space, and the process is always located in the same address space
- No protection between different user processes required (only one process)
- Overlays enable the programmer to use more memory than available (burden on programmer)
- Shortcomings of mono-programming:
- Since a process has direct access to the physical memory, it may have access to OS memory
- The operating system can be seen as a process - so we have two processes anyway
- Low utilisation of hardware resources (CPU, I/O devices, etc.)
- Mono-programming is unacceptable as multiprogramming is expected on modern machines
- Direct memory access and mono-programming is common in basic embedded systems and modern consumer electronics, e.g. washing machines, microwaves, car’s ECUs, etc.
- Simulating Multi-Programming
- Simulate multi-programming through swapping
- Swap a process out to the disc and load a new one (context switches would become time-consuming)
- Apply threads within the same process (limited to one process)
- Multi-programming
- A Probabilities Model (L13 - p15)
- There are n processes in memory
- A process spends p percent of its time waiting for I/O
- With a single process (n = 1), CPU utilisation is 1 - p: e.g., for p = 0.9 the CPU utilisation is 1 - 0.9 = 0.1
- The probability that all n processes are waiting for I/O simultaneously (i.e., the CPU is idle) is p^n
- The CPU utilisation is therefore given by: utilisation = 1 - p^n
- CPU utilisation goes up with the number of processes and down for increasing levels of I/O
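The model above is easy to verify with a short script (a minimal sketch; the formula 1 - p^n comes from the notes, the sample values of n and p are illustrative):

```python
def cpu_utilisation(n: int, p: float) -> float:
    """CPU utilisation with n processes, each waiting for I/O a fraction p of the time.

    The CPU is idle only when all n processes wait simultaneously,
    which (assuming independence) happens with probability p**n.
    """
    return 1 - p ** n

# With p = 0.9 (heavily I/O-bound processes), utilisation rises with
# the degree of multiprogramming:
for n in (1, 2, 5, 10):
    print(n, round(cpu_utilisation(n, 0.9), 3))
```

As the notes state, utilisation grows with n and shrinks as p (the I/O fraction) grows.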
Partitioning
- Fixed Partitions
- Divide memory into static, contiguous and equal sized partitions that have a fixed size and fixed location
- advantages:
- Any process can take up any (large enough) partition
- Very little overhead and simple implementation
- The operating system keeps track of which partitions are being used and which are free
- Disadvantages:
- Low memory utilisation and internal fragmentation: partition may be unnecessarily large
- Overlays must be used if a program does not fit into a partition (burden on programmer)
- Fixed non-equal sized partitions
- Divide memory into static, contiguous and non-equal sized partitions that have a fixed size and fixed location
- Reduces internal fragmentation
- The allocation of processes to partitions must be carefully considered
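Internal fragmentation is easy to quantify: it is the gap between the partition size and the size of the process placed in it. A minimal sketch with equal-sized partitions (the partition and process sizes are made-up illustrative values):

```python
# Hypothetical sizes in KiB for illustration only.
partition_size = 512
process_sizes = [100, 480, 512, 30]

# With fixed equal-sized partitions, each process occupies a whole
# partition; the unused remainder is internal fragmentation.
waste = [partition_size - s for s in process_sizes]
print(waste)       # per-process internal fragmentation
print(sum(waste))  # total wasted memory
```

Note the 30 KiB process wastes almost an entire partition, which is exactly why non-equal partition sizes reduce internal fragmentation.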
- Allocation method
- One private queue per partition:
- Assigns each process to the smallest partition that it would fit in
- Reduces internal fragmentation
- Can reduce memory utilisation (e.g., lots of small jobs result in unused large partitions) and result in starvation
- A single shared queue for all partitions can allocate small processes to large partitions but results in increased internal fragmentation
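The “one private queue per partition” policy can be sketched as follows (a minimal illustration, not a full allocator; the partition sizes and process names are hypothetical, and `bisect` is used to find the smallest partition the process fits in):

```python
import bisect

# Hypothetical fixed, non-equal partition sizes in KiB, sorted ascending.
partitions = [64, 128, 256, 512]
queues = {size: [] for size in partitions}

def enqueue(process_name: str, size: int) -> None:
    """Queue the process at the smallest partition it fits in."""
    i = bisect.bisect_left(partitions, size)
    if i == len(partitions):
        raise MemoryError(f"{process_name} ({size} KiB) fits in no partition")
    queues[partitions[i]].append(process_name)

enqueue("editor", 100)    # smallest fit: 128 KiB partition
enqueue("shell", 40)      # smallest fit: 64 KiB partition
enqueue("compiler", 300)  # smallest fit: 512 KiB partition
print(queues)
```

This illustrates the drawback noted above: many small jobs pile up in the small-partition queues while the 256 KiB partition sits unused, and a steady stream of small jobs can starve the large ones.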