ELSOC Wiki

Introduction

Memory is a finite resource required by most non-trivial processes. It is the operating system's responsibility to ensure that memory is correctly managed, in the sense that processes that request memory are given it and that each process's memory is protected from other processes. The specific roles of the operating system include tracking how much memory is free and allocated, and handling transfers between memory and disk.

Fixed Partitioning

Fixed-partition allocation is rarely used in modern operating systems due to the very real possibility of poor allocation and internal fragmentation, though it is occasionally used in embedded operating systems.

Figure: Equal-size partitions

Equal Size Partition

Every partition is the same size, and every process that is allocated space is given one whole partition regardless of how much it actually needs. This scheme is easy to implement but results in internal fragmentation: for example, a process needing 5 MB placed in an 8 MB partition wastes 3 MB inside its partition.

Variable Size Partition

In variable-size partitioning there are a number of different partition sizes, and each process is allocated to the smallest partition it fits into. This attempts to minimise internal fragmentation, as most applications are quite small, while still allowing large applications to run. If the partition that best suits a process is currently occupied, there are a number of strategies to handle the situation. The first is to have a queue per partition size: if a partition is occupied, processes queue to use it. The second is to have a single queue and place each process in the best-fitting partition that is currently free; a small sketch of this second strategy appears after the figures below.

Figure: Variable-size partitions with multiple queues

Figure: Variable-size partitions with a single queue
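
As a rough illustration of the single-queue idea, the sketch below picks the smallest free partition that can hold a process; the partition sizes and all names are invented for the example.

```c
#include <stddef.h>

/* Hypothetical fixed set of partition sizes, sorted ascending (sizes in KB). */
static const size_t partition_size[] = { 128, 256, 512, 1024, 2048 };
static int partition_busy[5];          /* non-zero while a process occupies it */

/* Return the index of the smallest free partition that can hold the process,
   or -1 if none is free (the process would then wait in the queue). */
static int pick_partition(size_t process_size)
{
    for (int i = 0; i < 5; i++)
        if (!partition_busy[i] && partition_size[i] >= process_size)
            return i;   /* sizes are sorted, so the first match is the smallest */
    return -1;
}
```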

Dynamic Partitioning

Partitions are of variable length, allowing processes to be allocated only what they require. A dynamic partitioner needs to be able to quickly allocate free space, compact allocated memory to reduce external fragmentation, and merge adjacent free regions of memory.

Free-List

The simplest way to quickly allocate free space is to keep a linked list of the free regions of memory. Each node stores both the starting address and the size of a free region. The list is kept in order of memory address to simplify merging adjacent free regions.
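
A minimal sketch of such a free list, assuming an invented node-per-region representation, might look like the following; insertion keeps the list sorted by address and merges the freed region with contiguous neighbours.

```c
#include <stddef.h>
#include <stdlib.h>

/* One node per free region, kept sorted by starting address. */
struct free_region {
    size_t start;               /* starting address of the free region */
    size_t size;                /* length of the free region           */
    struct free_region *next;   /* next free region in address order   */
};

/* Return a region to the free list, merging it with adjacent free regions. */
static void free_list_insert(struct free_region **head, size_t start, size_t size)
{
    struct free_region *prev = NULL, *cur = *head;
    while (cur && cur->start < start) {     /* find the insertion point */
        prev = cur;
        cur = cur->next;
    }

    struct free_region *node = malloc(sizeof *node);
    node->start = start;
    node->size  = size;
    node->next  = cur;
    if (prev) prev->next = node; else *head = node;

    /* Merge with the following region if the two are contiguous. */
    if (cur && node->start + node->size == cur->start) {
        node->size += cur->size;
        node->next  = cur->next;
        free(cur);
    }
    /* Merge with the preceding region if the two are contiguous. */
    if (prev && prev->start + prev->size == node->start) {
        prev->size += node->size;
        prev->next  = node->next;
        free(node);
    }
}
```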

Dynamic Placement Algorithms

A number of algorithms exist to decide which region of free memory a process will be allocated space from.

First-fit: This algorithm starts at the head of the free list and searches linearly until it finds the first region of free memory large enough to hold the process, which is then allocated from that region. First fit often leaves a large free region towards the end of the address space.
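
As a sketch, reusing the hypothetical free_region list from above, first fit might be implemented roughly as follows:

```c
/* First fit: take the first region large enough for the request.
   Returns the allocated start address, or (size_t)-1 on failure. */
static size_t alloc_first_fit(struct free_region **head, size_t size)
{
    struct free_region *prev = NULL, *cur = *head;
    while (cur) {
        if (cur->size >= size) {
            size_t addr = cur->start;
            cur->start += size;            /* carve the allocation off the front */
            cur->size  -= size;
            if (cur->size == 0) {          /* region fully used: unlink it */
                if (prev) prev->next = cur->next; else *head = cur->next;
                free(cur);
            }
            return addr;
        }
        prev = cur;
        cur = cur->next;
    }
    return (size_t)-1;                     /* no free region is large enough */
}
```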

Next-fit: This algorithm is similar to first fit, but rather than beginning the linear search from the start of the free list, it remembers where it last allocated and begins from there. This avoids repeatedly searching free regions that are already too small to hold processes.

Best-fit: Searches the entire free list to find the closest possible fit for the requested allocation. This is a long search and often results in severe external fragmentation, as it leaves behind very small, unusable regions of memory.
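
A comparable sketch of best fit, again using the hypothetical free_region list, shows why the whole list must be scanned:

```c
/* Best fit: remember the smallest region that still satisfies the request.
   (An exact fit leaves a zero-size node that a full implementation would unlink.) */
static size_t alloc_best_fit(struct free_region **head, size_t size)
{
    struct free_region *best = NULL;
    for (struct free_region *cur = *head; cur; cur = cur->next)
        if (cur->size >= size && (best == NULL || cur->size < best->size))
            best = cur;
    if (best == NULL)
        return (size_t)-1;                 /* no free region is large enough */

    size_t addr = best->start;
    best->start += size;                   /* the (often tiny) remainder stays free */
    best->size  -= size;
    return addr;
}
```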

Worst-fit: Allocates from the largest free region, i.e. the worst fit for the requested allocation. It also requires a lengthy search of the whole list and generally still results in significant external fragmentation.

None of the above algorithms solves the external fragmentation problem, but first fit and next fit are considered the better choices, as they do not suffer the whole-list search overhead of best fit and worst fit. In practice, first and next fit also generally do a better job of limiting external fragmentation.

The above algorithms are rarely used on their own in modern operating systems; the following allocators are the preferred choice.

Lazy Allocator: When a process requests memory, the kernel only records the requested region. The memory is allocated only once the process actually attempts to access it; hence the allocator is lazy, leaving allocation to the last possible moment. This scheme ensures that only the portions of memory actually used are allocated. Once allocation becomes necessary, it typically falls back on a first- or next-fit search to find free space.
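
The sketch below is only a user-level caricature of the lazy idea (a real kernel would drive this from the page-fault path); the structure and function names are invented.

```c
#include <stddef.h>
#include <stdlib.h>

/* A region the process has asked for but that may not be backed yet. */
struct lazy_region {
    size_t start, size;
    int    backed;                  /* 0 until the process first touches it */
    struct lazy_region *next;
};

/* Request: only record the region; no memory is handed out yet. */
static void lazy_request(struct lazy_region **head, size_t start, size_t size)
{
    struct lazy_region *r = malloc(sizeof *r);
    r->start = start; r->size = size; r->backed = 0;
    r->next = *head; *head = r;
}

/* First access: only now is backing memory actually allocated,
   e.g. with a first-fit search of the free list. */
static void lazy_touch(struct lazy_region *head, size_t addr)
{
    for (struct lazy_region *r = head; r; r = r->next)
        if (!r->backed && addr >= r->start && addr < r->start + r->size) {
            /* allocate backing memory here, e.g. alloc_first_fit(...) */
            r->backed = 1;
            return;
        }
}
```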

Slab Allocator: Many of the data structures used by processes are of the same types and sizes. A slab allocator takes advantage of this by preallocating memory in standard sizes, so when memory for a given type is requested, a region of exactly the right size is ready to use. Rather than destroying this memory once the process releases it, the slab allocator keeps track of it and attempts to reuse it for the same kind of data structure later.
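
A toy version of the reuse side of this idea (ignoring how whole slabs are preallocated and carved up) might look like the following; the names are invented and objects must be at least pointer-sized.

```c
#include <stddef.h>
#include <stdlib.h>

/* A tiny cache for one object size: released objects are kept on a free
   list and handed straight back out on the next request for that type. */
struct slab_cache {
    size_t obj_size;        /* size of every object in this cache        */
    void  *free_objs;       /* singly linked list of released objects    */
};

static void *slab_alloc(struct slab_cache *c)
{
    if (c->free_objs) {                    /* reuse a previously released object */
        void *obj = c->free_objs;
        c->free_objs = *(void **)obj;      /* pop it off the cache's free list   */
        return obj;
    }
    return malloc(c->obj_size);            /* otherwise grab fresh memory        */
}

static void slab_free(struct slab_cache *c, void *obj)
{
    *(void **)obj = c->free_objs;          /* push onto the cache's free list    */
    c->free_objs = obj;
}
```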

Compaction

External fragmentation is an unavoidable issue; sometimes the only remedy is to compact the data currently in memory to reclaim the fragmented space. Compaction requires hardware support and is a lengthy process that should be avoided where possible.
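
As a rough illustration of what compaction involves, the sketch below slides every allocated block towards the bottom of memory so that the free space becomes one contiguous region at the top. In a real system every moved process would also need its relocation information updated, which is why hardware support is needed; the names here are invented.

```c
#include <stddef.h>
#include <string.h>

struct block { size_t start, size; };     /* allocated blocks, sorted by start */

/* Slide each allocated block down to the lowest free address. */
static void compact(char *memory, struct block *blocks, int nblocks)
{
    size_t next_free = 0;
    for (int i = 0; i < nblocks; i++) {
        if (blocks[i].start != next_free) {
            memmove(memory + next_free, memory + blocks[i].start, blocks[i].size);
            blocks[i].start = next_free;  /* new base address of this block */
        }
        next_free += blocks[i].size;
    }
}
```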

Fragmentation

External Fragmentation:

  • The space wasted external to the allocated memory regions
  • Memory space exists to satisfy a request, but it is unusable as it is not contiguous.

Internal Fragmentation:

  • The space wasted internal to the allocated memory regions;
  • allocated memory may be slightly larger than requested memory;
  • this size difference is wasted memory internal to a partition.

Swapping

A system will often be running more processes than memory can support. To handle this, swapping is used: when a process is no longer actively using its memory, the memory it was allocated can be copied to disk and the freed space reallocated to another process. When the process wishes to run again, its memory can be copied back from disk and it continues running. Swapping is often slow, as reads and writes to disk are slow in comparison to memory.

Overlays

Rather than swapping a whole process's memory in and out, which is slow, only the pieces of data and code needed at any given time are swapped in. This conserves resources and cuts down swap times, but requires a more complex structure to manage and implement such policies.
