ELSOC Wiki

This topic looks at some of the problems that occur when we have multiple processes or multiple threads running on one machine and sharing resources.

Concurrency means running several processes or threads at the same time; in reality this is done by running one process for a short time and then switching to another, a job known as scheduling.

Synchronisation is the means by which we can solve some of these problems.

Race Conditions

A race condition or race hazard is a flaw in a system or process whereby the output and/or result of the process is unexpectedly and critically dependent on the sequence or timing of other events. The term originates with the idea of two signals racing each other to influence the output first.

Problems can occur when processes attempt to share resources, be it hardware, code, or memory. Imagine two processes attempting to increment a variable x. The trouble is that the single statement x++; becomes several assembly instructions: load x into a register, add 1, and store x back to memory.

At any point in this sequence the scheduler may intervene and allow the other process to run. If this happens just after the load instruction, both processes load the same value of x into a register, both increment it, and both write it back, so x ends up incremented only once instead of twice. Clearly there are segments of code during which we do not want anyone else operating on our data. These segments are called "critical sections" and it is important to be able to identify them.
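
A minimal sketch of this race, assuming POSIX threads (the thread function name and iteration counts are invented for illustration):

#include <pthread.h>
#include <stdio.h>

static int x = 0;                 /* shared counter, no protection */

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++)
        x++;                      /* load, add 1, store: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* expected 200000, but lost updates usually make it smaller */
    printf("x = %d\n", x);
    return 0;
}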

Protecting Critical Sections

Once we have identified our critical sections of code, and the resources being used, we need a way of telling other threads that they cannot execute the same code or operate on the same variables at the same time. We do this by making the critical sections mutually exclusive: no two processes can be executing the same critical section at once. This is most commonly achieved using synchronisation primitives to co-ordinate the different processes and threads. Critical sections should be kept as small as possible to minimise the impact on the rest of the system; if we place a long loop inside a critical section we may stop other threads from getting on with their work, slowing down the response time of the whole system.
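
Continuing the sketch above, wrapping the increment in a mutual-exclusion primitive removes the race; a POSIX mutex is assumed here, and the critical section is kept as small as possible:

#include <pthread.h>

static int x = 0;
static pthread_mutex_t x_lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&x_lock);    /* enter critical section */
        x++;                            /* only one thread at a time here */
        pthread_mutex_unlock(&x_lock);  /* leave as soon as possible */
    }
    return NULL;
}

Dropping this version of increment into the previous example makes it print 200000 every time.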

Synchronisation Options and Primitives

Most of these solutions involve one or another of the many synchronisation primitives. These include taking turns, disabling interrupts, locks, semaphores and monitors, several of which rely on hardware support for atomic instructions such as test-and-set.

Taking Turns

This involves assigning each thread a number. Each thread constantly checks a shared variable to see if it is equal to its number; if not, it checks again; if it is, it proceeds to run its critical section. Once finished, it sets the variable to the next number so the next thread can run.
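
A sketch of strict alternation between two threads, assuming POSIX threads and C11 atomics (the thread count and loop lengths are arbitrary):

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define NTHREADS 2

static atomic_int turn = 0;                        /* whose turn it is to enter */

static void *worker(void *arg)
{
    int id = *(int *)arg;
    for (int i = 0; i < 5; i++) {
        while (atomic_load(&turn) != id)
            ;                                      /* busy wait for our turn */
        printf("thread %d in its critical section\n", id);
        atomic_store(&turn, (id + 1) % NTHREADS);  /* pass the turn on */
    }
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    int ids[NTHREADS] = {0, 1};
    for (int i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, worker, &ids[i]);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}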

Pros:

- fair, each process takes turns.

Cons:

- the phenomenon of "busy waiting": each waiting process/thread burns processor time repeatedly checking whether it is its turn, taking that time away from the process that could actually be doing useful work.
- the process whose turn it is may be busy doing something completely different at the time and never enter its critical section, so the turn is never passed on and everyone else is stuck waiting.

Disabling Interrupts

A process may be able to turn off interrupts when it enters a critical section, preventing a timer tick from invoking the scheduler. This ensures the process cannot be pre-empted while inside the critical section.
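
As a sketch only: this is kernel-only code that will not build as a normal user program, and the Linux-style local_irq_save/local_irq_restore macros are an assumption rather than something named on this page:

#include <linux/irqflags.h>

static int shared_counter;

static void update_counter(void)
{
    unsigned long flags;

    local_irq_save(flags);      /* disable interrupts on this CPU, remembering their old state */
    shared_counter++;           /* critical section: no timer tick can pre-empt us here */
    local_irq_restore(flags);   /* put interrupts back the way they were */

    /* Note: this only affects the local CPU; on a multiprocessor, other
       CPUs can still touch shared_counter, so it is not enough on its own. */
}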

Pros:

- really, really simple.

Cons:

- we can only do this in kernel mode, so normal user programs can't use it.
- it blocks absolutely everyone else from doing anything, and interrupts that occur during this time are missed or delayed.
- we must remember to turn interrupts back on again afterwards!

Locks

A lock is just a variable that indicates whether a process may enter a critical section, letting it do so without fear of disrupting someone else's critical section and sure that no one else will interfere while its own critical section is running. Implementing locks requires special hardware support: we need a way to check the variable, and potentially change it, as a single "atomic" operation, one that is indivisible and cannot be interrupted partway through, so no other process can get in between the check and the update. Many processors provide such an instruction, and it is often called "test and set". Functions that implement locks in C are typically named lock_acquire and lock_release.
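
A minimal spinlock sketch built on a test-and-set style operation; C11's atomic_flag_test_and_set is used here as a stand-in for the lock_acquire/lock_release functions mentioned above:

#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear means "unlocked" */

static void lock_acquire_spin(void)
{
    /* Test-and-set: atomically set the flag and return its previous value.
       If it was already set, someone else holds the lock, so keep spinning. */
    while (atomic_flag_test_and_set(&lock))
        ;                                     /* busy wait */
}

static void lock_release_spin(void)
{
    atomic_flag_clear(&lock);                 /* mark the lock free again */
}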

Pros:

- simple and available in user mode and kernel mode.

Cons:

- again we have the problem of "busy waiting".
- if a low priority process gets the lock, a higher priority process must wait for the lower one to give up the lock.
- there is the potential for starvation if there is more than one process waiting for the lock to be released.

Semaphores

To address the issue of busy waiting in critical sections of code, Dijkstra (1965) introduced two more primitives: proberen (P, Dutch for test) and verhogen (V, Dutch for increment). These two primitives are more powerful than simply spinning on a lock.

Semaphores are implemented using a queue and a count. A process tests (waits on) a semaphore, and if a resource is available it continues. If not, the process is placed at the end of the queue and put to sleep. When another process signals the semaphore, the first process is taken off the queue and allowed to pass. The queue management is based on the count: when the count is positive it is the number of resources still available; when it is zero there is nothing free but no one is waiting; when it is negative, its absolute value is the number of processes waiting in the queue. Every signal increments the count and every test decrements it, and both operations are uninterruptible.

Below are pseudo code implementations of the atomic primitives.

wait(S):
    S.count--;
    if (S.count < 0) {
        add this process to the queue;
        sleep;
    }

signal(S):
    S.count++;
    if (S.count <= 0) {
        remove a process P from the queue;
        wakeup(P);
    }

The count a semaphore is initialised with determines how many processes can pass before the semaphore forces them to wait. With an initial count of 1, a semaphore can be used as a mutex (a fancy word for a lock).
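
For example, with POSIX semaphores (an assumed API; the page does not name a particular implementation), a semaphore initialised to 1 can protect the shared counter from earlier:

#include <semaphore.h>

static sem_t mutex;            /* initialised once with sem_init(&mutex, 0, 1) */
static int x = 0;

static void safe_increment(void)
{
    sem_wait(&mutex);          /* P / wait: count drops to 0, others now block */
    x++;                       /* critical section */
    sem_post(&mutex);          /* V / signal: count back to 1, wake a waiter */
}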

Semaphores can be combined into more complicated concurrency structures and are well suited to solving producer/consumer style problems, as the sketch below illustrates.
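
A sketch of the classic bounded-buffer producer/consumer solution using POSIX semaphores; the buffer size, item values and thread setup are all invented for illustration:

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define BUF_SIZE 8

static int buffer[BUF_SIZE];
static int in = 0, out = 0;

static sem_t empty_slots;   /* counts free slots, initialised to BUF_SIZE */
static sem_t full_slots;    /* counts filled slots, initialised to 0 */
static sem_t mutex;         /* binary semaphore guarding the buffer, initialised to 1 */

static void *producer(void *arg)
{
    for (int item = 0; item < 20; item++) {
        sem_wait(&empty_slots);          /* wait for a free slot */
        sem_wait(&mutex);                /* enter critical section */
        buffer[in] = item;
        in = (in + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&full_slots);           /* signal: an item is ready */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 20; i++) {
        sem_wait(&full_slots);           /* wait for an item */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % BUF_SIZE;
        sem_post(&mutex);
        sem_post(&empty_slots);          /* signal: a slot is free again */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    pthread_t p, c;
    sem_init(&empty_slots, 0, BUF_SIZE);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}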

Monitors and Condition Variables

A monitor is a high-level synchronisation primitive in which variables, functions and data types are grouped together. Everything contained in a monitor can only be accessed from within the monitor, and only a single process can be inside a monitor at a time. This lets mutual exclusion be implemented by the compiler rather than by the programmer, hopefully resulting in fewer errors. If a process calls into a monitor while another process is already inside, it is placed in a queue and is given access once it reaches the front of the queue.

To allow processes to wait for particular events, monitors use condition variables. Once inside a monitor, a process can wait on a condition variable using the wait() operation until another process invokes the signal() operation on it. The signal operation wakes exactly one waiting process; if no process is waiting, it does nothing.

(Figure: monitor with condition variable queues)
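
C has no built-in monitors, but a rough equivalent can be sketched by grouping the shared data with a POSIX mutex and condition variable; this single-slot buffer is an invented example, not something from the original page:

#include <pthread.h>

/* A tiny "monitor": the data, its lock, and its condition variable together. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  item_ready   = PTHREAD_COND_INITIALIZER;
static int item;
static int have_item = 0;

void put_item(int value)
{
    pthread_mutex_lock(&monitor_lock);    /* only one process "inside" at a time */
    item = value;
    have_item = 1;
    pthread_cond_signal(&item_ready);     /* wake exactly one waiter, if any */
    pthread_mutex_unlock(&monitor_lock);
}

int get_item(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (!have_item)                    /* wait for the event we care about */
        pthread_cond_wait(&item_ready, &monitor_lock);
    have_item = 0;
    int value = item;
    pthread_mutex_unlock(&monitor_lock);
    return value;
}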
