
THESE ARE MY ANSWERS AND ARE BY NO MEANS CORRECT OR THE BEST ANSWER. CONTRIBUTE YOURS!!! --Hankhank 23:29, 15 June 2009 (UTC)

1. Describe the two general roles of an operating system, and elaborate why these roles are important.

The first role of any operating system is to provide a level of abstraction from the hardware, by providing libraries and system calls that handle the tasks which interface with the system's hardware. This allows programmers to write applications that are not dependent upon the hardware configuration of the user's system. The operating system's second role is to manage the system's resources, such as memory, disks and the variety of I/O devices; this is important as it ensures that resources are shared efficiently, effectively and safely. Most modern operating systems also provide multitasking: the operating system manages the multiple processes and ensures that they cannot interfere with one another, whilst giving users the perception that the system is running multiple applications concurrently.

2. Using a simple system call as an example (e.g. getpid, or uptime), describe what is generally involved in providing the result, from the point of calling the function in the C library to the point where that function returns.

When a system call is made, a mode switch takes place between the calling function and the kernel. To enter the kernel, the C library stub for the system call loads the system call number and triggers a software exception (trap). The CPU then jumps to the kernel's exception handler, which saves the current CPU state (registers, program counter) on the kernel stack. The handler works out which system call was requested and runs the appropriate kernel code to handle it. The result is placed wherever the calling convention dictates (typically a register, or a user-supplied buffer), so that it is available once the exception returns. If the scheduler lets the original process continue, the saved CPU state is restored with the result already in the correct register, and execution resumes in the C library stub, which returns to the caller as if an ordinary function had been called.
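
A minimal Linux/glibc sketch of the boundary being described: the getpid() wrapper and the raw syscall(SYS_getpid) stub both issue the same trap into the kernel and return the same value. The program and its names are purely illustrative.

#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>       /* getpid(): the C library wrapper        */
#include <sys/syscall.h>  /* SYS_getpid: the raw system call number */

int main(void)
{
    /* The library wrapper loads the system call number, traps into the
       kernel, and returns the value the kernel left in the result register. */
    pid_t via_wrapper = getpid();

    /* The same trap issued through the generic syscall() stub. */
    pid_t via_trap = (pid_t)syscall(SYS_getpid);

    printf("getpid() = %d, syscall(SYS_getpid) = %d\n",
           (int)via_wrapper, (int)via_trap);
    return 0;
}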

3. Why must the operating system be more careful when accessing input to a system call (or producing the result) when the data is in memory instead of registers?

When the kernel accesses memory it does so largely without restriction, whereas in userland memory access is tightly controlled. If a system call passes its arguments or results via pointers into memory rather than in registers, the operating system must check that the user-level application actually has permission to access the specified memory locations, since the user could, maliciously or unwittingly, be pointing at another process's memory or at the memory holding the kernel itself. Failing to check could cause security breaches, corruption of the kernel or corruption of other processes, all of which result in an unstable and insecure system. Registers, by contrast, are saved and restored per process, so they can only contain the caller's own data.

4. Is putting security checks in the C library a good or a bad idea? Why?

Putting security checks in the C library effectively removes them from kernel mode, which can speed up the system and still provides a reasonable level of protection against poorly written applications. However, putting the checks in userland rather than in the kernel renders them ineffective against malicious users, who can simply bypass the C library and invoke system calls directly, or link against their own corrupted version of the library. Security checks therefore belong in the kernel; relying on the library for them is a bad idea.

5. Describe the states a process can take. What transitions are valid between the three states, and describe an event that might cause such a transition.

The first state is running: the process is currently being executed by the CPU. The second state is ready: the process is runnable and waiting to be scheduled. The third state is blocked: the process is waiting for a resource or event before it can continue. A process transitions from running to ready when its time slice expires (or it is preempted) and the scheduler picks another task; from ready to running when the scheduler dispatches it; from running to blocked when a resource or I/O it requires is not yet available; and from blocked to ready when that resource becomes available or the awaited event occurs.

6. Multi-programming (or multi-tasking) enables more than a single process to apparently execute simultaneously. How is this achieved on a uniprocessor?

Multi-tasking is achieved by the operating system running small time slices of a number of processes one after the other. If the slices are small enough and processed fast enough, this gives the illusion of simultaneous execution. The part of the kernel that manages which task runs next is the scheduler. The scheduler decides which process receives the next time slice, with the goal of keeping the system responsive while keeping the system's resources effectively occupied.

7. What is a process? What are attributes of a process?

A process is a running instance of a program: the unit the operating system uses to manage an executing application. Its attributes include an address space (code, data, heap and stack), the register contents and program counter, a process state (running, ready or blocked), and kernel-maintained information such as the process ID, owner, open files, accounting information and scheduling priority.

8. What is the function of the ready queue?

The ready queue holds the processes that are ready to run and are waiting to be given the CPU. In a round-robin scheduling scheme there is a single ready queue managed FIFO: newly ready processes are added to the back and the scheduler dispatches from the front.

9. What is the relationship between threads and processes?

A process can encompass a number of threads of execution. Threads can be implemented as separate processes, but there are numerous advantages to running multiple threads inside one process: the address space, global variables, open files, accounting information, child processes and so on are shared between the threads, while each thread has its own stack, program counter, registers and state.


10. Describe how a multi-threaded application can be supported by a user level thread package. It may be helpful to consider (and draw) the components of such a package, and the function they perform.

A co-operatively scheduled user-level thread package implements threads entirely inside a single process, invisible to the kernel. The package keeps a thread control block for each thread (its saved registers, program counter, stack pointer and state), allocates each thread its own stack within the process's address space, and maintains a ready queue of runnable threads. A dispatcher routine performs the context switch in user space: a running thread calls yield() (or blocks on a package-provided primitive), the dispatcher saves its context into its control block, picks the next thread from the ready queue and restores that thread's context. A sketch of such a package is below.
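
A minimal sketch of such a package, using the POSIX ucontext calls (getcontext/makecontext/swapcontext) in place of the hand-written assembly a real package would use; the worker threads, the yield() routine and the round-robin policy are all illustrative.

#define _XOPEN_SOURCE 600
#include <stdio.h>
#include <ucontext.h>

#define NTHREADS   2
#define STACK_SIZE (64 * 1024)

/* Per-thread state kept entirely in user space: a saved context
   (registers, PC, stack pointer) plus a private stack. */
static ucontext_t thread_ctx[NTHREADS], main_ctx;
static char       stacks[NTHREADS][STACK_SIZE];
static int        current;

/* Co-operative yield: save our context and dispatch the next thread. */
static void yield(void)
{
    int prev = current;
    current = (current + 1) % NTHREADS;
    swapcontext(&thread_ctx[prev], &thread_ctx[current]);
}

static void worker(int id)
{
    for (int i = 0; i < 3; i++) {
        printf("thread %d, step %d\n", id, i);
        yield();                  /* hand the CPU to the next thread */
    }
    /* returning switches to uc_link (the main context) */
}

int main(void)
{
    for (int i = 0; i < NTHREADS; i++) {
        getcontext(&thread_ctx[i]);
        thread_ctx[i].uc_stack.ss_sp   = stacks[i];
        thread_ctx[i].uc_stack.ss_size = STACK_SIZE;
        thread_ctx[i].uc_link          = &main_ctx;
        makecontext(&thread_ctx[i], (void (*)(void))worker, 1, i);
    }
    swapcontext(&main_ctx, &thread_ctx[current]);   /* start thread 0 */
    return 0;
}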


11. Name some advantages and disadvantages of user-level threads.

Generally user-level threads can be switched between faster than kernel-level threads, as there is no overhead of entering kernel mode. User-level threads can also provide multithreading on operating systems that do not otherwise support threads. However, user-level threads do not allow I/O to overlap with computation: a thread that blocks on I/O blocks every other thread in the process. They also make it hard to utilise multiprocessor systems, as from the OS's perspective there is only a single schedulable process. Finally, because threads must manually yield() to return control to the dispatcher, a single thread can end up monopolising the CPU.

12. Why are user-level threads packages generally cooperatively scheduled?

Co-operative scheduling is usually the only practical option. Preemptive scheduling requires hardware and kernel support, usually in the form of timer interrupts; interrupts are delivered to the operating system, so for user-level preemption the OS would have to recognise that a userland thread package wanted the timer event and hand it on to that process (e.g. as a signal), adding complexity and overhead. Also, the programmer writing a multi-threaded application has complete control over how its threads interact, so it is quite natural to write co-operating threads that yield at convenient points, rather than paying the cost of fielding timer interrupts.

13. Enumerate the advantages and disadvantages of supporting multi-threaded applications with kernel level threads.

Kernel-level threads allow a process to make effective use of multiple CPUs, allow threads to be preemptively scheduled, and allow blocking I/O in one thread to proceed in parallel with computation in another. However, thread creation, destruction, blocking and unblocking all require entering the kernel, which carries significant overhead compared with doing the same operations at user level.


14. Describe the sequence of steps that occur when a timer interrupt occurs that eventually results in a context switch to another application.

The timer raises an interrupt in hardware. The CPU automatically transfers control to a predefined address, where the kernel's interrupt handler lies. The interrupt handler saves the registers it will use, then determines what caused the interrupt. Since it was a timer interrupt, the kernel invokes its scheduler; the scheduler determines that the current thread's time slice has ended and chooses another thread to run, so a context switch between the outgoing thread and the incoming one takes place: the rest of the outgoing thread's context is saved, the incoming thread's context is restored, and the kernel returns from the interrupt into the new thread.

15. Context switching between two threads of execution within the operating system is usually performed by a small assembly language function. In general terms, what does this small function do internally?

The small assembly function saves the current context by pushing the registers the calling convention requires onto the current thread's stack, then stores the resulting stack pointer into the outgoing thread's control block. It then loads the stack pointer of the incoming thread from its control block and pops that thread's saved registers off its stack. When the function returns, it returns on the new thread's stack, and therefore into the new thread of execution.

16. What is a race condition? Give an example.

A race condition is a situation where the outcome depends, usually unintentionally, on the precise timing or interleaving of other events; two activities "race" each other to affect the result first. It commonly occurs in concurrent code when a variable shared by several threads is not accessed under mutual exclusion. For example, a process incrementing a shared counter (which takes several instructions: load, add, store) may be interrupted mid-update; the kernel schedules another process, which successfully updates the counter; when the original process resumes it stores its stale value and the other update is lost. A small example is sketched below.
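
A minimal pthreads sketch of exactly this lost-update race; the counter usually ends up well short of 2,000,000 because the two threads' read-modify-write sequences interleave. Names and loop counts are illustrative.

#include <stdio.h>
#include <pthread.h>

/* Shared counter updated without any mutual exclusion.  The load,
   add and store are separate steps, so two threads can interleave
   and lose updates. */
static volatile long counter = 0;

static void *increment(void *arg)
{
    for (long i = 0; i < 1000000; i++)
        counter++;                 /* read-modify-write: not atomic */
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    /* Expected 2000000, but the race usually produces less. */
    printf("counter = %ld\n", counter);
    return 0;
}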

17. What is a critical region? How do they relate to controlling access to shared resources?

A critical region (critical section) is a section of code in which a shared resource is accessed. Access to the shared resource is controlled by controlling entry to the critical region: if only one thread at a time can be executing the code that touches the resource, only one thread at a time can touch the resource.

18. What are three requirements of any solution to the critical sections problem? Why are the requirements needed?

Mutual exclusion: no two processes may be inside the critical region at the same time; without it, updates to the shared resource can interleave and be corrupted (a race condition). Progress: a process running outside its critical region must not prevent others from entering theirs, and the decision about who enters next must not be postponed indefinitely; otherwise the region could sit unused while processes that want it are held up. Bounded waiting: no process should have to wait forever to enter its critical section; without this, a "solution" could still starve individual processes.

19. Why is turn passing a poor solution to the critical sections problem?

Turn passing is a poor solution because it leaves processes busy-waiting for their turn to enter the critical section. It also forces strict alternation: a process that finishes its turn must wait for the other process to take (and finish) its turn before it can enter again, so a process can be held up by another that is only executing non-critical code. Progress is therefore not guaranteed, because the process whose turn it is may never take its turn, leaving the others busy-waiting forever.

20. Interrupt disabling and enabling is a common approach to implementing mutual exclusion, what are its advantages and disadvantages?

The main advantage is simplicity: it can be implemented with a couple of instructions, and on a uniprocessor it guarantees the critical section runs without interruption. The disadvantages are that it causes problems when misused (a process that disables interrupts and never re-enables them hogs the CPU and can never be preempted), it is unusable as a userland mechanism for exactly that reason, it delays all interrupt handling while the critical section runs, and it does not provide mutual exclusion on a multiprocessor, since the other CPUs keep executing.

21. What is a test-and-set instruction? How can it be used to implement mutual exclusion? Consider using a fragment of pseudo-assembly language to aid your explanation.

A test-and-set instruction is an atomic instruction that reads the old value of a memory location and writes a new value (typically 1) into it as a single indivisible step. Because nothing can get between the read and the write, it can be used to build a lock.

This can easily be used to implement mutual exclusion; see the code fragment below.

getthelock = 1;
while (getthelock) {
    getthelock = testandset(lock, 1);   /* atomically set lock to 1, return its old value */
}

/* enter critical region */
criticalregion();

/* exit critical region: release the lock */
set(lock, 0);

In this fragment the variable getthelock records whether the lock has been acquired. The code loops, calling testandset(lock, 1), which atomically sets lock to 1 and returns its previous value. If the previous value was 0 we have acquired the lock and fall out of the loop into the critical section; if it was 1 someone else holds the lock and we keep spinning. On exit we set lock back to 0 so the next process can acquire it.
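
Roughly the same spin lock written against a real atomic primitive, here GCC's __sync_lock_test_and_set builtin (which the compiler lowers to the machine's test-and-set or load-linked/store-conditional sequence); the acquire/release names are just illustrative.

#include <stdio.h>

static volatile int lock = 0;        /* 0 = free, 1 = held */

static void acquire(volatile int *l)
{
    /* Atomically store 1 and get the old value back; keep spinning
       while the old value was already 1 (someone else holds the lock). */
    while (__sync_lock_test_and_set(l, 1))
        ;                            /* busy-wait */
}

static void release(volatile int *l)
{
    __sync_lock_release(l);          /* atomically store 0 */
}

int main(void)
{
    acquire(&lock);
    printf("in the critical region\n");
    release(&lock);
    return 0;
}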


22. What is the producer consumer problem? Give an example of its occurrence in operating systems.

The producer-consumer problem is a classic concurrency problem. It arises when one process (the producer) generates data and another process (the consumer) uses that data, but the two may run at different rates; for example, the consumer could be using data faster than the producer can produce it. To decouple them we place a bounded buffer (queue) between them, onto which the producer puts items and from which the consumer takes them, together with a count of how many items are in the buffer, so producers do not overfill it and consumers do not try to take data that is not there. Operations on the buffer and the count are critical sections, as multiple processes access them, and so require synchronisation primitives.

An example in an operating system is interfacing with a network device: the device produces data at one rate and places it in a buffer, and the operating system (and ultimately the application) consumes it at another rate. A sketch using semaphores is below.
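
A sketch of the bounded-buffer solution with POSIX semaphores standing in for the course's P()/V(); the buffer size, item counts and names are illustrative.

#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define N 8                     /* size of the bounded buffer */

static int   buffer[N];
static int   in, out;           /* next slot to fill / empty   */
static sem_t empty_slots;       /* counts free slots  (init N) */
static sem_t full_slots;        /* counts used slots  (init 0) */
static sem_t mutex;             /* protects buffer, in and out */

static void *producer(void *arg)
{
    for (int item = 0; item < 32; item++) {
        sem_wait(&empty_slots);         /* block if the buffer is full  */
        sem_wait(&mutex);
        buffer[in] = item;
        in = (in + 1) % N;
        sem_post(&mutex);
        sem_post(&full_slots);          /* signal: one more item queued */
    }
    return NULL;
}

static void *consumer(void *arg)
{
    for (int i = 0; i < 32; i++) {
        sem_wait(&full_slots);          /* block if the buffer is empty */
        sem_wait(&mutex);
        int item = buffer[out];
        out = (out + 1) % N;
        sem_post(&mutex);
        sem_post(&empty_slots);         /* signal: one more free slot   */
        printf("consumed %d\n", item);
    }
    return NULL;
}

int main(void)
{
    sem_init(&empty_slots, 0, N);
    sem_init(&full_slots, 0, 0);
    sem_init(&mutex, 0, 1);

    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}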

23. A semaphore is a blocking synchronisation primitive. Describe how they work with the aid of pseudo-code. You can assume the existence of a thread_block() and a thread_wakeup() function.

A semaphore is essentially a counter with blocking built in. It supports two operations: wait (also called P or test) and signal (also called V). Wait decrements the counter; if the result is negative, the caller is placed on the semaphore's queue and blocked until the count allows it to proceed. Signal increments the counter and, if there are processes on the queue, wakes one of them. Pseudo-code, assuming thread_block() and thread_wakeup():

wait() (also called P or test):

    count--
    if (count < 0) {
        add the calling thread to the semaphore's queue
        thread_block()
    }

signal() (also called V):

    count++
    if (count <= 0) {
        remove a thread from the queue
        thread_wakeup(that thread)
    }


24. Describe how to implement a lock using semaphores.

Initialise the semaphore's count to 1. To acquire the lock, call P() (wait/test); to release it, call V() (signal). A sketch is below.
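
A minimal sketch with POSIX semaphores standing in for P()/V(); the function names are illustrative.

#include <semaphore.h>

static sem_t lock_sem;

void lock_init(void)    { sem_init(&lock_sem, 0, 1); }  /* count 1 = unlocked         */
void lock_acquire(void) { sem_wait(&lock_sem); }        /* P(): 1 -> 0, or block      */
void lock_release(void) { sem_post(&lock_sem); }        /* V(): 0 -> 1, wake a waiter */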


25. What are monitors and condition variables?

Monitors are high-level synchronisation primitives that encapsulate shared data and the operations on it. Data placed inside a monitor may only be accessed through the monitor's operations, and only a single process is allowed to be executing inside the monitor at any one time. A condition variable is a variable placed inside a monitor to allow a process to wait inside the monitor until some event or condition occurs; while it waits, it releases the monitor so others can enter. Once a condition variable is created, a process can wait() on it until another process calls signal() on it. signal() wakes a single waiting process and does nothing if no processes are waiting. A rough equivalent using pthreads primitives is sketched below.
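
Pthreads has no compiler-enforced monitors, but a mutex plus a condition variable gives roughly the same structure; a minimal sketch in which put_item/get_item and the available counter are illustrative.

#include <pthread.h>

/* Monitor-style access: one mutex guards the shared state, and a
   condition variable lets a thread sleep inside the "monitor" until
   another thread signals that the condition it needs now holds. */
static pthread_mutex_t monitor_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  data_ready   = PTHREAD_COND_INITIALIZER;
static int             available    = 0;

void put_item(void)
{
    pthread_mutex_lock(&monitor_lock);     /* enter the monitor        */
    available++;
    pthread_cond_signal(&data_ready);      /* wake one waiting thread  */
    pthread_mutex_unlock(&monitor_lock);   /* leave the monitor        */
}

void get_item(void)
{
    pthread_mutex_lock(&monitor_lock);
    while (available == 0)                 /* wait() releases the lock */
        pthread_cond_wait(&data_ready, &monitor_lock);
    available--;
    pthread_mutex_unlock(&monitor_lock);
}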

26. What is a deadlock? What is starvation? How do they differ from each other?

A deadlock is a situation where a set of processes cannot proceed because each is waiting for a resource that another process in the set holds, while refusing (or being unable) to release the resources it already holds. No process in the set can make progress.

Starvation, on the other hand, is a situation where a single process is never given a resource, event or CPU time that it requires to proceed, even though the system as a whole keeps running. They differ in that deadlock stops every process in the cycle, whereas under starvation only the starved process fails to proceed.

27. What are the four conditions required for deadlock to occur?

Mutual exclusion: a resource can be held by only one process at a time. Hold and wait: processes can hold resources while waiting for more. No preemption: resources cannot be forcibly taken back from a process. Circular wait: there is a cycle of two or more processes, each waiting for a resource held by the next process in the cycle.


28. Describe four general strategies for dealing with deadlocks.

Detect and recover: let deadlocks happen, detect them (e.g. by finding a cycle in the resource allocation graph) and then recover, for instance by killing a process or rolling back to an earlier state and re-executing in the hope the deadlock does not recur. Ignore the problem: deadlocks may be so rare, and detection and recovery so expensive, that it is not worth dealing with them. Deadlock avoidance: only grant requests that leave the system in a safe state, i.e. a state from which every process can still be given its maximum possible resource demand in some order (the banker's algorithm). Deadlock prevention: structurally negate one of the four conditions, for example by forcing resources to be acquired in a fixed global order so that circular wait cannot arise.

29. For single unit resources, we can model resource allocation and requests as a directed graph connecting processes and resources. Given such a graph, what is involved in deadlock detection?

Deadlock detection amounts to looking for a cycle in the graph: with single-unit resources, the processes and resources on a cycle are exactly the ones that are deadlocked. So detection involves traversing the graph (for example a depth-first search from each node, checking whether the current path revisits a node); if no cycle is found, the system is not deadlocked.


30. Is the following system of four processes with 2 resources deadlocked?
Current allocation matrix

P1 1 3
P2 4 1
P3 1 2
P4 2 0

Current request matrix

P1 1 2
P2 4 3
P3 1 7
P4 5 1

Current availability

1 4

P1's request (1,2) can be satisfied from (1,4), so P1 can run and release its allocation, making (2,7) available; P3's request (1,7) can then be satisfied, making (3,9) available; but neither P2's request (4,3) nor P4's request (5,1) can ever be satisfied. P2 and P4 are deadlocked.


If the availability vector is as below, is the system above still deadlocked?

2 3


Yes. Only P1's request (1,2) can be satisfied from (2,3); running P1 makes (3,6) available, which still cannot satisfy P2 (4,3), P3 (1,7) or P4 (5,1). P2, P3 and P4 are deadlocked.

Is the system deadlocked if the availability is

2 4

No. P1 can run (making (3,7) available), then P3 (making (4,9)), then P2 (making (8,10)), then P4; every process can complete, so there is no deadlock. A sketch of the detection algorithm on these matrices is below.
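
A sketch of the detection algorithm run over the matrices from this question, with the first part's availability (1,4); the logic repeatedly "runs" any process whose outstanding request fits in what is available, reclaims its allocation, and reports whatever is left as deadlocked. The function and array names are illustrative.

#include <stdio.h>
#include <stdbool.h>

#define NPROC 4
#define NRES  2

/* Matrices from question 30. */
static const int alloc[NPROC][NRES]   = {{1,3},{4,1},{1,2},{2,0}};
static const int request[NPROC][NRES] = {{1,2},{4,3},{1,7},{5,1}};

static void detect(int avail[NRES])
{
    bool done[NPROC] = {false};
    bool progress = true;

    while (progress) {
        progress = false;
        for (int p = 0; p < NPROC; p++) {
            if (done[p]) continue;
            bool can_run = true;
            for (int r = 0; r < NRES; r++)
                if (request[p][r] > avail[r]) can_run = false;
            if (can_run) {
                for (int r = 0; r < NRES; r++)
                    avail[r] += alloc[p][r];   /* reclaim its allocation */
                done[p] = true;
                progress = true;
            }
        }
    }
    for (int p = 0; p < NPROC; p++)
        if (!done[p]) printf("P%d is deadlocked\n", p + 1);
}

int main(void)
{
    int avail[NRES] = {1, 4};
    detect(avail);                 /* prints P2 and P4 */
    return 0;
}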


31. Assuming the operating system detects the system is deadlocked, what can the operating system do to recover from deadlock?

It can kill processes one at a time until the deadlock is broken. It can preempt resources, taking them away from a process and giving them to another (if the resource allows this). It can also roll the system back to a checkpointed state from before the deadlock and continue from there, hoping the deadlock does not recur.

32. What must the banker's algorithm know a priori in order to prevent deadlock?

The maximum number of units of each resource that each process may ever request (each process's maximum claim), declared in advance.

33. Describe the general strategy behind deadlock prevention, and give an example of a practical deadlock prevention method.

Deadlock prevention attacks one of the four conditions for deadlock: mutual exclusion, hold and wait, no preemption or circular wait. The most practical method attacks circular wait by imposing a global ordering on resources and requiring every process to acquire them in that order. With such a policy a cycle of waiting processes cannot form, so deadlock cannot occur; a lock-ordering sketch is below.
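
A minimal pthreads sketch of the ordering rule: both threads take lock_a before lock_b, so the hold-one-and-wait-for-the-other cycle cannot arise. The lock and function names are illustrative.

#include <pthread.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;

/* Both threads acquire the locks in the same global order (a before b),
   so a cycle of "each holds one and waits for the other" cannot form. */
void *thread_one(void *arg)
{
    pthread_mutex_lock(&lock_a);
    pthread_mutex_lock(&lock_b);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}

void *thread_two(void *arg)
{
    pthread_mutex_lock(&lock_a);   /* same order: never b then a */
    pthread_mutex_lock(&lock_b);
    /* ... use both resources ... */
    pthread_mutex_unlock(&lock_b);
    pthread_mutex_unlock(&lock_a);
    return NULL;
}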


34. Filesystems can support sparse files, what does this mean? Give an example of an application's file organisation that might benefit from a file system's sparse file support.

A sparse file is represented in a file system by an inode whose block count is less than that implied by its apparent file size. Sparse files are useful for files whose contents are mostly empty: rather than allocating disk blocks for the empty regions, the file system simply leaves those block pointers unallocated and returns zeros when they are read, allocating real blocks only when data is actually written there. This benefits applications whose file organisation is large but mostly empty, for example some database files or core dumps: the logical structure of the file is preserved without the empty regions consuming disk space.

35. Give an example of a scenario that might benefit from a file system supporting an append-only access write.

A log file (for example a system or audit log) benefits from append-only access: records can only be added at the end, and existing records cannot be modified or overwritten, which both matches the access pattern and protects the integrity of the history.

36. Give a scenario where choosing a large filesystem block size might be a benefit; give an example where it might be a hindrance.

Large block sizes are of benefit when mostly large files are being stored: fewer blocks and block pointers are needed per file, more data is transferred per seek, and the data is kept more contiguous rather than scattered. They become a hindrance when many files smaller than the block size are stored, as each small file still occupies a whole block but does not use all of it, causing internal fragmentation.


37. Give an example where contiguous allocation of file blocks on disks can be used in practice.

An optimisation in Ext2fs is pre-allocation: when a file grows, the file system reserves a run of contiguous blocks (typically up to 8) beyond what was requested. This ensures that if the process later appends more data, that data is stored contiguously with the rest of the file rather than being separated by blocks that another file grabbed in between. If the reserved space is not used, the file system frees it again when the file is closed.

38. What file access pattern is particularly suited to chained file allocation on disk?

Sequential access: the file is read in order, so following the chain from one block to the next adds very little overhead.

39. What file allocation strategy is most appropriate for random access files?

Contiguous allocation: the block holding any offset can be computed directly, so you do not have to follow a linked list through the preceding blocks to reach the data; any part of the file can be accessed directly.


40. Compare bitmap-based allocation of blocks on disk with a free block list.

A free block list, after a period of many allocations and deallocations, ends up in effectively random order; allocating from it then scatters files across the disk and makes contiguous allocation hard. A bitmap keeps the free/used state of blocks in disk order, making it easy to find runs of contiguous free blocks, but finding and freeing space is no longer O(1), since the bitmap must be scanned. The benefit of better on-disk locality generally far outweighs that small overhead.

41. How can the block count in an inode differ from (file size / block size) rounded up to the nearest integer? Can the block count be greater, smaller, or both?

The block count can be smaller than file size / block size for a sparse file, where blocks that have never been written are simply not allocated. It can also be greater, because the block count includes the indirection blocks used to locate the data, not just the data blocks themselves. So it can be both.

42. What information might be stored in the inode itself?


  • File mode - directory, regular file, etc.
  • Uid - user id of the owner
  • Gid - group id
  • Creation, access and modification times
  • Size of the file
  • Permissions to access the file
  • Reference count - the number of file names hard-linked to the inode
  • Block count - the number of blocks used by the file
  • Direct blocks - block locations of directly mapped data
  • Single indirect - the location of a block that contains the locations of data blocks
  • Double indirect - the location of a block that contains the locations of single-indirect blocks
  • Triple indirect - the location of a block that contains the locations of double-indirect blocks


43. Given that the maximum file size of a combination of direct, single indirection, double indirection, and triple indirection in an inode-based filesystem is approximately the same as a filesystem solely using triple indirection, why not simply use only triple indirection to locate all file blocks?

Most files are small (a few kilobytes on average) and can be mapped entirely by the direct blocks in the inode, so it makes no sense to force every access to walk a tree three levels deep just to find a single block; the extra disk reads for the indirection blocks would dwarf the cost of reading the data itself. The indirect levels exist only to accommodate the minority of large files, which are in any case often read sequentially.

44. What is the maximum file size supported by a file system with 16 direct blocks, single, double, and triple indirection? The block size is 512 bytes. Disk block numbers can be stored in 4 bytes.

Each indirection block holds 512 / 4 = 128 block numbers, so the maximum size is 512 * (16 + 128 + 128*128 + 128*128*128) = 512 * 2,113,680 = 1,082,204,160 bytes = 1,056,840 KB ≈ 1,032 MB ≈ 1 GB.

45. The Berkeley Fast Filesystem (and Linux Ext2fs) use the idea of block groups. Describe what this idea is and what improvements block groups have over the simple filesystem layout of the System V file system (s5fs).

The idea is to partition the disk into equally sized block groups, each of which contains its own copy of the filesystem metadata (superblock) and its own bitmaps, inode table and data blocks. This keeps inodes close to the data blocks they describe, reducing seek times, and keeps related files close together. It also adds redundancy: s5fs kept all of this information at the start of the disk, so if that region was corrupted the whole file system was lost, whereas with copies spread across the disk there is a good chance of recovery.


46. What is the reference count field in the inode? You should consider its relationship to directory entries in your answer.

The reference count is the number of file names (directory entries) that hard-link to the inode. The translation between file names and inode numbers is kept in directory files: a directory is essentially a list of names and the corresponding inode numbers. Several names, possibly in different directories, can refer to the same inode number, and the inode records how many do. When a file name is deleted the reference count is decremented; only when it reaches zero (and no process still has the file open) is the file actually removed from the file system.

47. The filesystem buffer cache does both buffering and caching. Describe why buffering is needed. Describe how buffering can improve performance (potentially to the detriment of file system robustness). Describe how the caching component of the buffer cache improves performance.

Buffering lets the operating system queue I/O requests and data, and then continue with other work until the device interrupts (for reads) to say the request has completed, rather than busy-waiting on the slow disk; it also smooths out the speed mismatch between producers and consumers of the data. Buffering improves performance by letting writes "complete" immediately from the application's point of view: the data sits in the buffer and is written to disk later. That is where robustness suffers: if the system fails while data is still only in the buffer, data the application believes was written is lost. The caching component improves performance by exploiting locality, both temporal (a block accessed recently is likely to be accessed again soon, and a cached copy avoids the disk entirely) and spatial (blocks near a requested block are likely to be needed too, so they can be read ahead in the same disk operation).


48. What does flushd do on a UNIX system?

flushd is the flush daemon: it periodically (traditionally every 30 seconds or so) writes dirty (modified) blocks in the buffer cache out to disk. This bounds how much buffered data can be lost on a system failure, since anything still sitting only in the buffer cache would otherwise disappear.

49. Why might filesystems managing external storage devices do write-through caching (avoid buffering writes) even though there is a detrimental effect on performance?

Devices such as USB flash drives are likely to be removed from the system without warning. If writes are buffered rather than written through, the user may assume their data has been saved when it is in fact only in the cache; when the device is pulled out, the data never reaches it and is lost. Write-through sacrifices performance for that robustness.

50. Describe the difference between internal and external fragmentation. Indicate which of the two is most likely to be an issue on a) a simple memory management machine using base-limit registers and static partitioning, and b) a similar machine using dynamic partitioning.

Internal fragmentation is the space wasted inside an allocated region because the process uses less than it was allocated; the space is unrecoverable, as it belongs to the process and the OS has no way of knowing how much is actually in use at any one time. External fragmentation is the space wasted between allocated regions: free holes that are individually too small to be useful.

a) Internal fragmentation, since partitions are of a fixed size regardless of how much the process actually needs.

b) External fragmentation, since exactly what is needed is allocated, leaving variable-sized holes between allocations.

51. List and describe the four memory allocation algorithms covered in lectures. Which two of the four are more commonly used in practice?

Fixed partitioning - memory is divided into fixed-size partitions ahead of time; an incoming process is allocated a partition (wasting whatever part of it the process does not use).

Variable partitioning - memory is divided into partitions of different sizes; the best-fitting free partition is allocated, and if none is free the process can queue for one or take the next best fit.

Dynamic allocation - memory is carved out of a free list in exactly the size requested. Picking a region from the free list can be done with first fit (take the first hole that is big enough), next fit (like first fit, but resume the search where it last stopped), best fit (take the smallest hole that is big enough) or worst fit (take the largest hole). First fit and next fit are the two generally used in practice, as they have low overheads and perform at least as well as the others.


52. Base-limit MMUs can support swapping. What is swapping? Can swapping permit an application requiring 16M memory to run on a machine with 8M of RAM?

Swapping is writing a process's memory image out to disk so that its memory can be given to another process, and reading it back in when that process runs again; it allows the combined memory of all processes to exceed physical RAM. With a base-limit MMU, however, a process must be entirely (and contiguously) resident while it runs, so swapping cannot let a single 16MB application run on an 8MB machine; that requires virtual memory, where only the pages in use need to be resident. Swapping whole processes is also very expensive.

53. Describe page-based virtual memory. You should consider pages, frames, page tables and memory management units in your answer.

Page-based virtual memory gives each process its own virtual address space in which to operate. The virtual address space is divided into equal-sized pages, and physical memory into frames of the same size. When a virtual page is used by a process it is backed by a physical frame, and the page-to-frame translation is stored in the process's page table. To avoid consulting the page table in memory on every access, the memory management unit caches recent translations in a translation lookaside buffer (TLB), which typically holds on the order of 64 recent entries; the page table only needs to be accessed when the TLB misses.

54. Give some advantages of a system with page-based virtual memory compared to a simple system with base-limit registers that implements swapping.

Page-based virtual memory allows every process to use the same virtual addresses (e.g. code and data at the bottom of the address space and the stack at the top) while actually occupying different frames of physical memory, and a process's pages need not be contiguous in physical memory, so there is no need to find one large hole per process and no external fragmentation. It also means a process need not be entirely resident: only the pages actually in use need frames, so processes larger than physical memory can run and memory can be over-committed without swapping whole processes in and out, which a base-limit scheme would require and which is far more expensive.

55. Describe segmentation based virtual memory. You should consider the components of a memory address, the segment table and its contents, and how the final physical address is formed in your answer.

Segmentation is largely obsolete now, but: instead of using pages, the virtual address space is divided into segments of varying lengths, each corresponding to a logical unit such as code, data or stack. A virtual address consists of a segment number and an offset within that segment. The segment number indexes the segment table, whose entries hold the segment's physical base address and its length (limit), plus protection bits. The hardware checks that the offset is within the limit, and the physical address is formed by adding the offset to the base.


56. What is a translation lookaside buffer? What is contained in each entry?

A translation lookaside buffer is a cache, inside the MMU, of the most recently used virtual-to-physical page translations. On the MIPS R3000 each entry has two 32-bit halves: the high word holds the virtual page number (20 bits) and a 6-bit address space ID (ASID); the low word holds the physical frame number and four flag bits, N, D, V and G.

The virtual page number and the ASID are used by the MMU to decide whether an entry is the correct translation for the current access; the frame number gives the translation itself, and the flags give additional information:

N = non-cacheable: do not cache data from this page.
D = dirty: in practice a write-enable bit; if it is clear, writes to the page cause an exception.
V = valid: the translation is valid.
G = global: ignore the ASID when matching this entry.


57. Some TLBs support address space identifiers (ASIDs), why?

To allow translations belonging to multiple processes to coexist in the TLB, rather than having to flush the entire TLB on every context switch. Since different processes may have different translations for the same virtual address, the ASID is needed to tell whose entry is whose.

58. Describe a two-level page table. How does it compare to a simple page table array?

A two-level page table breaks the page table into a number of smaller second-level tables. The top-level table is indexed by the first part of the virtual page number and points to the relevant second-level table; the second-level table is indexed by the remaining bits of the virtual page number and contains the physical frame number. A two-level table reduces the memory consumed by page tables, because second-level tables only need to exist for the regions of the address space actually in use, whereas a simple array must have an entry for every virtual page, which becomes very large for big address spaces. Since processes typically use only a small part of their address space (code and data near the bottom, the stack near the top), the saving is large; the cost is an extra memory reference on a TLB miss. A lookup sketch is below.
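
A rough sketch of the lookup, assuming an illustrative 32-bit layout with a 10-bit top-level index, a 10-bit second-level index and 4 KB pages, and treating frame number 0 as "not mapped" purely for simplicity.

#include <stdint.h>

#define TOP_BITS    10
#define SECOND_BITS 10
#define OFFSET_BITS 12

typedef struct {
    uint32_t *entries;            /* frame numbers, or 0 if unmapped   */
} second_level_t;

typedef struct {
    second_level_t *tables[1 << TOP_BITS];   /* NULL = region unused  */
} page_table_t;

/* Translate a virtual address to a physical one; returns 0 on a fault. */
uint32_t translate(page_table_t *pt, uint32_t vaddr)
{
    uint32_t top    = vaddr >> (SECOND_BITS + OFFSET_BITS);
    uint32_t second = (vaddr >> OFFSET_BITS) & ((1 << SECOND_BITS) - 1);
    uint32_t offset = vaddr & ((1 << OFFSET_BITS) - 1);

    second_level_t *t = pt->tables[top];
    if (t == NULL || t->entries[second] == 0)
        return 0;                 /* page fault: no translation exists */

    return (t->entries[second] << OFFSET_BITS) | offset;
}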


59. What is an inverted page table? How does it compare to a two-level page table?

An inverted page table has one entry per physical frame rather than one per virtual page: entry i records which (address space, virtual page) is currently held in frame i. To find a translation, the virtual page number (plus ASID) is hashed; the hash table yields a candidate frame number, and the virtual page number stored in the inverted table at that entry is compared with the one being looked up. If they do not match (a hash collision), a link field in the entry points to the next candidate, and the chain is followed until a match is found; the index of the matching entry is the physical frame number. An inverted page table's size is proportional to the number of frames rather than to the size of the virtual address space, so it is much smaller than a multi-level table for large (e.g. 64-bit) address spaces, but it makes sharing memory between processes harder, since each frame can only be described by one entry.

60. What are temporal and spatial locality?

Temporal locality is the phenomenon that if a piece of data has been used it is likely in the future it will be used again. Spatial locality is the phenomenon that if a piece of data has been used it is likely its neighbours, data within its spatial region, will also be used.

61. What is the working set of a process?

The working set of a process is the set of pages (code and data) the process is currently using, i.e. the pages it has referenced over a recent window of time. A well-known rule of thumb is that a process spends 90% of its time in 10% of its code. It is therefore unnecessary to keep the entire application in memory; keeping the working set resident is enough for the process to run with few page faults.

62. How does page size of a particular architecture affect working set size?

Memory is allocated and replaced in whole pages, so the working set is measured in pages. With larger pages, fewer pages may be needed to cover the code and data actually in use, but the working set in bytes tends to grow, because part of each large page is brought in without being needed. With smaller pages the working set matches what is actually used more precisely, but consists of more pages (and needs more page table and TLB entries to cover it).


63. What is thrashing? How might it be detected? How might one recover from it once detected?

Thrashing is a situation where the degree of multiprogramming is too high for the number of frames available: the combined working sets of the running processes do not fit in memory. This results in a very high page fault rate and therefore constant swapping to and from disk; the system spends more time paging than doing useful work.

A sustained high page fault rate (together with heavy paging I/O and low CPU utilisation) is a good indicator of thrashing. To recover, some processes need to be suspended (swapped out) until others complete, reducing the degree of multiprogramming so that the remaining working sets fit in memory.

64. Enumerate some pros and cons for increasing the page size.

Pros

  1. Better exploitation of spatial locality - each fault brings in more nearby data, and each TLB entry covers more memory
  2. Smaller page tables and less associated bookkeeping

Cons

  1. More internal fragmentation, and more unneeded data transferred in and out on each page fault
  2. Individual page faults are more expensive (more data to read or write)
  3. Larger resident footprint per process, reducing the degree of multiprogramming possible


65. Describe two virtual memory page fetch policies. Which is less common in practice? Why?

Demand paging fetches a page only when it is actually referenced (i.e. on a page fault). Pre-paging fetches pages before they are referenced, in the hope that they will be needed, typically trying to use otherwise idle I/O bandwidth. Pre-paging is the less common in practice because it is hard to get right: pages fetched speculatively may never be used and may even evict pages of the working set, and the cost of predicting accurately usually outweighs any benefit.


66. What operating system event might we observe and use as input to an algorithm that decides how many frames an application receives (i.e. an algorithm that determines the application's resident set size)?

The application's page fault rate (page fault frequency). If a process is faulting too often, its resident set is too small and it should be given more frames; if it is faulting very rarely, frames can be taken away from it and given to processes that need them more.

67. Name and describe four page replacement algorithms. Critically compare them with each other.

FIFO: the page that has been resident longest is the first to be replaced. Simple and fair, but it takes no account of use: a page of the current working set can be evicted simply because it was loaded long ago, only to fault straight back in.

Least recently used (LRU): the page that has been unused for the longest time is replaced, on the theory that a page not used recently is unlikely to be used soon. Performs well, but an exact implementation requires recording a timestamp on every reference, an overhead that can outweigh the benefit.

Optimal: replace the page that will not be used for the longest time in the future. Impossible to implement, since it requires knowledge of the future; used as a benchmark against which the others are compared.

Clock (second chance): pages are kept in a circular list with a reference bit; the clock hand sweeps around, clearing the reference bit on pages that have been used and evicting the first page found whose bit is already clear. It approximates LRU with far less overhead: better than FIFO, cheaper than LRU, but still sub-optimal. A sketch is below.
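
A minimal sketch of the clock scheme's victim selection; the frame table layout, sizes and names are illustrative.

#include <stdbool.h>

#define NFRAMES 64

/* One entry per physical frame: which page it holds and whether the
   hardware has referenced it since the hand last swept past. */
static struct {
    int  page;
    bool referenced;
} frame[NFRAMES];

static int hand;                  /* the clock hand sweeps the frames */

/* Pick a victim frame: clear the reference bit on recently used frames
   (giving them a second chance) and evict the first frame found whose
   reference bit is already clear. */
int choose_victim(void)
{
    for (;;) {
        if (frame[hand].referenced) {
            frame[hand].referenced = false;
            hand = (hand + 1) % NFRAMES;
        } else {
            int victim = hand;
            hand = (hand + 1) % NFRAMES;
            return victim;
        }
    }
}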


68. Describe the I/O subsystem of an operating system. Give reasons why it is required, and give a case where it is an advantage, and a case where it is a disadvantage.

69. Device controllers are generally becoming more complex in the functionality they provide (e.g. think about the difference between implementing a serial port with a flip-flop controlled by the CPU and a multi-gigabit network adapter with the TCP/IP stack on the card itself). What effect might this have on the operating system and system performance?

Offloading work to smarter device controllers generally improves system performance, as tasks that previously occupied the CPU and the operating system (e.g. protocol processing, checksumming) are now handled by the device itself. It can, however, increase operating system complexity: the OS must support and configure these more complex devices, and the code that used to do the work in software has to be removed or bypassed without applications noticing the difference.

70. Compare I/O based on polling with interrupt driven I/O. In what situation would you favour one technique over the other?

With polling, the CPU repeatedly checks whether the I/O device is ready, which costs CPU cycles. With interrupt-driven I/O the CPU can do other work while waiting; when the device finishes, it interrupts the CPU, which then handles the completion. Interrupt-driven I/O is preferable on systems running many tasks with a number of different I/O devices, where having the CPU spin on a single device would waste time that other tasks and devices could use. Polling is cheaper per event, needs no interrupt hardware, and is a good choice when the CPU would otherwise only sit idle waiting anyway. A classic example is an embedded controller such as a washing machine: the CPU has nothing better to do than wait for its I/O, and it is certainly not running batch jobs in the background.

71. Explain how the producer-consumer problem is relevant to operating system I/O.

The interaction between the operating system and an I/O device is essentially a producer-consumer relationship in both directions: the device produces data (and completion events) that the operating system consumes, and the operating system produces requests (and output data) that the device consumes, with buffers and queues in between to decouple their different rates.

72. What is a disk? What problem is it trying to solve?

A disk is a mass data storage device. It provides the system with large amounts of non-volatile storage, far beyond the limited capacity of registers and main memory.

73. What is cylinder skew? What problem is it trying to solve?

Cylinder skew is the technique of offsetting the start of each track relative to the previous one when laying data out on the disk. Because it takes a well-defined amount of time for the head to seek from one track to the next, skewing the start of the next track by that amount ensures that, during a sequential read, the start of the next track arrives under the head just after the seek completes rather than having just passed it, which would otherwise force a wait of almost a full rotation.

74. Name four disk-arm scheduling algorithms. Outline the basic algorithm for each.

First-come first-served (FIFO): requests are serviced in the order they arrive. There is no reordering, so with many processes it degenerates into effectively random head movement.

Shortest seek time first (SSTF): service the pending request closest to the current head position. Gives short seeks on average, but requests far from where the head is currently working can be starved.

Elevator (SCAN): the head sweeps from one end of the disk to the other servicing requests in order, then reverses and sweeps back. Keeps accesses close together and prevents starvation, but makes poor use of sequentially laid-out files on the return sweep.

Circular SCAN (C-SCAN): the same as the elevator algorithm, but requests are only serviced while the arm moves in one direction; when it reaches the end it returns to the start and sweeps again. This gives more uniform waiting times and works better with sequentially allocated files.

90. Why is it generally correct to favour I/O bound processes over CPU-bound processes?

I/O-bound processes generally require little CPU time: they run briefly to issue the next I/O request and then block waiting for it to complete. By favouring I/O-bound processes, those requests are issued as early as possible, and other processes can use the CPU while the I/O-bound process is happily waiting. This increases the overlap between computation and I/O: CPU-bound work proceeds while I/O-bound work waits on its devices.

91. What is the difference between preemptive scheduling and non-preemptive scheduling? What is the issue with the latter?

Preemptive scheduling can forcibly take the CPU away from the running process (e.g. on a timer interrupt), whereas non-preemptive scheduling relies on the running process voluntarily relinquishing control back to the scheduler. The issue with the latter is that a process that never gives the CPU back effectively hogs it, which can reduce a multitasking machine to a single-process machine and destroy the system's response time.

93. Describe round robin scheduling. What is the parameter associated with the scheduler? What is the issue in choosing the parameter?

Round-robin scheduling gives each ready process the CPU for a fixed time slice; when the slice expires, a timer interrupt fires and the process is placed at the back of the ready queue, and the process at the front is run next. The parameter is the length of the time slice (the quantum). Choosing it is a trade-off: if it is too short, a large proportion of time is wasted on context-switch overhead; if it is too long, the system becomes unresponsive, degenerating towards first-come first-served.

94. The traditional UNIX scheduler is a priority-based round robin scheduler (also called a multi-level round robin scheduler). How does the scheduler go about favouring I/O bound jobs over long-running CPU-bound jobs?

The scheduler favours I/O-bound jobs by giving them higher priority: a process that blocks for I/O before using up its time slice has its priority boosted, so when its I/O completes it is dispatched quickly and can issue its next request, keeping the I/O happening in parallel with other work. To ensure CPU-bound processes are not starved, priorities are recomputed periodically: the more CPU a process has consumed recently, the lower its priority drops, while processes that have been waiting have their priority gradually raised so they eventually get to run.

97. In a real-time system with a periodic task set, how are priorities assigned to each of the periodic tasks?

With rate-monotonic scheduling the priorities are assigned statically, before the task set runs, according to each task's period: the shorter the period (the higher the rate), the higher the priority.

98. What is an EDF scheduler? What is its advantage over a rate monotonic scheduler?

EDF = earliest deadline first: at each scheduling decision, the task whose deadline is nearest is run first, so priorities are effectively assigned dynamically. A rate monotonic scheduler, by contrast, assigns static priorities by period (shortest period = highest priority). The advantage of EDF is that it can meet all deadlines even at very high CPU utilisation (up to 100%), whereas rate monotonic scheduling can only guarantee deadlines up to roughly 70% utilisation.

99. Describe why the use of spinlocks might be more appropriate than blocking locks on a multiprocessor when compared to a uniprocessor. Is there a trade-off between spinning and blocking on a multiprocessor? Discuss.

Spinning on a lock can be appropriate on a multiprocessor in a way it never is on a uniprocessor: on a uniprocessor the only way the lock can be released is to run the lock holder, so the waiter should always block. On a multiprocessor the lock holder may be running on another CPU and about to release the lock, in which case spinning briefly is cheaper than blocking: blocking means switching to another thread and later switching back, by which time the lock may even have been taken again. The trade-off is the expected hold time of the lock versus the cost of a context switch: if the lock will be held only briefly, spin; if it will be held for a long time, blocking wastes fewer CPU cycles, so a common compromise is to spin for about the cost of a context switch and then block.

100. Is a single ready queue on a multiprocessor a good idea? Why?

A single ready queue is a simple, fast and automatically load-balanced solution on a multiprocessor with few processors: it always dispatches the best ready process. With a non-trivial number of processors, however, the lock on the shared ready queue becomes a major bottleneck. Also, not all processors are equal from a thread's point of view: a thread runs better on a processor it ran on recently, since that CPU's cache is more likely to still hold valid entries for it, and a single global queue ignores this affinity.

101. What is thread affinity? Why might it improve performance?

Thread affinity means preferring to run a thread on the processor it last ran on. It improves performance because that processor's caches (and TLB) are likely to still hold the thread's data, avoiding the cost of re-fetching its working set. It is usually implemented with two-level scheduling: each CPU has its own ready queue and scheduler and keeps running its own threads, and only when a CPU goes idle (or load becomes very unbalanced) does it pull work from a global queue or from another CPU. This also stops the single global ready-queue lock being a bottleneck, while load balancing ensures no CPU sits idle.
