Operating Systems - PowerPoint PPT Presentation
Presentation Transcript

  1. Operating Systems

  2. OS • In MIMD systems, several processors are active simultaneously, and their execution must be coordinated by the operating system (OS). The OS’s functions are similar to those of an OS on SISD machines.

  3. OS • The major functions of a multiprocessor OS are the following: • Keep track of the status of all resources (computing, I/O, memory, switches, etc.) at any time. • Assign tasks to processors according to some optimizing criterion (execution time, processor utilization, etc.). • Spawn new tasks or create new processes such that they can be executed in parallel or independently of each other. • Collect individual results and pass them to other processes if required.

  4. What is an Operating System? (Applications → OS → Hardware) • An operating system (OS) is: • a software layer that abstracts away and manages the details of hardware resources • a set of utilities that simplify application development

  5. The major OS issues • structure: how is the OS organized? • sharing: how are resources shared across users? • naming: how are resources named (by users or programs)? • security: how is the integrity of the OS and its resources ensured? • protection: how is one user/program protected from another? • performance: how do we make it all go fast?

  6. More OS issues… • concurrency: how are parallel activities (computation and I/O) created and controlled? • scale: what happens as demands or resources increase? • persistence: how do you make data last longer than program executions? • reliability: what happens if something goes wrong (either with hardware or with a program)? • extensibility: can we add new features? • flexibility: are we in the way of new apps? • communication: how do programs exchange information?

  7. Uniprocessor Operating Systems • Separating applications from operating system code through a microkernel.

  8. Multiprocessor Operating Systems • The extension from uniprocessor OS to multiprocessor OS is simple: all data structures needed by the OS for resource management are placed in the shared memory. The main difference is that these data are now accessible by multiple CPUs. • How to make multiple CPUs transparent to applications? • Two important synchronization primitives: • Semaphore & Monitor

  9. Synchronization mechanisms • The semaphore concept was developed by Dijkstra in 1968. • A semaphore can be thought of as an integer with two operations, up and down. • The down operation checks whether the value of the semaphore is greater than 0; if so, it decrements the value and continues; if not, the calling process blocks.

  10. Synchronization mechanisms • The up operation does the opposite: • it checks whether there are any blocked processes; if so, it unblocks one and continues; otherwise it simply increments the semaphore. • Semaphore operations are atomic.
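The down/up behavior described on the last two slides can be sketched with a mutex and a condition variable. This is a minimal teaching sketch, not a production implementation (Python's standard library already provides `threading.Semaphore`):

```python
import threading

class Semaphore:
    """Sketch of a counting semaphore with Dijkstra's down/up operations.

    The condition variable's internal lock makes each operation atomic,
    as the slides require.
    """
    def __init__(self, value=1):
        self._value = value
        self._cond = threading.Condition()

    def down(self):
        # If the value is greater than 0, decrement and continue;
        # otherwise the calling thread blocks until an up() occurs.
        with self._cond:
            while self._value == 0:
                self._cond.wait()
            self._value -= 1

    def up(self):
        # Increment the value and wake one blocked thread, if any.
        with self._cond:
            self._value += 1
            self._cond.notify()
```

A thread that calls `down()` on a zero-valued semaphore sleeps until another thread calls `up()`, which is exactly the blocking behavior the slides describe.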

  11. Synchronization mechanisms • Semaphores are prone to errors. The same sort of problems that occur with the goto statement can happen with semaphores. • Abundant use of semaphores leads to unstructured code. • An alternative is the monitor.

  12. Synchronization mechanisms • A monitor is a module consisting of variables and methods. • The only way to access a variable is via a monitor method. • Monitors are atomic: they do not allow simultaneous access by two or more processes to a monitor method.
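The monitor discipline above can be sketched as a class whose state is private and whose every method runs under one internal lock, so at most one thread is ever "inside" the monitor. The class name and counter example are hypothetical illustrations:

```python
import threading

class SharedCounter:
    """Monitor sketch: private state, every method guarded by one lock.

    The only way to touch _value is through a monitor method, and the
    lock ensures no two threads execute monitor methods simultaneously.
    """
    def __init__(self):
        self._lock = threading.Lock()
        self._value = 0

    def increment(self):
        with self._lock:          # enter the monitor
            self._value += 1
            return self._value    # leaving the with-block exits the monitor

    def get(self):
        with self._lock:
            return self._value
```

Because callers never manipulate the lock themselves, the structured-code problem the slides attribute to raw semaphores does not arise: mutual exclusion is packaged inside the module.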

  13. Multicomputer Operating Systems • A multicomputer OS has a totally different structure and complexity from a multiprocessor OS because of the lack of physically shared memory for storing the data structures used for system-wide resource management. • General structure of a multicomputer operating system

  14. Distributed Shared Memory Systems • Still need messages or mechanisms to get data to processor, but these are hidden from the programmer:

  15. Multicomputer Operating Systems • Message-passing primitives: alternatives for blocking and buffering in message passing.

  16. Distributed Shared Memory Systems • As we know in distributed memory systems (DMS) there is no single shared memory. • From our review of message passing we know that programming DMS is more difficult than programming shared memory systems (SMS).

  17. Distributed Shared Memory Systems • DMS are attractive because they are easy to build. • However, SMS are more attractive to programmers.

  18. Distributed Shared Memory Systems • Distributed shared memory (DSM) systems try to address this issue. • DSM systems are designed to provide programmers with a global address space across a DMS.

  19. Distributed Shared Memory Systems

  20. Distributed Shared Memory Systems • DSM systems provide the best features of a distributed memory system and shared memory system.

  21. Distributed Shared Memory System Advantages • The system is scalable • Hides the message passing - the programmer does not explicitly specify the sending of messages between processes • Can use simple extensions to sequential programming • Can handle complex and large databases without replication or sending the data to processes • So what’s the catch?

  22. Distributed Shared Memory System Disadvantages • May incur a performance penalty • Must provide for protection against simultaneous access to shared data (locks, etc.) • Little programmer control over the actual messages being generated • Good performance on irregular problems in particular may be difficult to achieve

  23. Distributed Shared Memory Systems • Achieving good performance is possible for a restricted class of applications, but it is a major difficulty when dealing with a sizeable class of applications

  24. Methods of Achieving DSM • Hardware • Special network interfaces and cache coherence circuits • Software • Modifying the OS kernel • Adding a software layer between the operating system and the application

  25. Software DSM Implementation • Page based - using the system’s virtual memory • Shared variable approach - using routines to access shared variables • Object based - shared data contained within a collection of objects; access to shared data through an object-oriented discipline (ideally)

  26. Distributed Shared Memory Systems • A conventional DSM system implements the shared memory abstraction through a paging system.

  27. Software Page Based DSM Implementation

  28. Software Page Based DSM Implementation • The primary source of overhead in a conventional DSM is the large amount of communication required to keep memory consistent.

  29. Distributed Shared Memory Systems • Pages of address space distributed among four machines • Situation after CPU 1 references page 10 • Situation if page 10 is read only and replication is used

  30. Distributed Shared Memory Systems • Replication is used for read operations, but when a write occurs all replicated copies must be invalidated. • This operation is called write invalidate. • Write-invalidate: the processor that is writing data causes the copies in the caches of all other processors in the system to be rendered invalid before it changes its local copy.

  31. Distributed Shared Memory Systems • Another protocol that could be used is the update protocol. • Update protocol: update all other copies immediately upon any single update. • A broadcast method can be used to accomplish this. • Invalidate is preferred since messages are generated only when processors try to access updated copies.
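The write-invalidate protocol from the last two slides can be sketched with per-node page caches over a backing store. The class and method names here are hypothetical illustrations, not any real DSM system's API:

```python
class WriteInvalidateDSM:
    """Toy sketch of the write-invalidate protocol.

    Each node may hold a replicated read-only copy of a page. Before a
    node changes its local copy, every other node's replica of that
    page is invalidated, as the slides describe.
    """
    def __init__(self, n_nodes):
        # One page cache (page -> value) per node.
        self.copies = [dict() for _ in range(n_nodes)]

    def read(self, node, page, backing):
        # On a miss, replicate the page into the reader's cache.
        if page not in self.copies[node]:
            self.copies[node][page] = backing[page]
        return self.copies[node][page]

    def write(self, node, page, value, backing):
        # Invalidate every other replica before writing locally.
        for i, cache in enumerate(self.copies):
            if i != node and page in cache:
                del cache[page]
        backing[page] = value
        self.copies[node][page] = value
```

After a write, other nodes take a miss on their next read and fetch the new value, which is why invalidation generates messages only when processors actually access updated copies.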

  32. Distributed Shared Memory Systems • Ideally we want the DSM system to communicate no more than the same application would when executing using MPI. • The restrictive nature of the consistency model and inflexible consistency protocols make this difficult to achieve.

  33. Consistency Models • Strict Consistency - A processor sees the most recent update, i.e., a read returns the most recent write to the location. • Sequential Consistency - The result of any execution is the same as some interleaving of the individual programs. • Relaxed Consistency - Delay making a write visible to reduce messages. • Weak Consistency - The programmer must use synchronization operations to enforce sequential consistency when necessary. • Release Consistency - The programmer must use specific synchronization operators, acquire and release. • Lazy Release Consistency - Updates are only done at the time of an acquire.

  34. Strict Consistency • Defined by the condition • Any read on a data item x returns a value corresponding to the result of the most recent write on x. • In a distributed system what is the implication of this statement? • How is most recent determined? • We need a global clock to begin with that yields absolute global time. • It also assumes instantaneous communication throughout the distributed system.

  35. Strict Consistency • Every write immediately visible

  36. Sequential Consistency • Sequential consistency is slightly weaker than strict consistency. • The following condition needs to be satisfied for sequential consistency. • The result of any execution is the same as if the (read and write) operations by all processes on the data store executed in some sequential order and the operations of each individual process appeared in this sequence in the order specified by its program.

  37. Sequential Consistency • The result of any execution is the same as if the (read and write) operations by all processes on the data store executed in some sequential order and the operations of each individual process appeared in this sequence in the order specified by its program. • What does this mean? • It means that when processes run concurrently on a distributed system, any valid interleaving is acceptable behavior, as long as all processes see the same interleaving. • Note the absence of the global time that was seen as necessary in strict consistency.

  38. Sequential Consistency • A sequentially consistent data store. • A data store that is not sequentially consistent. Why is (a) okay?

  39. Sequential Consistency • A sequentially consistent data store. • A data store that is not sequentially consistent. Why is (a) okay? Because even though the write operation setting x to b appears to have taken place before the write operation setting x to a, both P3 and P4 see the operations in the same order. What about (b)?
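The (a)/(b) distinction can be checked mechanically: an execution is sequentially consistent if some single total order of the writes explains every reader's observations. The following brute-force checker is a hypothetical teaching sketch for the two-writer, two-reader slide example:

```python
from itertools import permutations

def consistent_with(order, reads):
    """A reader's view matches a write order if each successive read
    sees the same write or a later one in that order."""
    idx = {v: i for i, v in enumerate(order)}
    positions = [idx[v] for v in reads]
    return positions == sorted(positions)

def sequentially_consistent(writes, readers):
    """True if one total order of the writes explains all readers."""
    return any(all(consistent_with(order, r) for r in readers)
               for order in permutations(writes))

# Case (a): P3 and P4 both read x as b then a -- the order (b, a)
# explains both views, so the execution is sequentially consistent.
assert sequentially_consistent(["a", "b"], [["b", "a"], ["b", "a"]])

# Case (b): P3 sees b,a but P4 sees a,b -- no single write order
# explains both, so the execution is not sequentially consistent.
assert not sequentially_consistent(["a", "b"], [["b", "a"], ["a", "b"]])
```

This mirrors the slide's answer: (a) is okay because all processes see the writes in the same order, while (b) fails because the readers disagree.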

  40. Weak Consistency • Weak consistency model utilizes synchronization variables to synchronize local copies of a data store. • When a data store is synchronized, all local writes by a process P are propagated to the other copies, whereas writes by other processes are brought into P’s copy.

  41. Weak Consistency • Properties: • Accesses to synchronization variables associated with a data store are sequentially consistent • No operation on a synchronization variable is allowed to be performed until all previous writes have been completed everywhere • No read or write operation on data items is allowed to be performed until all previous operations on synchronization variables have been performed.

  42. Weak Consistency • Accesses to synchronization variables associated with a data store are sequentially consistent • It does not matter whether process P1 calls synchronize(S1) at the same time as process P2 calls synchronize(S2); whether S1 occurs before S2 or S2 occurs before S1, the two must be seen in the same order by all other processes.

  43. Weak Consistency • No operation on a synchronization variable is allowed to be performed until all previous writes have been completed everywhere. • All writes must complete everywhere; this flushes the pipeline. A process doing a synchronization after updating shared data forces all new values out to the other local copies. Another synchronization on the same synchronization variable cannot take place until the writes from the last synchronization on that variable are complete.

  44. Weak Consistency • No read or write operation on data items is allowed to be performed until all previous operations on synchronization variables have been performed. • Synchronizations must complete before reads and writes can go ahead: a process can’t read or write items until the synchronization is complete.

  45. Weak Consistency • A valid sequence of events for weak consistency. • An invalid sequence for weak consistency. Should be R(x)b

  46. Release Consistency • Weak consistency does not distinguish between reads and writes and therefore synchronizes on both. • An extension of weak consistency in which the synchronization operations have been specialized: • acquire operation - used before a shared variable or variables are to be read. • release operation - used after the shared variable or variables have been altered (written); it allows another process to access the variable(s). • Typically an acquire is done with a lock operation and a release with an unlock operation (although not necessarily).

  47. Release Consistency • Rules: • Before a read or write operation on shared data is performed, all previous acquires done by the process must have completed successfully. • Before a release is allowed to be performed, all previous reads and writes by the process must have completed • Accesses to synchronization variables are FIFO consistent (sequential consistency is not required).

  48. Release Consistency Updates are done at the release point
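The acquire/release discipline and publish-at-release behavior from slides 46-48 can be sketched as follows. This is a hypothetical single-machine simulation: the `published` field stands in for the copies other processes see, and propagation of updates is only mimicked by copying at the release point:

```python
import threading

class ReleaseConsistentVar:
    """Sketch of release consistency on one shared variable.

    Reads and writes are only legal between acquire() and release();
    acquire maps to a lock and release to an unlock, as the slides
    suggest. Local writes become visible to others only at release.
    """
    def __init__(self, value=0):
        self._lock = threading.Lock()
        self._local = value       # the holder's working copy
        self.published = value    # what other processes observe

    def acquire(self):
        self._lock.acquire()
        self._local = self.published   # bring in the latest updates

    def write(self, value):
        self._local = value            # visible only locally for now

    def release(self):
        self.published = self._local   # propagate updates at release
        self._lock.release()
```

Between `write()` and `release()` the new value is invisible to others, which is exactly the message-saving point of release consistency; lazy release consistency would instead perform the copy inside `acquire()`.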

  49. Lazy Release Consistency Same as release consistency except updates are done at the acquire.

  50. Consistency Models • One of the problems that needs to be addressed as a side-effect of maintaining consistency in distributed shared memory systems is false sharing. • False sharing is a problem for page-based DSM systems: two processes accessing unrelated data items that happen to lie on the same page cause that page to ping-pong between them as if the data were truly shared.