CGS 3763 Operating Systems Concepts Spring 2013
Presentation Transcript

  1. CGS 3763 Operating Systems Concepts Spring 2013 Dan C. Marinescu Office: HEC 304 Office hours: M, Wd 11:30 AM - 12:30 PM

  2. Last time: Atomicity; coordination with a bounded buffer. Today: Storage models; types of storage; transactions. Next time: Memory management. Reading assignment: Chapters 8 and 9 of the textbook. Lecture 32 – Wednesday, April 3, 2013

  3. Asynchronous events and signals • Signals, or software interrupts, were originally introduced in Unix to notify a process about the occurrence of a particular event in the system. • Signals are analogous to hardware I/O interrupts: • When a signal arrives, control abruptly switches to the signal handler. • When the handler finishes and returns, control goes back to where it came from. • After receiving a signal, the receiver reacts to it in a well-defined manner; a process can tell the system (OS) what it wants to do when a signal arrives: • Ignore it. • Catch and handle it. In this case, the process must specify (register) the signal-handling procedure. This procedure resides in user space. The kernel calls this procedure during signal handling, and control returns to the kernel after it is done. • Kill the process (the default for most signals). • Examples: event - child exit, signal - sent to the parent; control signals from the keyboard.
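The three dispositions on the slide (ignore, catch, default) can be sketched with Python's standard signal module; the handler and the choice of SIGUSR1/SIGUSR2 below are illustrative, not part of the slides, and this runs only on Unix-like systems.

```python
import os
import signal

# The three dispositions a process can choose for a signal.
caught = []

def handler(signum, frame):
    # Runs in user space when the signal is delivered;
    # control returns to the interrupted code afterwards.
    caught.append(signum)

signal.signal(signal.SIGUSR1, handler)         # catch it: register a handler
signal.signal(signal.SIGUSR2, signal.SIG_IGN)  # ignore it
signal.signal(signal.SIGTERM, signal.SIG_DFL)  # default action (terminate)

os.kill(os.getpid(), signal.SIGUSR1)  # deliver SIGUSR1 to ourselves
os.kill(os.getpid(), signal.SIGUSR2)  # silently ignored
print(caught == [signal.SIGUSR1])     # prints True: the handler ran once
```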

  4. Thread synchronization • Case studies • Solaris • Windows XP • Linux • Pthreads • On uniprocessor/single-core systems: • the simplest way to achieve mutual exclusion is to disable interrupts during a process' critical section; this prevents the process from being preempted. • a more elegant method for achieving mutual exclusion is the busy-wait. • Mutex – mutual exclusion object
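A mutex in action can be sketched with Python's threading.Lock; the shared counter and thread counts are illustrative. Without the lock, the read-modify-write on the counter could interleave and lose updates.

```python
import threading

counter = 0
mutex = threading.Lock()   # mutual-exclusion object

def increment(n):
    global counter
    for _ in range(n):
        with mutex:        # enter the critical section
            counter += 1   # protected read-modify-write
                           # lock is released on block exit

threads = [threading.Thread(target=increment, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no update was lost
```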

  5. Solaris • Implements a variety of locks to support multitasking, multithreading (including real-time threads), and multiprocessing • Uses adaptive mutexes for efficiency when protecting data from short code segments • Uses condition variables and readers-writers locks when longer sections of code need access to data • Uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a readers-writers lock

  6. Windows XP • Uses interrupt masks to protect access to global resources on uniprocessor systems • Uses spinlocks on multiprocessor systems • Also provides dispatcher objects, which may act as either mutexes or semaphores • Dispatcher objects may also provide events • An event acts much like a condition variable

  7. Linux • Linux: • Prior to kernel version 2.6, disabled interrupts to implement short critical sections • Version 2.6 and later, fully preemptive • Linux provides: • semaphores • spin locks

  8. Pthreads synchronization • User-level threads library • The API is OS-independent • It provides: • mutex locks • condition variables • Non-portable extensions include: • read-write locks • spin locks
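The pthread mutex + condition-variable pattern can be sketched with Python's threading.Condition as an analog (the producer/consumer names are illustrative); the comments map each call to its approximate Pthreads counterpart.

```python
import threading

# Analog of the Pthreads pattern: a consumer waits on a condition
# variable until a producer signals that data is available.
cond = threading.Condition()   # bundles a mutex with a wait queue
items = []

def consumer(out):
    with cond:                 # ~ pthread_mutex_lock
        while not items:       # always re-test the predicate
            cond.wait()        # ~ pthread_cond_wait (releases the mutex)
        out.append(items.pop())
                               # ~ pthread_mutex_unlock on block exit

def producer():
    with cond:
        items.append("data")
        cond.notify()          # ~ pthread_cond_signal

result = []
c = threading.Thread(target=consumer, args=(result,))
c.start()
producer()
c.join()
print(result)  # ['data']
```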

  9. Storage models Cell storage Journal storage

  10. Desirable properties of cell storage


  13. Types of Storage Media • Volatile storage – information stored here does not survive system crashes; e.g., main memory, cache. • Nonvolatile storage – information usually survives crashes; e.g., disk and tape. • Stable storage – information is never lost. • Not actually possible, so approximated via replication or RAID on devices with independent failure modes

  14. Transaction processing system Goal: ensure the ACID properties even when the system crashes.

  15. Log-Based Recovery • Record to stable storage information about all modifications made by a transaction • The most common technique is write-ahead logging • The log is kept on stable storage; each log record describes a single transaction write operation, including • Transaction name • Data item name • Old value • New value • <Ti starts> written to the log when transaction Ti starts • <Ti commits> written when Ti commits • A log entry must reach stable storage before the operation on the data occurs
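The log records listed above can be sketched as plain tuples; the transaction name, item names, values, and helper functions below are all hypothetical, chosen only to show the record layout and the write-ahead ordering.

```python
# Write-ahead log for one transaction T0 updating items A and B.
# Each write record carries: transaction, item, old value, new value.
log = []

def log_start(t):
    log.append((t, "starts"))

def log_write(t, item, old, new):
    log.append((t, "write", item, old, new))

def log_commit(t):
    log.append((t, "commits"))

# The log record must reach stable storage BEFORE the data is modified.
log_start("T0")
log_write("T0", "A", 1000, 950)   # then apply: A = 950
log_write("T0", "B", 2000, 2050)  # then apply: B = 2050
log_commit("T0")

for rec in log:
    print(rec)
```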

  16. Log-based recovery algorithm • Using the log, the system can handle any volatile memory errors • Undo(Ti) restores the value of all data updated by Ti • Redo(Ti) sets the values of all data in transaction Ti to the new values • Undo(Ti) and redo(Ti) must be idempotent • Multiple executions must have the same result as one execution • If the system fails, restore the state of all updated data via the log • If the log contains <Ti starts> without <Ti commits>, undo(Ti) • If the log contains <Ti starts> and <Ti commits>, redo(Ti)
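The undo/redo rule can be sketched as follows; the log contents and on-disk values are hypothetical (T0 committed, T1 crashed mid-flight), and both passes are idempotent: rerunning them yields the same state.

```python
# Recovery from a write-ahead log: redo transactions that committed,
# undo those that started but never committed.
log = [
    ("T0", "starts"), ("T0", "write", "A", 1000, 950), ("T0", "commits"),
    ("T1", "starts"), ("T1", "write", "B", 2000, 2050),  # no commit: crash
]
data = {"A": 1000, "B": 2050}   # state found on disk after the crash

committed = {t for (t, kind, *_) in log if kind == "commits"}
started   = {t for (t, kind, *_) in log if kind == "starts"}

for t, kind, *rest in log:                 # redo committed writes, forward
    if kind == "write" and t in committed:
        item, old, new = rest
        data[item] = new
for t, kind, *rest in reversed(log):       # undo uncommitted writes, backward
    if kind == "write" and t in started - committed:
        item, old, new = rest
        data[item] = old

print(data)  # {'A': 950, 'B': 2000}
```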

  17. Checkpoints • The log could become long, and recovery could take a long time • Checkpoints shorten the log and the recovery time. • Checkpoint scheme: • Output all log records currently in volatile storage to stable storage • Output all modified data from volatile to stable storage • Output a log record <checkpoint> to the log on stable storage • Now recovery only involves transactions Ti that started executing before the most recent checkpoint, and all transactions after Ti; all other transactions are already on stable storage
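How a checkpoint truncates the recovery scan can be sketched as below; the log is hypothetical, and for simplicity no transaction is still active at the checkpoint (an active one would also need undo/redo).

```python
# With checkpoints, recovery scans the log only back to the most
# recent <checkpoint> record: everything before it already reached
# stable storage during the checkpoint.
log = [
    ("T0", "starts"), ("T0", "write", "A", 1000, 950), ("T0", "commits"),
    ("checkpoint",),
    ("T1", "starts"), ("T1", "write", "B", 2000, 2050),
]

# Keep only the suffix of the log after the last checkpoint.
last_cp = max(i for i, rec in enumerate(log) if rec[0] == "checkpoint")
suffix = log[last_cp + 1:]

to_consider = {t for (t, *_) in suffix}
print(to_consider)  # {'T1'}: only T1 needs undo/redo; T0 is safely on disk
```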

  18. Concurrent transactions • Must be equivalent to serial execution – serializability • Could perform all transactions in a critical section • Inefficient, too restrictive • Concurrency-control algorithms provide serializability

  19. Serializability • Consider two data items A and B • Consider transactions T0 and T1 • Execute T0, T1 atomically • An execution sequence is called a schedule • An atomically executed transaction order is called a serial schedule • For N transactions, there are N! valid serial schedules

  20. Schedule 1: T0 then T1

  21. Non-serial schedule • A non-serial schedule allows overlapped execution • The resulting execution is not necessarily incorrect • Consider a schedule S and operations Oi, Oj • They conflict if they access the same data item, with at least one write • If Oi, Oj are consecutive operations of different transactions and Oi and Oj don't conflict • Then S' with the swapped order Oj, Oi is equivalent to S • If S can become S' via swapping nonconflicting operations • S is conflict serializable
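The conflict test above is small enough to write down directly; the operation encoding (transaction, kind, item) is an illustrative choice.

```python
# Two schedule operations conflict iff they access the same data item
# and at least one of them is a write. Non-conflicting consecutive
# operations of different transactions may be swapped.
def conflicts(op1, op2):
    t1, kind1, item1 = op1
    t2, kind2, item2 = op2
    return item1 == item2 and "write" in (kind1, kind2)

print(conflicts(("T0", "read",  "A"), ("T1", "read",  "A")))  # False
print(conflicts(("T0", "read",  "A"), ("T1", "write", "A")))  # True
print(conflicts(("T0", "write", "A"), ("T1", "write", "B")))  # False
```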

  22. Schedule 2: Concurrent serializable schedule

  23. Locking protocol • Ensure serializability by associating a lock with each data item • Follow a locking protocol for access control • Locks • Shared – if Ti has a shared-mode lock (S) on item Q, Ti can read Q but not write Q • Exclusive – if Ti has an exclusive-mode lock (X) on Q, Ti can read and write Q • Require every transaction on item Q to acquire the appropriate lock • If the lock is already held, a new request may have to wait • Similar to the readers-writers algorithm
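The S/X compatibility rule can be sketched with a minimal lock table; the table layout and transaction names are hypothetical, and a real lock manager would queue waiters instead of just refusing.

```python
# Minimal lock table: shared (S) locks are compatible with each other;
# an exclusive (X) lock is compatible with nothing.
locks = {}   # item -> (mode, set of holding transactions)

def try_lock(t, item, mode):
    held = locks.get(item)
    if held is None:
        locks[item] = (mode, {t})
        return True
    held_mode, holders = held
    if mode == "S" and held_mode == "S":
        holders.add(t)      # share the lock with current readers
        return True
    return False            # conflict: the requester must wait

print(try_lock("T0", "Q", "S"))  # True
print(try_lock("T1", "Q", "S"))  # True  (S is compatible with S)
print(try_lock("T2", "Q", "X"))  # False (X conflicts with held S)
```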

  24. Two-phase locking protocol • Generally ensures conflict serializability • Each transaction issues lock and unlock requests in two phases • Growing – obtaining locks • Shrinking – releasing locks • Does not prevent deadlock

  25. Timestamp-based protocols • Select an order among transactions in advance – timestamp ordering • Transaction Ti is associated with timestamp TS(Ti) before Ti starts • TS(Ti) < TS(Tj) if Ti entered the system before Tj • TS can be generated from the system clock or as a logical counter incremented at each entry of a transaction • Timestamps determine the serializability order • If TS(Ti) < TS(Tj), the system must ensure the produced schedule is equivalent to a serial schedule where Ti appears before Tj

  26. Timestamp-based protocol implementation • Data item Q gets two timestamps • W-timestamp(Q) – largest timestamp of any transaction that executed write(Q) successfully • R-timestamp(Q) – largest timestamp of a successful read(Q) • Updated whenever read(Q) or write(Q) is executed • The timestamp-ordering protocol assures that any conflicting read and write are executed in timestamp order • Suppose Ti executes read(Q) • If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already overwritten • the read operation is rejected and Ti is rolled back • If TS(Ti) ≥ W-timestamp(Q) • the read is executed, and R-timestamp(Q) is set to max(R-timestamp(Q), TS(Ti))

  27. Timestamp-ordering protocol • Suppose Ti executes write(Q) • If TS(Ti) < R-timestamp(Q), the value of Q produced by Ti was needed previously and Ti assumed it would never be produced • the write operation is rejected, Ti is rolled back • If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q • the write operation is rejected and Ti is rolled back • Otherwise, the write is executed • Any rolled-back transaction Ti is assigned a new timestamp and restarted • The algorithm ensures conflict serializability and freedom from deadlock
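The read and write rules from the two slides above can be sketched together; the dictionaries, the default timestamp 0, and the item name are illustrative, and a False return stands for "reject and roll back with a new timestamp".

```python
# Timestamp-ordering protocol: per-item R- and W-timestamps plus the
# read/write acceptance rules.
R, W = {}, {}   # item -> largest timestamp of a successful read / write

def read(ts, item):
    if ts < W.get(item, 0):       # value was already overwritten
        return False              # reject: roll back the transaction
    R[item] = max(R.get(item, 0), ts)
    return True

def write(ts, item):
    if ts < R.get(item, 0):       # a later read already needed old Q
        return False              # reject: roll back
    if ts < W.get(item, 0):       # attempting to write an obsolete value
        return False              # reject: roll back
    W[item] = ts
    return True

print(write(1, "Q"))   # True:  W-timestamp(Q) = 1
print(read(2, "Q"))    # True:  R-timestamp(Q) = 2
print(write(1, "Q"))   # False: TS=1 < R-timestamp(Q)=2, roll back
```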

  28. Schedule Under Timestamp Protocol


  30. Signals state and implementation • A signal has the following states: • Signal send - A process can send signal to one of its group member process (parent, sibling, children, and further descendants). • Signal delivered - Signal bit is set. • Pending signal - delivered but not yet received (action has not been taken). • Signal lost - either ignored or overwritten. • Implementation: Each process has a kernel space (created by default) called signal descriptor having bits for each signal. Setting a bit is delivering the signal, and resetting the bit is to indicate that the signal is received. A signal could be blocked/ignored. This requires an additional bit for each signal. Most signals are system controlled signals. • and sets mem=LOCKED Lecture 32