
Processes, Threads, Synchronization




  1. Processes, Threads, Synchronization
  CS 519: Operating System Theory
  Computer Science, Rutgers University, Fall 2011

  2. Process
  • Process = system abstraction for the set of resources required for executing a program
  • = a running instance of a program
  • = memory image + registers (+ I/O state)
  • The stack + registers form the execution context

  3. Process Image
  • Each variable must be assigned a storage class
  • Global (static) variables
  • Allocated in the global region at compile time
  • Local variables and parameters
  • Allocated dynamically on the stack
  • Dynamically created objects
  • Allocated from the heap
  [Figure: process memory layout — code, globals, heap, and stack regions]

  4. What About The OS Image?
  • Recall that one of the functions of an OS is to provide a virtual machine interface that makes programming the machine easier
  • So, a process memory image must also contain the OS
  • OS data space is used to store things like file descriptors for files being accessed by the process, status of I/O devices, etc.
  [Figure: process address space with OS code, globals, stack, and heap mapped alongside the user code, globals, stack, and heap]

  5. What Happens When There Is More Than One Running Process?
  [Figure: the OS image plus the address spaces of processes P0, P1, and P2]

  6. Process Control Block
  • Each process has per-process state maintained by the OS
  • Identification: process, parent process, user, group, etc.
  • Execution contexts: threads
  • Address space: virtual memory
  • I/O state: file handles (file system), communication endpoints (network), etc.
  • Accounting information
  • For each process, this state is maintained in a process control block (PCB)
  • This is just data in the OS data space

  7. Process Creation
  • How to create a process? System call.
  • In UNIX, a process can create another process using the fork() system call
  • int pid = fork(); /* this is in C */
  • The creating process is called the parent and the new process is called the child
  • The child process is created as a copy of the parent process (process image and process control structure) except for the identification and scheduling state
  • Parent and child processes run in two different address spaces
  • By default, there is no memory sharing
  • Process creation is expensive because of this copying
  • The exec() call is provided for the newly created process to run a different program than that of the parent
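The fork() semantics above can be sketched in C. `fork_and_wait` is an illustrative helper (not from the slides): each process gets its own copy of `pid`, which is how the two sides tell themselves apart.

```c
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that exits with the given code; the parent waits and
   returns the child's exit status. fork() returns 0 in the child and
   the child's PID in the parent — two processes, two address spaces. */
int fork_and_wait(int child_exit_code) {
    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return -1;
    }
    if (pid == 0) {
        /* Child: runs in its own copy of the parent's memory image. */
        _exit(child_exit_code);
    }
    /* Parent: fork() returned the child's PID; reap the child. */
    int status;
    waitpid(pid, &status, 0);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```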

  8. System Call In A Monolithic OS
  [Figure: id = fork() in user mode traps into the kernel through the interrupt vector for the trap instruction (saving PSW and PC), runs the in-kernel code for the fork system call, and returns to user mode via iret]

  9. Process Creation
  [Figure: fork() duplicates the process image and PCB; exec() replaces the child's program]

  10. Example of Process Creation Using Fork
  • The UNIX shell is a command-line interpreter whose basic purpose is to let the user run applications on a UNIX system
  • cmd arg1 arg2 ... argn
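The core of the shell is fork + exec + wait. The sketch below (with a hypothetical helper `run_command`, assuming the argument vector is already parsed) runs one `cmd arg1 ... argn` the way a shell would:

```c
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Run one command the way a shell does: fork a child, exec the program
   in the child, and wait for it in the parent. Returns the command's
   exit status, or -1 on error. */
int run_command(char *const argv[]) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: replace this process image with the requested program. */
        execv(argv[0], argv);
        _exit(127);  /* Only reached if execv() fails. */
    }
    /* Parent (the shell): block until the command finishes. */
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A real shell would loop, reading a line, splitting it into an argv, and calling something like this per command.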

  11. Process Termination
  • One process can wait for another process to finish using the wait() system call
  • Can wait for any child to finish as shown in the example
  • Can also wait for a specific child if it knows its PID (waitpid())
  • Can kill another process using the kill() system call
  • What exactly happens when kill() is invoked?
  • What if the victim process does not want to die?
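The kill()/wait() interplay can be sketched as follows; `kill_child_demo` is an illustrative name, and SIGTERM stands in for an arbitrary signal. The child here "does not want to die" only in the sense that it blocks forever; since it installs no handler, the default action terminates it.

```c
#include <signal.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Fork a child that blocks, send it SIGTERM with kill(), and reap it
   with waitpid(). Returns the signal number that terminated the child,
   or -1 if it exited normally or an error occurred. */
int kill_child_demo(void) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;
    if (pid == 0) {
        /* Child: block until a signal arrives. */
        for (;;)
            pause();
    }
    kill(pid, SIGTERM);          /* Parent: signal the child. */
    int status;
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFSIGNALED(status) ? WTERMSIG(status) : -1;
}
```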

  12. Process Swapping
  • May want to swap out an entire process
  • Thrashing if too many processes are competing for resources
  • To swap out a process
  • Suspend its execution
  • Copy all of its information to backing store (except for the PCB)
  • To swap a process back in
  • Copy the needed information back into memory, e.g. page table, thread control blocks
  • Restore its state to blocked or ready
  • Must check whether event(s) has (have) already occurred

  13. Process State Diagram
  [Figure: a ready (in memory) process can be swapped out to the suspended (swapped out) state and later swapped back in]

  14. Signals
  • OS may need to “upcall” into user processes
  • Signals
  • UNIX mechanism to upcall when an event of interest occurs
  • Potentially interesting events are predefined: e.g., segmentation violation, message arrival, kill, etc.
  • When interested in “handling” a particular event (signal), a process indicates its interest to the OS and gives the OS a procedure that should be invoked in the upcall

  15. Signals (Cont’d)
  • When an event of interest occurs, the kernel handles the event first, then modifies the process’s stack to look as if the process’s code made a procedure call to the signal handler
  • When the user process is scheduled next, it executes the handler first
  • From the handler, the user process returns to where it was when the event occurred
  [Figure: the signal handler’s frame is pushed on the user stack above the interrupted frames, so the handler “returns” into the interrupted code]

  16. Inter-Process Communication
  • Most operating systems provide several abstractions for inter-process communication: message passing, shared memory, etc.
  • Communication requires synchronization between processes (i.e. data must be produced before it is consumed)
  • Synchronization can be implicit (message passing) or explicit (shared memory)
  • Explicit synchronization can be provided by the OS (semaphores, monitors, etc.) or can be achieved exclusively in user mode (if processes share memory)

  17. Message Passing Implementation
  • Two copy operations in a conventional implementation: send(process2, &X) copies X from process 1 into a kernel buffer (1st copy), and receive(process1, &Y) copies it from the kernel buffer into process 2’s Y (2nd copy)
  [Figure: X in process 1, kernel buffers, and Y in process 2, with the two copies crossing the kernel]
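The send/receive pair in the slide is abstract; a pipe is a concrete stand-in that shows the same two copies through a kernel buffer. This sketch (hypothetical helper `pipe_message_demo`) keeps both ends in one process for brevity, whereas the slide's sender and receiver are two processes.

```c
#include <unistd.h>

/* Message passing through the kernel: write() copies the message into
   a kernel buffer, read() copies it out again — two copies in a
   conventional implementation. Stores the received value in *y_out. */
int pipe_message_demo(int *y_out) {
    int fds[2];
    if (pipe(fds) < 0)
        return -1;
    int x = 1;                              /* X = 1 */
    (void)write(fds[1], &x, sizeof x);      /* 1st copy: user -> kernel */
    int y = 0;
    (void)read(fds[0], &y, sizeof y);       /* 2nd copy: kernel -> user */
    close(fds[0]);
    close(fds[1]);
    *y_out = y;                             /* print Y would show 1 */
    return 0;
}
```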

  18. Shared Memory Implementation
  • No copying, but synchronization is necessary: the kernel maps the same region of physical memory into both address spaces, so process 1’s X = 1 is directly visible to process 2’s print Y
  [Figure: a shared region of physical memory mapped into both process 1 and process 2]

  19. Inter-Process Communication
  • More on shared memory and message passing later
  • Synchronization after we talk about threads

  20. A Tree of Processes On A Typical UNIX System
  [Figure: a process tree rooted at init]

  21. Process: Summary
  • System abstraction – the set of resources required for executing a program (an instantiation of a program)
  • Execution context
  • Address space
  • File handles, communication endpoints, etc.
  • Historically, all of the above “lumped” into a single abstraction
  • More recently, split into several abstractions
  • Threads, address space, protection domain, etc.
  • OS process management:
  • Supports creation of processes and interprocess communication (IPC)
  • Allocates resources to processes according to specific policies
  • Interleaves the execution of multiple processes to increase system utilization

  22. Threads
  • Thread of execution: stack + registers (including PC)
  • Informally: where an execution stream is currently at in the program, and the method invocation chain that brought the execution stream to the current place
  • Example: A called B, which called C, which called B, which called C
  • The PC should be pointing somewhere inside C at this point
  • The stack should contain 5 activation records: A/B/C/B/C
  • The process model discussed thus far implies a single thread

  23. Multi-Threading
  • Why limit ourselves to a single thread?
  • Think of a web server that must service a large stream of requests
  • If we have only one thread, we can only process one request at a time
  • What to do while reading a file from disk?
  • Multi-threading model
  • Each process can have multiple threads
  • Each thread has a private stack
  • Registers are also private
  • All threads of a process share the code, the global data, and the heap
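The sharing rules above can be seen with POSIX threads (an illustrative sketch; `threaded_sum`, `sum_half`, and the arrays are made-up names): the globals are visible to both threads, while each thread's loop variables live on its own private stack.

```c
#include <pthread.h>

enum { N = 100 };
static int data[N];        /* Shared: globals are common to all threads. */
static long partial[2];    /* Shared: one result slot per thread. */

/* Each thread sums its half of the shared array. */
static void *sum_half(void *arg) {
    long id = (long)arg;   /* Private: lives on this thread's stack. */
    long s = 0;
    for (int i = (int)id * N / 2; i < ((int)id + 1) * N / 2; i++)
        s += data[i];
    partial[id] = s;
    return NULL;
}

/* Spawn two threads over the shared data and combine their results. */
long threaded_sum(void) {
    for (int i = 0; i < N; i++)
        data[i] = i + 1;               /* 1 + 2 + ... + 100 */
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    return partial[0] + partial[1];
}
```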

  24. Process Address Space Revisited
  [Figure: (a) a single-threaded address space with one stack; (b) a multi-threaded address space with one stack per thread, sharing the code, globals, and heap together with the OS image]

  25. Multi-Threading (cont)
  • Implementation
  • Each thread is described by a thread-control block (TCB)
  • A TCB typically contains
  • Thread ID
  • Space for saving registers
  • Pointer to thread-specific data not on stack
  • Observation
  • Although the model is that each thread has a private stack, threads actually share the process address space
  • ⇒ There’s no memory protection!
  • ⇒ Threads could potentially write into each other’s stacks

  26. Thread Creation
  [Figure: thread_create() allocates a new TCB and a new stack; the new thread’s saved PC points at new_thread_starts_here and its SP at the new stack]

  27. Context Switching
  • Suppose a process has multiple threads but a uniprocessor machine has only 1 CPU. What to do?
  • In fact, even if we had only one thread per process, we would have to do something about running multiple processes …
  • We multiplex the multiple threads on the single CPU
  • At any instant in time, only one thread is running
  • At some point in time, the OS may decide to stop the currently running thread and allow another thread to run
  • This switching from one running thread to another is called context switching

  28. Diagram of Thread State
  [Figure: thread state diagram]

  29. Context Switching (cont)
  • How to do a context switch?
  • Save state of currently executing thread
  • Copy all “live” registers to the thread control block
  • Restore state of thread to run next
  • Copy values of live registers from thread control block to registers
  • When does context switching take place?
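The save/restore step can be demonstrated in user space with the (obsolescent but widely available) ucontext API, which is roughly what user-level thread libraries do: swapcontext() saves the live registers of the current context and restores those of another. A minimal sketch, assuming glibc on Linux (`context_switch_demo` and `other_thread` are illustrative names):

```c
#include <ucontext.h>

static ucontext_t main_ctx, thread_ctx;
static int step = 0;

/* Runs on its own stack, then switches back to the main context. */
static void other_thread(void) {
    step = 1;
    swapcontext(&thread_ctx, &main_ctx);  /* Save ours, restore main's. */
}

/* Set up a second context with its own stack, switch to it, and
   observe that we resume after it switches back. */
int context_switch_demo(void) {
    static char stack[64 * 1024];         /* The new context's stack. */
    getcontext(&thread_ctx);
    thread_ctx.uc_stack.ss_sp = stack;
    thread_ctx.uc_stack.ss_size = sizeof stack;
    thread_ctx.uc_link = &main_ctx;
    makecontext(&thread_ctx, other_thread, 0);
    swapcontext(&main_ctx, &thread_ctx);  /* Save main, run other_thread. */
    return step;                          /* 1: the other context ran. */
}
```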

  30. Context Switching (cont)
  • When does context switching occur?
  • When the OS decides that a thread has run long enough and that another thread should be given the CPU
  • Remember how the OS gets control of the CPU back when it is executing user code?
  • When a thread performs an I/O operation and needs to block to wait for the completion of this operation
  • To wait for some other thread
  • Thread synchronization

  31. How Is the Switching Code Invoked?
  • user thread executing → clock interrupt → PC modified by hardware to “vector” to interrupt handler → user thread state is saved for later resume → clock interrupt handler is invoked → disable interrupt checking → check whether current thread has run “long enough” → if yes, post asynchronous software trap (AST) → enable interrupt checking → exit interrupt handler → enter “return-to-user” code → check whether AST was posted → if not, restore user thread state and return to executing user thread; if AST was posted, call context switch code
  • Why need AST?

  32. How Is the Switching Code Invoked? (cont)
  • user thread executing → system call to perform I/O → user thread state is saved for later resume → OS code to perform system call is invoked → I/O operation started (by invoking I/O driver) → set thread status to waiting → move thread’s TCB from run queue to wait queue associated with specific device → call context switching code

  33. Context Switching
  • At entry to CS, the return address is either in a register or on the stack (in the current activation record)
  • CS saves this return address to the TCB instead of the current PC
  • To the thread, it looks like CS just took a while to return!
  • If the context switch was initiated from an interrupt, the thread never knows that it has been context-switched out and back in unless it looks at the “wall” clock

  34. Context Switching (cont)
  • Even that is not quite the whole story
  • When a thread is switched out, what happens to it?
  • How do we find it to switch it back in?
  • This is what the TCB is for. The system typically has
  • A run queue that points to the TCBs of threads ready to run
  • A blocked queue per device to hold the TCBs of threads blocked waiting for an I/O operation on that device to complete
  • When a thread is switched out at a timer interrupt, it is still ready to run, so its TCB stays on the run queue
  • When a thread is switched out because it is blocking on an I/O operation, its TCB is moved to the blocked queue of the device

  35. Ready Queue And Various I/O Device Queues
  [Figure: TCBs linked on the ready queue and on per-device blocked queues]

  36. Switching Between Threads of Different Processes
  • What if switching to a thread of a different process?
  • Caches, TLB, page table, etc.?
  • Caches
  • Physical addresses: no problem
  • Virtual addresses: cache must either have a process tag or must be flushed on context switch
  • TLB
  • Each entry must have a process tag or the TLB must be flushed on context switch
  • Page table
  • Typically have a page table pointer (register) that must be reloaded on context switch

  37. Threads & Signals
  • What happens if the kernel wants to signal a process when all of its threads are blocked?
  • When there are multiple threads, which thread should the kernel deliver the signal to?
  • OS writes into the process control block that a signal should be delivered
  • Next time any thread from this process is allowed to run, the signal is delivered to that thread as part of the context switch
  • What happens if the kernel needs to deliver multiple signals?

  38. Thread Implementation
  • Kernel-level threads (lightweight processes)
  • Kernel sees multiple execution contexts
  • Thread management done by the kernel
  • User-level threads
  • Implemented as a thread library, which contains the code for thread creation, termination, scheduling, and switching
  • Kernel sees one execution context and is unaware of thread activity
  • Can be preemptive or not

  39. User-Level vs. Kernel-Level Threads
  • Advantages of user-level threads
  • Performance: low-cost thread operations (do not require crossing protection domains)
  • Flexibility: scheduling can be application specific
  • Portability: user-level thread library is easy to port
  • Disadvantages of user-level threads
  • If a user-level thread is blocked in the kernel, the entire process (all threads of that process) is blocked
  • Cannot take advantage of multiprocessing (the kernel assigns one process to only one processor)

  40. User-Level vs. Kernel-Level Threads
  [Figure: with user-level threads, thread scheduling happens in user space on top of the kernel’s process scheduling; with kernel-level threads, the kernel schedules threads directly onto the processor]

  41. User-Level vs. Kernel-Level Threads
  • No reason why we should not have both
  • Most systems now support kernel threads
  • User-level threads are available as linkable libraries
  [Figure: user-level thread scheduling layered on top of kernel-level thread and process scheduling]

  42. Kernel Support for User-Level Threads
  • Even kernel threads are not quite the right abstraction for supporting user-level threads
  • Mismatch between where the scheduling information is available (user) and where scheduling on real processors is performed (kernel)
  • When a kernel thread is blocked, the corresponding physical processor is lost to all user-level threads, although there may be some ready to run

  43. Why Kernel Threads Are Not The Right Abstraction
  [Figure: when the kernel thread backing the user-level scheduler blocks, the physical processor is lost to all of that process’s user-level threads]

  44. Scheduler Activations: Kernel Support for User-Level Threads
  • Each process contains a user-level thread system (ULTS) that controls the scheduling of the allocated processors
  • Kernel allocates processors to processes as scheduler activations (SAs). An SA is similar to a kernel thread, but it also transfers control from the kernel to the ULTS on a kernel event, as described below
  • Kernel notifies a process whenever the number of allocated processors changes or when an SA is blocked due to the user-level thread running on it (e.g., for I/O or on a page fault)
  • The process notifies the kernel when it needs more or fewer SAs (processors)
  • Ex.: (1) Kernel notifies the ULTS that a user-level thread blocked by creating an SA and upcalling the process; (2) ULTS removes the state from the old SA, tells the kernel that it can be reused, and decides which user-level thread to run on the new SA

  45. User-Level Threads On Top of Scheduler Activations
  [Figure: user-level threads scheduled by the ULTS onto active scheduler activations; blocked activations are replaced by new ones]
  Source: T. Anderson et al. “Scheduler Activations: Effective Kernel Support for the User-Level Management of Parallelism”. ACM TOCS, 1992.

  46. Threads vs. Processes
  • Why multiple threads?
  • Can’t we use multiple processes to do whatever it is that we do with multiple threads?
  • Of course, we need to be able to share memory (and other resources) between multiple processes …
  • But this sharing is already supported by threads
  • Operations on threads (creation, termination, scheduling, etc.) are cheaper than the corresponding operations on processes
  • This is because thread operations do not involve manipulations of other resources associated with processes (I/O descriptors, address space, etc.)
  • Inter-thread communication is supported through shared memory without kernel intervention
  • Why not? We have multiples of other resources, why not threads?

  47. Thread/Process Operation Latencies
  [Table omitted: operation latencies measured on a VAX uniprocessor running a UNIX-like OS (1992) and on a 2.8-GHz Pentium 4 uniprocessor running Linux (2004)]

  48. Synchronization

  49. Synchronization
  • Problem
  • Threads must share data
  • Data consistency must be maintained

  50. Terminology
  • Critical section: a section of code which reads or writes shared data
  • Race condition: potential for interleaved execution of a critical section by multiple threads
  • Results are non-deterministic
  • Mutual exclusion: synchronization mechanism to avoid race conditions by ensuring exclusive execution of critical sections
  • Deadlock: permanent blocking of threads
  • Starvation: execution but no progress
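These terms can be made concrete with a POSIX mutex (an illustrative sketch; `locked_count` and `incrementer` are made-up names): the increment of `counter` is a critical section, because the read-modify-write of two unsynchronized threads can interleave and lose updates; the lock enforces mutual exclusion.

```c
#include <pthread.h>

enum { ITERS = 100000 };
static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

/* Each thread increments the shared counter ITERS times. */
static void *incrementer(void *arg) {
    (void)arg;
    for (int i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&lock);    /* Enter the critical section. */
        counter++;                    /* Shared data: read, modify, write. */
        pthread_mutex_unlock(&lock);  /* Leave the critical section. */
    }
    return NULL;
}

/* Run two incrementers concurrently; mutual exclusion guarantees
   exactly 2 * ITERS increments, with no lost updates. */
long locked_count(void) {
    counter = 0;
    pthread_t a, b;
    pthread_create(&a, NULL, incrementer, NULL);
    pthread_create(&b, NULL, incrementer, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return counter;
}
```

Removing the lock/unlock pair makes the race condition observable: the final count becomes non-deterministic and typically less than 2 * ITERS.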
