
Processes, Threads, Synchronization



  1. Processes, Threads, Synchronization CS 519: Operating System Theory Computer Science, Rutgers University Instructor: Thu D. Nguyen TA: Xiaoyan Li Spring 2002

  2. Von Neumann Model • Both text (program) and data reside in memory • Execution cycle • Fetch instruction • Decode instruction • Execute instruction [Figure: CPU connected to memory] CS 519: Operating System Theory

  3. Image of Executing Program [Figure: memory holds the code (100: load R1, R2; 104: add R1, 4, R1; 108: load R1, R3; 112: add R2, R3, R3; …) and the data (location 2000: 4; location 2004: 8); the CPU registers hold R1: 2000, R2: –, R3: –, PC: 100] CS 519: Operating System Theory

  4. How Do We Write Programs Now?

public class foo {
    static private int yv = 0;
    static private int nv = 0;
    public static void main() {
        foo foo_obj = new foo();
        foo_obj.cheat();
    }
    public void cheat() {
        int tyv = yv;
        yv = yv + 1;
        if (tyv < 10) { cheat(); }
    }
}

• How to map a program like this to a Von Neumann machine? • Where to keep yv, nv? • What about foo_obj and tyv? • How to do foo_obj.cheat()? CS 519: Operating System Theory

  5. Global Variables • Dealing with “global” variables like yv and nv is easy • Let’s just allocate some space in memory for them • This is done by the compiler at compile time • A reference to yv is then just an access to yv’s location in memory • Suppose yv is stored at location 2000 • Then, yv = yv + 1 might be compiled to something like

loadi 2000, R1    # R1 = address of yv
load  R1, R2      # R2 = memory[R1]
add   R2, 1, R2   # R2 = R2 + 1
store R1, R2      # memory[R1] = R2

CS 519: Operating System Theory

  6. Local Variables • What about foo_obj defined in main() and tyv defined in cheat()? • 1st option you might think of is just to allocate some space in memory for these variables as well (as shown to the right) • What is the problem with this approach? • How can we deal with this problem? [Figure: a naive static layout — yv at 2000, nv at 2004, foo_obj at 2008, and a single slot for tyv] CS 519: Operating System Theory

  7. Local Variables [Figure: yv lives in the globals region; each nested call sequence foo_obj.cheat(); tyv = yv; … pushes another copy of tyv, so the stack holds tyv, tyv’, tyv’’] • Allocate a new memory location to tyv every time cheat() is called at run-time • Convention is to allocate storage in a stack (often called the control stack) • Pop stack when returning from a method: storage is no longer needed • Code for allocating/deallocating space on the stack is generated by the compiler at compile time CS 519: Operating System Theory

  8. What About “new” Objects? • foo foo_obj = new foo(); • foo_obj is really a pointer to a foo object • As just explained, a memory location is allocated for foo_obj from the stack whenever main() is invoked • Where does the object created by the “new foo()” actually live? • Is the stack an appropriate place to keep this object? • Why not? CS 519: Operating System Theory

  9. Memory Image • Suppose we have executed the following: • yv = 0; nv = 0; main(); foo_obj = new foo(); foo_obj.cheat(); tyv = yv; yv = yv + 1; foo_obj.cheat(); tyv = yv; yv = yv + 1; foo_obj.cheat(); tyv = yv; yv = yv + 1 [Figure: resulting memory image — yv and nv in the globals region, foo_obj and tyv, tyv’, tyv’’ in activation records on the stack, and the foo object in the heap] CS 519: Operating System Theory

  10. Data Access • How to find data allocated dynamically on the stack? • By convention, designate one register as the stack pointer • Stack pointer always points at the current activation record • Stack pointer is set at entry to a method • Code for setting the stack pointer is generated by the compiler • Local variables and parameters are referenced as offsets from sp [Figure: the CPU’s SP register points at the activation record for cheat(), which holds tyv; PC points into cheat()’s code] CS 519: Operating System Theory

  11. Data Access • The statement tyv = tyv + 1 would then translate into something like

addi  0, sp, R1   # tyv is the only local variable, so its offset from sp is 0
load  R1, R2      # R2 = memory[R1]
add   R2, 1, R2   # R2 = R2 + 1
store R1, R2      # memory[R1] = R2

CS 519: Operating System Theory

  12. Activation Record • We have only talked about allocation of local variables on the stack • The activation record is also used to store: • Parameters • The beginning of the previous activation record • The return address • … [Figure: activation record layout — local variables plus the other bookkeeping fields above] CS 519: Operating System Theory

  13. Run Time Storage Organization • Each variable must be assigned a storage class • Global (static) variables • Allocated in the globals region at compile time • Method local variables and parameters • Allocated dynamically on the stack • Dynamically created objects (using new) • Allocated from the heap • Objects live beyond the invocation of a method • Garbage collected when no longer “live” [Figure: memory divided into code, globals, stack, and heap regions] CS 519: Operating System Theory
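
The same storage classes can be seen directly in C. The sketch below is an assumed C analogue of the foo/cheat example (not code from the slides): yv sits in the globals region, tyv in each call's activation record on the stack, and an explicitly allocated object in the heap.

#include <stdlib.h>

int yv = 0;                         /* global: assigned a fixed location at compile time */

void cheat(void) {
    int tyv = yv;                   /* local: lives in this call's activation record on the stack */
    yv = yv + 1;
    if (tyv < 10)
        cheat();                    /* each recursive call pushes a fresh tyv */
}

int main(void) {
    int *obj = malloc(sizeof *obj); /* dynamically created object: allocated from the heap */
    *obj = 0;
    cheat();
    free(obj);                      /* heap objects live until freed (no garbage collector in C) */
    return 0;
}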

  14. Why Did We Talk About All That Stuff? • Process = system abstraction for the set of resources required for executing a program • = a running instance of a program • = memory image + registers’ content (+ I/O state) • The stack + registers’ content represent the execution context or thread of control CS 519: Operating System Theory

  15. What About The OS? • Recall that one of the functions of an OS is to provide a virtual machine interface that makes programming the machine easier • So, a process memory image must also contain the OS [Figure: memory contains the OS’s code, globals, heap, and stack alongside the process’s code, globals, heap, and stack] • The OS data space is used to store things like file descriptors for files being accessed by the process, status of I/O devices, etc. CS 519: Operating System Theory

  16. What Happens When There Is More Than One Running Process? [Figure: memory holds the OS plus the images of processes P0, P1, and P2, each with its own code, globals, stack, and heap] CS 519: Operating System Theory

  17. Process Control Block • Each process has per-process state maintained by the OS • Identification: process, parent process, user, group, etc. • Execution contexts: threads • Address space: virtual memory • I/O state: file handles (file system), communication endpoints (network), etc. • Accounting information • For each process, this state is maintained in a process control block (PCB) • This is just data in the OS data space • Think of it as objects of a class CS 519: Operating System Theory
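
As a rough illustration, a PCB can be pictured as a C struct along the following lines; the field names here are made up for this sketch, and real kernels (e.g., Linux's task_struct) keep far more state.

#define MAX_FILES 16

struct pcb {
    int            pid, ppid;             /* identification: process and parent */
    int            uid, gid;              /* user, group */
    struct tcb    *threads;               /* execution contexts: list of thread control blocks */
    struct vm_map *address_space;         /* virtual memory state */
    struct file   *open_files[MAX_FILES]; /* I/O state: file handles, communication endpoints */
    long           cpu_time;              /* accounting information */
    struct pcb    *next;                  /* linkage on OS-internal queues */
};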

  18. Process Creation • How to create a process? System call. • In UNIX, a process can create another process using the fork() system call • int pid = fork(); /* this is in C */ • The creating process is called the parent and the new process is called the child • The child process is created as a copy of the parent process (process image and process control structure) except for the identification and scheduling state • Parent and child processes run in two different address spaces • By default, there’s no memory sharing • Process creation is expensive because of this copying • The exec() call is provided for the newly created process to run a different program than that of the parent CS 519: Operating System Theory
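
A minimal, self-contained C sketch of the fork()/exec()/wait() pattern described above (the command run here, ls, is just an arbitrary example):

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                    /* child starts as a copy of the parent */
    if (pid < 0) { perror("fork"); exit(1); }
    if (pid == 0) {
        /* child: replace the copied image with a different program */
        execlp("ls", "ls", "-l", (char *)NULL);
        perror("execlp");                  /* only reached if exec fails */
        _exit(127);
    }
    int status;                            /* parent: wait for the child to finish */
    waitpid(pid, &status, 0);
    printf("child %d exited with status %d\n", (int)pid, WEXITSTATUS(status));
    return 0;
}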

  19. System Call In A Monolithic OS [Figure: id = fork() in user mode executes a trap instruction; using the interrupt vector for the trap instruction, the hardware saves the PSW and PC and enters the in-kernel code for the fork system call (here part of a monolithic OS with an in-kernel file system), which runs in kernel mode and returns to user mode with iret] CS 519: Operating System Theory

  20. Process Creation [Figure: fork() traps into the kernel’s fork() code, which allocates a new PCB and copies the parent’s image; exec() then loads a different program into the newly created child] CS 519: Operating System Theory

  21. Example of Process Creation Using Fork • The UNIX shell is a command-line interpreter whose basic purpose is to let the user run applications on a UNIX system • cmd arg1 arg2 ... argn CS 519: Operating System Theory
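
A toy version of that shell loop, assuming single-word commands with no arguments, pipes, or job control, might look like this:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    char cmd[256];
    for (;;) {
        printf("$ ");
        fflush(stdout);
        if (fgets(cmd, sizeof cmd, stdin) == NULL)   /* EOF: exit the shell */
            break;
        cmd[strcspn(cmd, "\n")] = '\0';              /* strip trailing newline */
        if (cmd[0] == '\0')
            continue;
        pid_t pid = fork();
        if (pid == 0) {                              /* child: run the command */
            execlp(cmd, cmd, (char *)NULL);
            perror(cmd);
            _exit(127);
        }
        waitpid(pid, NULL, 0);                       /* parent: wait for it to finish */
    }
    return 0;
}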

  22. Process Death (or Murder) • One process can wait for another process to finish using the wait() system call • Can wait for a child to finish as shown in the example • Can also wait for an arbitrary process if it knows its PID • Can kill another process using the kill() system call • What exactly happens when kill() is invoked? • What if the victim process doesn’t want to die? CS 519: Operating System Theory
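
A small sketch of kill() plus waitpid(): the parent forks a child that just waits for signals, sends it SIGTERM, and then reaps it. A victim that "doesn't want to die" can catch or ignore SIGTERM, but SIGKILL cannot be caught or ignored.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();
    if (pid == 0) {                /* child: block until a signal arrives */
        for (;;)
            pause();
    }
    sleep(1);                      /* parent: give the child time to start */
    kill(pid, SIGTERM);            /* ask the OS to deliver SIGTERM to the child */
    waitpid(pid, NULL, 0);         /* reap it so it does not linger as a zombie */
    printf("child %d terminated\n", (int)pid);
    return 0;
}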

  23. Process Swapping • May want to swap out entire process • Thrashing if too many processes competing for resources • To swap out a process • Suspend all of its threads • Must keep track of whether thread was blocked or ready • Copy all of its information to backing store (except for PCB) • To swap a process back in • Copy needed information back into memory, e.g. page table, thread control blocks • Restore each thread to blocked or ready • Must check whether event(s) has (have) already occurred CS 519: Operating System Theory

  24. Process State Diagram [Figure: a process moves between ready (in memory) and suspended (swapped out) via swap out and swap in transitions] CS 519: Operating System Theory

  25. Signals • OS may need to “upcall” into user processes • Signals • UNIX mechanism to upcall when an event of interest occurs • Potentially interesting events are predefined: e.g., segmentation violation, message arrival, kill, etc. • When interested in “handling” a particular event (signal), a process indicates its interest to the OS and gives the OS a procedure that should be invoked in the upcall. CS 519: Operating System Theory
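
A minimal example of registering interest in a signal with sigaction(); here the process asks the OS to upcall on_sigint() whenever SIGINT (Ctrl-C) occurs:

#include <signal.h>
#include <string.h>
#include <unistd.h>

static void on_sigint(int signo) {
    (void)signo;
    /* only async-signal-safe calls belong in a handler; write() is one */
    const char msg[] = "caught SIGINT\n";
    write(STDOUT_FILENO, msg, sizeof msg - 1);
}

int main(void) {
    struct sigaction sa;
    memset(&sa, 0, sizeof sa);
    sa.sa_handler = on_sigint;        /* the procedure the OS should upcall */
    sigemptyset(&sa.sa_mask);
    sigaction(SIGINT, &sa, NULL);     /* register interest in the SIGINT event */

    for (;;)
        pause();                      /* Ctrl-C now runs on_sigint instead of killing the process */
}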

  26. Signals (Cont’d) [Figure: the kernel pushes a frame for the signal handler onto the process’s stack, above the existing frames for A and B] • When an event of interest occurs, the kernel handles the event first, then modifies the process’s stack to look as if the process’s code made a procedure call to the signal handler. • When the user process is scheduled next, it executes the handler first • From the handler, the user process returns to where it was when the event occurred CS 519: Operating System Theory

  27. Inter-Process Communication • Most operating systems provide several abstractions for inter-process communication: message passing, shared memory, etc • Communication requires synchronization between processes (i.e. data must be produced before it is consumed) • Synchronization can be implicit (message passing) or explicit (shared memory) • Explicit synchronization can be provided by the OS (semaphores, monitors, etc) or can be achieved exclusively in user-mode (if processes share memory) CS 519: Operating System Theory
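
As one concrete message-passing example, a pipe between parent and child makes the synchronization implicit: the read() below blocks until the producer has written the data.

#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    pipe(fd);                              /* fd[0] = read end, fd[1] = write end */
    if (fork() == 0) {                     /* child: the consumer */
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof buf);   /* blocks until data is produced */
        if (n > 0)
            write(STDOUT_FILENO, buf, (size_t)n);
        _exit(0);
    }
    const char msg[] = "hello from the producer\n";
    write(fd[1], msg, sizeof msg - 1);     /* parent: the producer */
    wait(NULL);
    return 0;
}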

  28. Inter-Process Communication • More on shared memory and message passing later • Synchronization after we talk about threads CS 519: Operating System Theory

  29. A Tree of Processes On A Typical UNIX System CS 519: Operating System Theory

  30. Process: Summary • System abstraction – the set of resources required for executing a program (an instantiation of a program) • Execution context(s) • Address space • File handles, communication endpoints, etc. • Historically, all of the above “lumped” into a single abstraction • More recently, split into several abstractions • Threads, address space, protection domain, etc. • OS process management: • Supports creation of processes and interprocess communication (IPC) • Allocates resources to processes according to specific policies • Interleaves the execution of multiple processes to increase system utilization CS 519: Operating System Theory

  31. Threads • Thread of execution: stack + registers (which includes the PC) • Informally: where an execution stream is currently at in the program and the method invocation chain that brought the execution stream to the current place • Example: A called B which called C which called B which called C • The PC should be pointing somewhere inside C at this point • The stack should contain 5 activation records: A/B/C/B/C • Thread for short • Process model discussed thus far implies a single thread CS 519: Operating System Theory

  32. Multi-Threading • Why limit ourselves to a single thread? • Think of a web server that must service a large stream of requests • If only have one thread, can only process one request at a time • What to do when reading a file from disk? • Multi-threading model • Each process can have multiple threads • Each thread has a private stack • Registers are also private • All threads of a process share the code and heap • Objects to be shared across multiple threads should be allocated on the heap CS 519: Operating System Theory
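
A small pthreads sketch of this model: each thread gets its own stack, while the counter object they all update is allocated on the heap so every thread can see it (the update is deliberately unsynchronized here, which is exactly the kind of race the synchronization material addresses later).

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* shared across threads, so it lives on the heap rather than on any thread's stack */
struct counter { int value; };

static void *worker(void *arg) {
    struct counter *c = arg;        /* all threads see the same heap object */
    c->value++;                     /* unsynchronized: a data race */
    return NULL;
}

int main(void) {
    struct counter *c = malloc(sizeof *c);
    c->value = 0;
    pthread_t tid[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&tid[i], NULL, worker, c);   /* each thread gets its own private stack */
    for (int i = 0; i < 4; i++)
        pthread_join(tid[i], NULL);
    printf("value = %d\n", c->value);
    free(c);
    return 0;
}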

  33. Process Address Space Revisited [Figure: (a) single-threaded address space — OS, code, globals, heap, and one stack; (b) multi-threaded address space — the same, but with one stack per thread] CS 519: Operating System Theory

  34. Multi-Threading (cont) • Implementation • Each thread is described by a thread-control block (TCB) • A TCB typically contains • Thread ID • Space for saving registers • Pointer to thread-specific data not on stack • Observation • Although the model is that each thread has a private stack, threads actually share the process address space • ⇒ There’s no memory protection! • ⇒ Threads could potentially write into each other’s stacks CS 519: Operating System Theory
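
Illustratively, and with made-up field names, a TCB could be declared like this:

#define NREGS 32

struct tcb {
    int           tid;             /* thread ID */
    unsigned long regs[NREGS];     /* space for saving registers, including PC and SP */
    void         *thread_data;     /* pointer to thread-specific data not on the stack */
    struct tcb   *next;            /* linkage on the run queue or a blocked queue */
};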

  35. Thread Creation [Figure: a call to thread_create() allocates a new TCB and a new stack; the new thread’s PC is set to new_thread_starts_here and its SP to the top of the new stack] CS 519: Operating System Theory

  36. Context Switching • Suppose a process has multiple threads …uh oh … a uniprocessor machine only has 1 CPU … what to do? • In fact, even if we only had one thread per process, we would have to do something about running multiple processes … • We multiplex the multiple threads on the single CPU • At any instant in time, only one thread is running • At some point in time, the OS may decide to stop the currently running thread and allow another thread to run • This switching from one running thread to another is called context switching CS 519: Operating System Theory

  37. Diagram of Thread State CS 519: Operating System Theory

  38. Context Switching (cont) • How to do a context switch? • Save state of currently executing thread • Copy all “live” registers to the thread control block • For register-only machines, need at least 1 scratch register that points to the area of memory in the thread control block where the registers should be saved • Restore state of thread to run next • Copy values of live registers from the thread control block to the registers • When does context switching take place? CS 519: Operating System Theory
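
A high-level sketch of that save/restore step; swtch() is a hypothetical machine-dependent routine (it would have to be written in assembly) that stores the caller's registers into one context and loads them from another.

struct context { unsigned long regs[32]; };   /* space in the TCB for the live registers */

struct tcb {
    struct context ctx;
    /* ... plus thread ID, thread-specific data pointer, queue linkage ... */
};

/* hypothetical assembly routine: save registers into *from, load them from *to */
extern void swtch(struct context *from, struct context *to);

void context_switch(struct tcb *current, struct tcb *next) {
    /* execution continues wherever `next` last called swtch(); to `current`,
       it will simply look as if context_switch() took a long time to return */
    swtch(&current->ctx, &next->ctx);
}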

  39. Context Switching (cont) • When does context switching occur? • When the OS decides that a thread has run long enough and that another thread should be given the CPU • Remember how the OS gets control of the CPU back when it is executing user code? • When a thread performs an I/O operation and needs to block to wait for the completion of this operation • To wait for some other thread • Thread synchronization: we’ll talk about this a lot in a couple of lectures CS 519: Operating System Theory

  40. How Is the Switching Code Invoked? • user thread executing → clock interrupt → PC modified by hardware to “vector” to interrupt handler → user thread state is saved for restart → clock interrupt handler is invoked → disable interrupt checking → check whether current thread has run “long enough” → if yes, post asynchronous software trap (AST) → enable interrupt checking → exit interrupt handler → enter “return-to-user” code → check whether AST was posted → if not, restore user thread state and return to executing user thread; if AST was posted, call context switch code • Why need AST? CS 519: Operating System Theory

  41. How Is the Switching Code Invoked? (cont) • user thread executing → system call to perform I/O → user thread state is saved for restart → OS code to perform system call is invoked → I/O operation started (by invoking I/O driver) → set thread status to waiting → move thread’s TCB from run queue to wait queue associated with specific device → call context switching code CS 519: Operating System Theory

  42. Context Switching • At entry to CS, the return address is either in a register or on the stack (in the current activation record) • CS saves this return address to the TCB instead of the current PC • To the thread, it looks like CS just took a while to return! • If the context switch was initiated from an interrupt, the thread never knows that it has been context switched out and back in unless it looks at the “wall” clock CS 519: Operating System Theory

  43. Context Switching (cont) • Even that is not quite the whole story • When a thread is switched out, what happens to it? • How do we find it to switch it back in? • This is what the TCB is for. System typically has • A run queue that points to the TCBs of threads ready to run • A blocked queue per device to hold the TCBs of threads blocked waiting for an I/O operation on that device to complete • When a thread is switched out at a timer interrupt, it is still ready to run so its TCB stays on the run queue • When a thread is switched out because it is blocking on an I/O operation, its TCB is moved to the blocked queue of the device CS 519: Operating System Theory
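
A self-contained sketch of those queues (the names and the enqueue helper are invented for illustration):

#include <stddef.h>

#define NDEVICES 8
enum why_switched { TIMER_EXPIRED, BLOCKED_ON_IO };

struct tcb   { int tid; struct tcb *next; /* saved registers, etc. */ };
struct queue { struct tcb *head, *tail; };

static struct queue run_queue;                    /* TCBs of threads ready to run */
static struct queue blocked_queue[NDEVICES];      /* one blocked queue per I/O device */

static void enqueue(struct queue *q, struct tcb *t) {
    t->next = NULL;
    if (q->tail) q->tail->next = t; else q->head = t;
    q->tail = t;
}

/* When a thread is switched out, its TCB either stays runnable (timer expiry)
   or waits on the queue of the device whose I/O it is blocked on. */
void switch_out(struct tcb *t, enum why_switched why, int dev) {
    if (why == TIMER_EXPIRED)
        enqueue(&run_queue, t);
    else
        enqueue(&blocked_queue[dev], t);
}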

  44. Ready Queue And Various I/O Device Queues CS 519: Operating System Theory

  45. Switching Between Threads of Different Processes • What if switching to a thread of a different process? • Caches, TLB, page table, etc.? • Caches • Physical addresses: no problem • Virtual addresses: cache must either have process tag or must flush cache on context switch • TLB • Each entry must have process tag or must flush TLB on context switch • Page table • Typically have page table pointer (register) that must be reloaded on context switch CS 519: Operating System Theory

  46. Threads & Signals • What happens if kernel wants to signal a process when all of its threads are blocked? • When there are multiple threads, which thread should the kernel deliver the signal to? • OS writes into process control block that a signal should be delivered • Next time any thread from this process is allowed to run, the signal is delivered to that thread as part of the context switch • What happens if kernel needs to deliver multiple signals? CS 519: Operating System Theory

  47. Thread Implementation • Kernel-level threads (lightweight processes) • Kernel sees multiple execution contexts • Thread management done by the kernel • User-level threads • Implemented as a thread library which contains the code for thread creation, termination, scheduling and switching • Kernel sees one execution context and is unaware of thread activity • Can be preemptive or not CS 519: Operating System Theory

  48. User-Level vs. Kernel-Level Threads • Advantages of user-level threads • Performance: low-cost thread operations (do not require crossing protection domains) • Flexibility: scheduling can be application specific • Portability: user-level thread library easy to port • Disadvantages of user-level threads • If a user-level thread is blocked in the kernel, the entire process (all threads of that process) is blocked • Cannot take advantage of multiprocessing (the kernel assigns one process to only one processor) CS 519: Operating System Theory

  49. User-Level vs. Kernel-Level Threads [Figure: with user-level threads, thread scheduling happens in user space inside the process while the kernel only schedules processes onto the processor; with kernel-level threads, the kernel’s thread scheduling maps the threads themselves onto the processor] CS 519: Operating System Theory

  50. User-Level vs. Kernel-Level Threads • No reason why we shouldn’t have both • Most systems now support kernel threads • User-level threads are available as linkable libraries [Figure: user-level threads scheduled by a library in user space on top of kernel-level threads, which the kernel’s thread and process scheduling maps onto the processor] CS 519: Operating System Theory
