
Operating Systems CSE 411



Presentation Transcript


  1. Operating Systems CSE 411 CPU Management Sept. 29, 2006 - Lecture 11 Instructor: Bhuvan Urgaonkar

  2. Threads

  3. What’s wrong with a process?
  • Multi-programming was developed to allow multiplexing of the CPU and I/O
  • Multiple processes are given the illusion of running concurrently
  • Several applications would like to have multiple processes
    • Web server: when one process blocks on a file I/O call, another process can run on the CPU
  • What would be needed?
    • Ability to create/destroy processes on demand
      • We already know how the OS does this
    • We may want control over the scheduling of related processes
      • This is totally controlled by the OS scheduler
    • Processes may need to communicate with each other
      • Message passing (e.g., signals) and shared memory (coming up) both need OS assistance
    • Processes may need to be synchronized with each other (coming up)
      • Consider two Web server processes updating the same data
  • Things not very satisfactory with multi-process applications:
    1. Communication needs help from the OS (system calls)
    2. Duplication of the same code may waste memory
    3. PCBs are large and eat up precious kernel memory
    4. Process context switching imposes overheads
    5. No control over the scheduling of processes comprising the same application

  4. Kernel-level threads
  1. Communication between related processes needs OS system calls
  • OS intervention can be avoided if the processes were able to share some memory without any help from the OS
  • That is, we are looking for a way for multiple processes to have (almost) the same address space
    • Address space: code, data (global variables and heap), stack
  • Option #1: Share global variables
    • Problem: we don’t know in advance what communication may occur, so we do not know how much memory needs to be shared
  • Option #2: Share data (globals and heap)
  2. Duplication of code may waste memory
  • Option #3: Share code and data
    • Note: not all processes may want to execute the same code
    • Expose the same code to all; let each execute whatever part it wants to
    • Different threads may execute different parts of the code
  • What we have now are called kernel-level threads
    • They cycle through the same 5 states that we studied for a process
    • The OS provides system calls (analogous to fork, exit, exec) for kernel-level threads

  5. Kernel Threads
  • The PCB can contain things common across threads belonging to the process
  • Have a Thread Control Block (TCB) for things specific to a thread
  • Side effect: TCBs are smaller than PCBs, so they occupy less memory

  6. Things not very satisfactory with a process, and how we can address them
  Kernel-level threads help fix some problems with processes:
  • 1. Shared data: efficient communication made possible
  • 2. Shared code
  • 3. TCBs are smaller than PCBs => they take less kernel memory
  Now let us consider the remaining problems with processes:
  4. Process context switching imposes overhead
    - Do threads impose a smaller overhead?
  5. No control over the scheduling of processes comprising the same application
    - Do threads help us here?

  7. Context Switch Revisited
  A context switch involves:
  • Save all registers and the PC in the PCB: same for a kernel-level thread
  • Save the process state in the PCB: same for a kernel-level thread
    • Do not confuse “process state” (ready, waiting, etc.) with “processor state” (registers, PC) and “processor mode” (user or kernel)
  • Flush the TLB (not covered yet): not needed if the threads belong to the same process
  • Run the scheduler to pick the next process and change the address space: same for kernel-level threads belonging to different processes
  A context switch between threads of the same process is faster than a process context switch
    - Because the address-space-change operations can be skipped
  A context switch between threads of different processes is almost as expensive as a process context switch
  Note: SGG says thread creation and context switching are faster than process creation and context switching
    - Only creation is faster, not necessarily context switching!

  8. How can the context switch overhead be reduced?
  • If a multi-threaded application were able to switch between its threads without involving the OS …
  • Can this be achieved? What would it involve?
  • The application would have to:
    • Maintain a separate PC and stack for each thread
      • Easy to do: allocate and maintain PCs and stacks on the heap
    • Be able to switch from thread to thread in accordance with some scheduling policy
      • Need to save/restore processor state (PC, registers) while in user mode
      • Possible using the setjmp()/longjmp() calls

  9. setjmp() and longjmp()

  10. setjmp() and longjmp()

  11. Reducing the context switch overhead
  • Requirement 1: The application maintains a separate PC and stack for each thread
  • Requirement 2: The application has a way to switch from thread to thread without OS intervention
  • Final missing piece:
    • How does a thread scheduler get invoked? That is, when is a thread taken off the CPU and another scheduled?
    • Note: We are only concerned with threads of the same process. Why?
  • Strategy 1: Require all threads to yield the CPU periodically
  • Strategy 2: Set timers that send SIGALRM signals to the process “periodically”
    • E.g., UNIX: the setitimer() system call
    • Implement a signal handler
      • The handler saves the CPU state of the previously running thread using setjmp() into a jmp_buf struct
      • Copies the contents of the jmp_buf into a TCB on the heap
      • Calls the thread scheduler, which picks the next thread to run
      • Copies the CPU state of the chosen thread from the heap into a jmp_buf and calls longjmp()
  • What we have now are called user-level threads

  12. Kernel-level threads
  • Pro: The OS knows about all the threads in a process
    • Can assign different scheduling priorities to each one
    • Can context switch between multiple threads in one process
  • Con: Thread operations require calling the kernel
    • Creating, destroying, or context switching between threads requires system calls

  13. User-level threads
  • Pro: Thread operations are very fast
    • Typically 10-100x faster than going through the kernel
  • Pro: Thread state is very small
    • Just CPU state and a stack
  • Con: If one thread blocks, the entire process stalls
  • Con: Can’t use multiple CPUs!
    • The kernel only knows one CPU context
  • Con: The OS may not make good scheduling decisions
    • Could schedule a process with only idle threads
    • Could de-schedule a process with a thread holding a lock

  14. Signal Handling with Threads
  • Recall: Signals are used in UNIX systems to notify a process that a particular event has occurred
  • Recall: A signal handler is used to process signals
    - A signal is generated by a particular event
    - The signal is delivered to a process
    - The signal is handled
  • Options:
    • Deliver the signal to the thread to which the signal applies
    • Deliver the signal to every thread in the process
    • Deliver the signal to certain threads in the process
    • Assign a specific thread to receive all signals for the process

  15. Signal Handling (more)
  • When does a process handle a signal?
    • Whenever it gets scheduled next after the generation of the signal
  • We said the OS marks some members of the PCB to indicate that a signal is due
  • And we said the process will execute the signal handler when it gets scheduled
    • But its PC had some other address!
      • The address of the instruction the process was executing when it was last scheduled
  • This is a complex task due to the need to juggle stacks carefully while switching between user and kernel mode

  16. Signal Handling (more)
  • Remember that signal handlers are functions defined by processes and included in the user-mode code segment
    • They are executed in user mode in the process’s context
  • The OS forces the handler’s starting address into the program counter
  • The user-mode stack is modified by the OS so that process execution starts at the signal handler

  17. Combining the benefits of kernel and user-level threads • Read from text: Sections 4.2, 4.3, 4.4.6

  18. Inter-process Communication (IPC)
  • Two fundamental ways:
    • Shared memory
      • E.g., playing tic-tac-toe or chess
    • Message passing
      • E.g., letters, email
  • Any communication involves a combination of these two

  19. IPC: Message Passing
  • The OS provides system calls that processes/threads can use to pass messages to each other
  • A thread library could provide user-level calls for the same
    • OS not involved

  20. IPC: Shared Memory
  • The OS provides system calls with which processes can create shared memory that they can read from and write to
  • Threads: can share memory without OS intervention

  21. Process/Thread Synchronization
  • A fundamental problem that needs to be solved to enable IPC
  • We will study it next time
