
Threads


Presentation Transcript


  1. Threads Chapter 4 • Threads are a subdivision of processes • Since less information is associated with a thread than with a process, thread switching is easier (and faster) than process switching Chapter 4

  2. Process Characteristics • Unit of resource ownership - process is allocated: • a virtual address space to hold the process image • control of some resources (files, I/O devices...) • Unit of execution - process is an execution path through one or more programs • execution may be interleaved with other processes • the process has an execution state and a priority Chapter 4

  3. Process Characteristics • These two characteristics are treated independently by some recent OSs • The unit of execution is usually referred to as a thread or a lightweight process (the book speaks of the unit of dispatching, a similar concept) • The unit of resource ownership is usually referred to as a process or task Chapter 4

  4. Multithreading vs. Single threading • Multithreading: the OS supports multiple threads of execution within a single process • Single threading: the OS does not recognize the concept of a thread • MS-DOS supports a single user process and a single thread • Traditional UNIX supports multiple user processes but only one thread per process • Solaris, Linux, Windows NT and 2000, and OS/2 support multiple threads Chapter 4

  5. Threads and Processes Chapter 4

  6. Processes • Have a virtual address space which holds the process image • Protected access to processors, other processes, files, and I/O resources Chapter 4

  7. Threads • Have execution state (running, ready, etc.) • Save thread context when not running • Have private storage for local variables and execution stack • Have shared access to the address space and resources (files…) of their process • when one thread alters (non-private) data, all other threads (of the process) can see this • threads communicate via shared variables • a file opened by one thread is available to others Chapter 4
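To make the shared address space concrete, here is a minimal sketch in C using POSIX threads (an assumption: the slides name no particular thread API). A global variable written by one worker thread is visible to the main thread with no copying; the join only provides ordering, and proper synchronization is discussed on a later slide.

/* Minimal sketch (POSIX threads): threads of one process share the same
 * address space, so a global written by one thread is seen by another. */
#include <pthread.h>
#include <stdio.h>

static int shared_value = 0;   /* lives in the process image, shared by all threads */

static void *writer(void *arg)
{
    shared_value = 42;         /* no copy: both threads refer to the same variable */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, writer, NULL);
    pthread_join(t, NULL);     /* join orders the write before the read below */
    printf("main thread sees shared_value = %d\n", shared_value);
    return 0;
}

Compile with gcc -pthread.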

  8. Single Threaded and Multithreaded Process Models Thread Control Block contains a register image, thread priority and thread state information (compare with Fig. 3.14) Chapter 4

  9. Benefits of Threads vs Processes • It takes less time to create a new thread than a new process • Less time to terminate a thread than a process • Less time to switch between two threads within the same process than to switch between processes • Mainly because a thread is associated with less information. Chapter 4
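As a rough, Linux/POSIX illustration of the creation-cost claim (not a rigorous benchmark, and not taken from the slides), the sketch below creates and reaps N no-op threads and then N no-op child processes and times both loops; on most systems the thread loop finishes noticeably faster, since no new address space or resource tables have to be set up.

/* Rough illustration only: timing N thread creations vs. N process creations.
 * Numbers vary widely by system; the point is the relative cost. */
#define _POSIX_C_SOURCE 200809L
#include <pthread.h>
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define N 1000

static void *noop(void *arg) { return NULL; }

static double elapsed_ms(struct timespec a, struct timespec b)
{
    return (b.tv_sec - a.tv_sec) * 1e3 + (b.tv_nsec - a.tv_nsec) / 1e6;
}

int main(void)
{
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {          /* create and join N threads */
        pthread_t t;
        pthread_create(&t, NULL, noop, NULL);
        pthread_join(t, NULL);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d threads:   %.1f ms\n", N, elapsed_ms(t0, t1));

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < N; i++) {          /* fork and reap N child processes */
        pid_t pid = fork();
        if (pid == 0) _exit(0);            /* child does nothing */
        waitpid(pid, NULL, 0);
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);
    printf("%d processes: %.1f ms\n", N, elapsed_ms(t0, t1));
    return 0;
}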

  10. Benefits of Threads • Example: a file server on a LAN • It needs to handle several file requests over a short period • Hence it is more efficient to create (and destroy) a thread for each request than a process for each request • Such threads share the server's files and variables • With Symmetric Multiprocessing, different threads can execute simultaneously on different processors • Example 2: one thread displays the menu and reads user input while another thread executes the user's commands Chapter 4
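A simplified sketch of the thread-per-request pattern described above, in C with POSIX threads; the "requests" are just strings in an array rather than real network messages, so the request contents and handler are invented for the illustration.

/* Thread-per-request sketch: each incoming request gets a short-lived thread,
 * and all workers share the process's open files and variables. */
#include <pthread.h>
#include <stdio.h>

#define NREQ 3

static const char *requests[NREQ] = { "read file A", "read file B", "stat file C" };

static void *handle_request(void *arg)
{
    const char *req = arg;
    printf("worker thread handling: %s\n", req);   /* a real server would do file I/O here */
    return NULL;
}

int main(void)
{
    pthread_t workers[NREQ];

    for (int i = 0; i < NREQ; i++)                 /* one cheap thread per request */
        pthread_create(&workers[i], NULL, handle_request, (void *)requests[i]);

    for (int i = 0; i < NREQ; i++)
        pthread_join(workers[i], NULL);
    return 0;
}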

  11. Application benefits of threads • Consider an application that consists of several independent parts that do not need to run in sequence • Each part can be implemented as a thread • Whenever one thread is blocked waiting for I/O, execution can switch to another thread of the same application (instead of switching to another process) Chapter 4

  12. Benefits of Threads • Since threads within the same process share memory and files, they can communicate with each other without invoking the kernel • It is therefore necessary to synchronize the activities of the various threads so that they do not obtain inconsistent views of the data (Chapter 5) Chapter 4
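A minimal sketch of the synchronization this slide points forward to (Chapter 5), again assuming POSIX threads: two threads update a shared counter, and a mutex keeps the increments from interleaving into an inconsistent value.

/* Two threads update shared data; the mutex serializes the critical section. */
#include <pthread.h>
#include <stdio.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *increment(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);     /* enter critical section */
        counter++;                     /* shared data: only one thread at a time */
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t a, b;
    pthread_create(&a, NULL, increment, NULL);
    pthread_create(&b, NULL, increment, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}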

  13. Suspension and Termination of Processes and Threads • Suspending a process involves suspending all threads of the process • Terminating a process terminates all threads within the process • This is because all such threads share the same address space within the process Chapter 4

  14. Thread States • As for processes, three key states: Running, Ready, Blocked • Threads do not have a Suspend state, because suspension applies to the address space, which all threads within the same process share (same memory) • A thread is created by another thread using an operation often called Spawn • Finish is the state of a thread that is completing Chapter 4
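As an illustration (assuming POSIX threads; the slide itself does not tie the states to any API), the sketch below maps the operations to pthread calls: Spawn ≈ pthread_create, Block ≈ pthread_cond_wait, Unblock ≈ pthread_cond_signal, Finish ≈ returning from the thread function.

/* Thread lifecycle sketch: spawn a thread, let it block on a condition,
 * unblock it from the main thread, then let it finish. */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  c = PTHREAD_COND_INITIALIZER;
static int event_happened = 0;

static void *waiter(void *arg)
{
    pthread_mutex_lock(&m);
    while (!event_happened)            /* Block: wait until the event occurs */
        pthread_cond_wait(&c, &m);
    pthread_mutex_unlock(&m);
    printf("waiter unblocked, finishing\n");
    return NULL;                       /* Finish */
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, waiter, NULL);   /* Spawn */

    pthread_mutex_lock(&m);
    event_happened = 1;
    pthread_cond_signal(&c);                  /* Unblock the waiter */
    pthread_mutex_unlock(&m);

    pthread_join(t, NULL);
    return 0;
}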

  15. Remote Procedure Call Using one thread only Chapter 4

  16. Remote Procedure Call Using Multiple Threads: less waiting time Chapter 4
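A hedged illustration of the multi-threaded RPC figure: each "remote call" is simulated with a one-second sleep, and because the two calls are issued from separate threads they overlap, so the total wait is about one second instead of two. The server names are invented for the example.

/* Simulated RPCs issued concurrently from two threads. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static void *remote_call(void *arg)
{
    sleep(1);                                /* stands in for a network round-trip */
    printf("RPC to %s returned\n", (const char *)arg);
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    time_t start = time(NULL);

    pthread_create(&t1, NULL, remote_call, "server A");
    pthread_create(&t2, NULL, remote_call, "server B");
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);

    printf("both RPCs done in ~%ld s\n", (long)(time(NULL) - start));
    return 0;
}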

  17. Thread management • Thread management can be done in one of three fundamental ways: • User-level threads: threads are entirely managed by the application • Kernel-level threads: threads are entirely managed by the OS kernel • Combined approaches Chapter 4

  18. User-Level Threads (ULT) (ex. Standard UNIX) • The kernel is not aware of the existence of threads • All thread management is done by the application using a thread library • Thread switching does not require kernel mode privileges (no mode switch) • Scheduling is application specific Chapter 4

  19. Threads library • Contains code for: • creating and destroying threads • passing messages and data between threads • scheduling thread execution • saving and restoring thread contexts Chapter 4
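To suggest what "saving and restoring thread contexts" looks like inside a user-level library, here is a sketch using the POSIX ucontext facility (still available on Linux, though withdrawn from newer POSIX editions); it illustrates user-space context switching with no kernel involvement per switch, and is not a real thread library.

/* One hand-built user-level thread: its context and stack are created and
 * switched entirely in user space. */
#include <stdio.h>
#include <ucontext.h>

static ucontext_t main_ctx, thr_ctx;
static char thr_stack[64 * 1024];          /* private stack for the user-level thread */

static void thread_body(void)
{
    printf("user-level thread running\n");
    swapcontext(&thr_ctx, &main_ctx);      /* yield: registers saved/restored in user space */
    printf("user-level thread resumed, finishing\n");
}

int main(void)
{
    getcontext(&thr_ctx);                  /* initialize, then customize stack and link */
    thr_ctx.uc_stack.ss_sp   = thr_stack;
    thr_ctx.uc_stack.ss_size = sizeof thr_stack;
    thr_ctx.uc_link          = &main_ctx;  /* where control goes when thread_body returns */
    makecontext(&thr_ctx, thread_body, 0);

    printf("main: dispatching user-level thread\n");
    swapcontext(&main_ctx, &thr_ctx);      /* first dispatch */
    printf("main: back, dispatching it once more\n");
    swapcontext(&main_ctx, &thr_ctx);      /* resume where it yielded */
    printf("main: done\n");
    return 0;
}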

  20. Kernel activity for ULTs • The kernel is not aware of thread activity, but it still manages process activity • When a thread makes a blocking system call, the whole process is blocked • But for the thread library, that thread is still in the Running state • So thread states are independent of process states Chapter 4

  21. Advantages and disadvantages of ULT • Advantages: • Thread switching does not involve the kernel: no mode switch • Scheduling can be application specific: choose the best algorithm • Can run on any OS; only a thread library is needed • Disadvantages: • Most system calls are blocking for the process, so all threads within the process will be blocked • The kernel can only assign processes to processors: two threads within the same process cannot run simultaneously on two processors Chapter 4

  22. Kernel-Level Threads (KLT) Ex: Windows NT/2000, Linux, and OS/2 • All thread management is done by the kernel • No thread library, but an API to the kernel thread facility • The kernel maintains context information for the process and its threads • Switching between threads requires the kernel • Scheduling is done on a per-thread basis Chapter 4

  23. Advantages and disadvantages of KLT • Advantages: • The kernel can simultaneously schedule many threads of the same process on many processors • Blocking is done at the thread level • Kernel routines can themselves be multithreaded • Disadvantages: • Thread switching within the same process involves the kernel: two mode switches per thread switch • This may result in a significant slowdown, although the kernel may still switch threads faster than a user-level library can Chapter 4

  24. Combined ULT/KLT Approaches (Solaris) • Thread creation done in the user space • Bulk of scheduling and synchronization of threads done in the user space • The programmer may adjust the number of KLTs • Combines the best of both approaches Chapter 4

  25. Solaris • A process includes the user's address space, stack, and process control block • User-level threads (threads library) • invisible to the OS • are the interface for application parallelism • Kernel threads • the unit that can be dispatched on a processor • Lightweight processes (LWP) • each LWP supports one or more ULTs and maps to exactly one KLT • LWPs are visible to applications • therefore LWPs are the way the user sees the KLTs • they constitute a sort of 'virtual CPU' Chapter 4

  26. Process 2 is equivalent to a pure ULT approach ( = Unix) Process 4 is equivalent to a pure KLT approach ( = Win-NT, OS/2) We can specify a different degree of parallelism (process 3 and 5) Chapter 4

  27. Solaris: versatility • We can use ULTs when logical parallelism does not need to be supported by hardware parallelism (we save mode switching) • Ex: Multiple windows but only one is active at any one time • If ULT threads can block then we can add two or more LWPs to avoid blocking the whole application • Note versatility of SOLARIS that can operate like Windows-NT or like conventional Unix Chapter 4

  28. Solaris: user-level thread execution (threads library). • Transitions among states are under the control of the application: • they are caused by calls to the thread library • It’s only when a ULT is in the active state that it is attached to a LWP (so that it will run when the kernel level thread runs) • A thread may transfer to the sleeping state by invoking a synchronization primitive (chap 5) and later transfer to the runnable state when the event waited for occurs • A thread may force another thread to go to the stop state Chapter 4

  29. Solaris: user-level thread states (attached to a LWP) Chapter 4

  30. Decomposition of user-level Active state • When a ULT is Active, it is associated with an LWP and thus with a KLT • Transitions among the LWP states are under the exclusive control of the kernel • An LWP can be in the following states: • running: assigned to a CPU and executing • blocked: the KLT issued a blocking system call (but the ULT remains bound to that LWP and remains Active) • runnable: waiting to be dispatched to a CPU Chapter 4

  31. Chapter 4

  32. Solaris: Lightweight Process States LWP states are independent of ULT states (except for bound ULTs) Chapter 4

  33. Multiprocessing: Categories of Systems • Single Instruction Single Data (SISD) • single processor executes a single instruction stream to operate on data stored in a single memory • Single Instruction Multiple Data (SIMD) • each instruction is executed on a different set of data by the different processors • typical application: matrix calculations • Multiple Instruction Single Data (MISD) • several CPUs execute on a single set of data. Never implemented, probably not practical • Multiple Instruction Multiple Data (MIMD) • a set of processors simultaneously execute different instruction sequences on different data sets Chapter 4

  34. Chapter 4

  35. Symmetric Multiprocessing • The kernel can execute on any processor • Typically each processor does self-scheduling from the pool of available processes or threads Chapter 4
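A small Linux-specific sketch (sched_getcpu is a glibc extension, so this is not portable): several threads of one process report the processor they are currently running on, illustrating that the kernel dispatches them across the CPUs of an SMP machine. Which CPUs appear depends entirely on the scheduler.

/* Each thread prints the CPU it happens to be running on. */
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

#define NTHREADS 4

static void *report_cpu(void *arg)
{
    long id = (long)arg;
    printf("thread %ld running on CPU %d\n", id, sched_getcpu());
    return NULL;
}

int main(void)
{
    pthread_t t[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&t[i], NULL, report_cpu, (void *)i);
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(t[i], NULL);
    return 0;
}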

  36. Chapter 4

  37. Design Considerations • Managing kernel routines • they can be executed simultaneously by several CPUs • Scheduling • can be performed by any CPU • Interprocess synchronization • becomes even more critical when several CPUs may attempt to access the same data at the same time • Memory management • becomes more difficult when several CPUs may be sharing the same memory • Reliability or fault tolerance • if one CPU fails, the others should be able to keep the system functioning Chapter 4

  38. Microkernels: a trend • Small operating system core • Contains only essential operating systems functions • Many services traditionally included in the operating system kernel are now external subsystems • device drivers • file systems • virtual memory manager • windowing system • security services Chapter 4

  39. Layered Kernel vs Microkernel Chapter 4

  40. Client-server architecture in microkernels • the user process is the client • the various subsystems are servers Chapter 4
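A conceptual sketch of this client-server interaction, not any real microkernel API: a "client" process sends a request message to a "server" process over a pipe and blocks until the reply arrives, instead of obtaining the service through a direct procedure or system call. The message format and operation code are invented for the illustration.

/* Message-passing client/server sketch using two pipes between a parent
 * (client) and a forked child (server). */
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

struct msg {
    int  op;            /* requested operation code */
    char payload[32];   /* request or reply data */
};

int main(void)
{
    int to_server[2], to_client[2];
    pipe(to_server);
    pipe(to_client);

    if (fork() == 0) {                        /* server process */
        struct msg m;
        read(to_server[0], &m, sizeof m);     /* receive request */
        snprintf(m.payload, sizeof m.payload, "result of op %d", m.op);
        write(to_client[1], &m, sizeof m);    /* send reply */
        _exit(0);
    }

    struct msg req = { .op = 7 };             /* client: build and send request */
    strcpy(req.payload, "please do op 7");
    write(to_server[1], &req, sizeof req);

    struct msg reply;
    read(to_client[0], &reply, sizeof reply); /* block until the server replies */
    printf("client got reply: %s\n", reply.payload);

    wait(NULL);
    return 0;
}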

  41. Benefits of a Microkernel Organization • Uniform interface for requests made by a process • All services are provided by means of message passing • Extensibility and flexibility • Facilitates the addition and removal of services and functionalities • Portability • Changes needed to port the system to a new processor may be limited to the microkernel • Reliability • Modular design • A small microkernel can be rigorously tested Chapter 4

  42. More Benefits of a Microkernel Organization • Distributed system support • Messages are sent without knowing what the target machine is • Object-oriented operating system • Components are objects with clearly defined interfaces Chapter 4

  43. Microkernel Elements • Low-level memory management • elementary mechanisms for memory allocation • support for more complex strategies such as virtual memory • Inter-process communication • I/O and interrupt management Chapter 4

  44. Layered Kernel vs Microkernel Chapter 4

  45. Microkernel Performance • A microkernel architecture may perform worse than a traditional layered architecture • Communication between the various subsystems causes overhead • Message-passing mechanisms are slower than a simple system call Chapter 4

  46. Important concepts of Chapter 4 • Threads and how they differ from processes • User-level threads, kernel-level threads, and combinations • Thread states and process states • Different types of multiprocessing • Microkernel architecture Chapter 4
