
Threads


Presentation Transcript


  1. Threads • A context switch between processes is an expensive operation; it leads to high overhead • A thread is an alternative model for executing a program that incurs a smaller overhead when switching between threads of the same application. Q: How is this achieved?

  2. Thread switching overhead • A process context switch involves: 1. saving the context of the currently operating process 2. saving its CPU state 3. loading the context of the new process 4. loading its CPU state

  3. A thread is a program execution within the context of a process (i.e., it uses the resources of a process); many threads can be created within the same process • Switching between threads of the same process involves much less switching overhead

  4. Process and its threads (figure)

  5. OOP with C++ & Java – D. Samanta – PHI • Multithreading means multiple flows of control – two advantages • 1. Better utilization of resources • 2. Several problems are solved better by multiple threads – examples follow

  6. Examples • A program that displays animation, plays music, and downloads files from the net at the same time • A network simulation example – ad hoc networks

  7. Java is a multithreaded language • Because Java threads run in the same memory space, they can easily communicate among themselves: an object in one thread can call a method in another thread without any overhead required from the OS

  8. Creating threads in Java • t1 = new TestThread("thread1", (int) (Math.random() * 2000)); • t2 and t3 are created in the same way • t2.start(); • t1.start(); • t3.start();
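
A minimal, self-contained sketch of what the TestThread class in the snippet above might look like; the class name and constructor arguments follow the slide, while the body of run() (print a few lines, sleeping for the given interval in between) is assumed for illustration.

public class TestThread extends Thread {
    private final int sleepMillis;

    public TestThread(String name, int sleepMillis) {
        super(name);                        // thread name, e.g. "thread1"
        this.sleepMillis = sleepMillis;     // random interval, as in the slide
    }

    @Override
    public void run() {
        for (int i = 0; i < 3; i++) {
            System.out.println(getName() + ": iteration " + i);
            try {
                Thread.sleep(sleepMillis);  // give other threads a chance to run
            } catch (InterruptedException e) {
                return;                     // stop if interrupted
            }
        }
    }

    public static void main(String[] args) {
        Thread t1 = new TestThread("thread1", (int) (Math.random() * 2000));
        Thread t2 = new TestThread("thread2", (int) (Math.random() * 2000));
        Thread t3 = new TestThread("thread3", (int) (Math.random() * 2000));
        t2.start();   // start order follows the slide; the actual interleaving is up to the scheduler
        t1.start();
        t3.start();
    }
}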

  9. Thread methods in Java • start(), stop(), suspend(), resume(), sleep(int n), setPriority(int p), yield() – the yield method causes the run time to switch the context from the current thread to the next available runnable thread • Speciality – these facilities are provided at the language level, under user control
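
A small sketch of the priority and yield facilities listed above; the class name and thread names are invented for the example.

public class YieldDemo {
    public static void main(String[] args) {
        Runnable work = () -> {
            for (int i = 0; i < 3; i++) {
                System.out.println(Thread.currentThread().getName() + " step " + i);
                Thread.yield();                    // hint to the run time: switch to another runnable thread
            }
        };
        Thread low = new Thread(work, "low");
        Thread high = new Thread(work, "high");
        low.setPriority(Thread.MIN_PRIORITY);      // priorities set before starting the threads
        high.setPriority(Thread.MAX_PRIORITY);
        low.start();
        high.start();
    }
}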

  10. Threads • Where are threads useful? • If two processes share the same address space and the same resources, they have identical context • switching between these processes involves saving and reloading of their contexts. This overhead is redundant. • In such situations, it is better to use threads.

  11. Threads in process Pi : (a) concept, (b) implementation

  12. Advantages of threads

  13. Different kinds of threads • Kernel-level threads: Threads are created through system calls. The kernel is aware of their existence, and schedules them • User-level threads: Threads are created and maintained by a thread library, whose routines exist as parts of a process. The kernel is oblivious of their existence • Hybrid threads: A combination of the above

  14. Q: Why three kinds of threads? A: Consider switching overhead, concurrency/parallelism

  15. Scheduling of kernel-level threads

  16. Scheduling of kernel-level threads (flowchart): on an event, save the thread state and perform event processing and scheduling; if the selected thread belongs to the same process, dispatch it directly, otherwise save the process context and load the new process context before dispatching the thread

  17. Scheduling of user-level threads

  18. Kernel-level and user-level threads • Thread switching overhead • Concurrency and parallelism

  19. Actions of the threads library: N, R, B indicate running, ready and blocked --- this schematic should be replaced by the schematic of Ex. 3.6.

  20. Hybrid thread models • Hybrid threads have elements of both kernel-level and user-level threads • Each user-level thread has a thread control block (TCB) • Each kernel-level thread has a kernel thread control block (KTCB) • Three models of associating user-level and kernel-level threads • One-to-one association • Many-to-one association • Many-to-many association

  21. Hybrid thread models • This schematic should be added (Fig. 3.15)

  22. Process Interactions Processes may interact in four different ways: • 1. Sharing of data – Data consistency should be preserved • 2. Synchronization of actions – Processes should perform their actions in a desired order • 3. Passing of messages – Messages should be stored and delivered appropriately

  23. 4. Sending of Signals – Processes should be able to send signals to other processes and specify signal handling actions when signals are sent to them The OS performs message passing and provides facilities for the other three modes of interaction
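
Message passing itself is performed by the OS, but the idea can be sketched in Java with threads standing in for processes and a bounded queue standing in for the kernel's message buffer; all of the names below are illustrative.

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

public class MessageDemo {
    public static void main(String[] args) {
        BlockingQueue<String> mailbox = new ArrayBlockingQueue<>(10);   // messages are stored here

        Thread sender = new Thread(() -> {
            try {
                mailbox.put("request #1");      // blocks if the mailbox is full
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread receiver = new Thread(() -> {
            try {
                String msg = mailbox.take();    // blocks until a message is delivered
                System.out.println("received: " + msg);
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        receiver.start();
        sender.start();
    }
}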

  24. An example of data sharing: airline reservations • Multiple processes – each process services one agent terminal

  25. Race conditions • Let Ds be a shared variable • Let Ai and Aj be processes that share this variable and operate concurrently • Ai : Ds = Ds + 10; • Aj : Ds = Ds + 5; • Because of concurrency there is uncertainty about the result – a race condition • Solution – mutual exclusion
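
The uncertainty can be reproduced in Java; in this sketch (class and method names are illustrative) two threads repeatedly perform the updates Ds = Ds + 10 and Ds = Ds + 5 without mutual exclusion, and updates are lost because the read-modify-write is not atomic. Declaring the update method synchronized provides the mutual exclusion named as the solution.

public class RaceDemo {
    static int ds = 0;                              // the shared variable Ds

    // Unsafe: ds = ds + n is a read-modify-write; two threads may interleave and lose an update.
    static void unsafeAdd(int n) { ds = ds + n; }

    // Safe: synchronized enforces mutual exclusion over the update.
    static synchronized void safeAdd(int n) { ds = ds + n; }

    public static void main(String[] args) throws InterruptedException {
        Thread ai = new Thread(() -> { for (int k = 0; k < 100000; k++) unsafeAdd(10); });
        Thread aj = new Thread(() -> { for (int k = 0; k < 100000; k++) unsafeAdd(5); });
        ai.start(); aj.start();
        ai.join(); aj.join();
        // Expected 1500000 if no updates were lost; with unsafeAdd the printed value is usually smaller.
        System.out.println("Ds = " + ds);
    }
}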

  26. Race conditions in data sharing Results of operations performed on shared data may be wrong if race conditions exist. Let us see why this may be so • Let operations Oi and Oj update the value of a shared data item d, and let fi(d) and fj(d) represent the value of d after the operations • Processes Pi and Pj perform operations Oi and Oj • If Oi is performed before Oj, the resulting value of d is fj(fi(d)) • If Oj is performed before Oi, the resulting value of d is fi(fj(d))

  27. Race conditions in data sharing • A race condition is a situation in which the result of performing operations Oi and Oj in two processes is neither fi(fj(d)) nor fj(fi(d)). • Q: Why do race conditions exist?

  28. Data sharing by processes of a reservations system

  29. Race condition in the airline reservations system • Race conditions in the airline reservations system may have two consequences: • nextseatno may not be updated properly • Same seat number may be allocated to two passengers
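
A sketch of how the same-seat consequence arises, with each agent terminal served here by a thread; nextseatno is the shared variable from the slide, while the class and method names and the capacity are invented for illustration.

import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class ReservationDemo {
    static int nextseatno = 1;
    static final int CAPACITY = 200;

    // Unsafe: two agents can read the same nextseatno before either increments it,
    // so the same seat number may be allocated to two passengers.
    static int bookUnsafe() {
        if (nextseatno <= CAPACITY) {
            int seat = nextseatno;      // read
            nextseatno = seat + 1;      // update -- another agent may interleave between the two
            return seat;
        }
        return -1;                      // flight full
    }

    // Safe: the availability check and the update form one critical section.
    static synchronized int bookSafe() {
        return (nextseatno <= CAPACITY) ? nextseatno++ : -1;
    }

    public static void main(String[] args) throws InterruptedException {
        List<Integer> seats = Collections.synchronizedList(new ArrayList<>());
        Runnable agent = () -> { for (int i = 0; i < 50; i++) seats.add(bookUnsafe()); };
        Thread a1 = new Thread(agent), a2 = new Thread(agent);
        a1.start(); a2.start();
        a1.join(); a2.join();
        long distinct = seats.stream().distinct().count();
        System.out.println("allocations: " + seats.size() + ", distinct seat numbers: " + distinct);
    }
}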

  30. Race conditions in the airline reservations system

  31. Control synchronization between processes: operation sj should be performed after si (timing diagram)

  32. (Fig. a) Process Pi executes si first and then Pj executes sj • (Fig. b) Process Pj is not allowed to execute sj till Pi executes si
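
A sketch of the situation in Fig. b using Java threads: Pj blocks until Pi has executed si. The latch and all names are illustrative; java.util.concurrent.CountDownLatch is used here simply as a convenient signalling mechanism.

import java.util.concurrent.CountDownLatch;

public class OrderDemo {
    public static void main(String[] args) {
        CountDownLatch siDone = new CountDownLatch(1);

        Thread pi = new Thread(() -> {
            System.out.println("Pi: executing si");
            siDone.countDown();              // signal that si has been performed
        });

        Thread pj = new Thread(() -> {
            try {
                siDone.await();              // Pj is not allowed to proceed until si is done
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
                return;
            }
            System.out.println("Pj: executing sj");
        });

        pj.start();                          // start order does not matter; the latch enforces si before sj
        pi.start();
    }
}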

  33. Ex. 4.6 (1st edn.) – a design to reduce the time of execution • Y = HCF(Amax, X), where A is an array of n elements • Insert Y in the array • Keep array A in ascending order • To determine the number of processes and the computations in each process, the problem is split into the following steps

  34. 1. Read the n elements of array A • 2. Find Amax • 3. Read X • 4. Determine Y = HCF(Amax, X) • 5. Include Y in array A and keep the array in ascending order • Each step is considered to be a separate process – then determine which of these processes interact • Steps 2, 4, and 5 are interacting processes • Concurrency can be achieved by splitting steps 2 and 5 into two parts each

  35. 2(a) Copy array A to array B • 2(b) Find Amax • 5(a) Arrange array B in ascending order • 5(b) Include Y in array B at the appropriate place

  36. The resulting processes (precedence graph in the figure): • P1 – read the elements of A and copy A to B • P2 – find Amax • P3 – read X • P4 – Y = HCF(Amax, X) • P5 – arrange B in ascending order • P6 – include Y in array B

  37. Which are the processes that can run concurrently? • P1 and P3 can run concurrently • P1 and P2 cannot run concurrently as they share array A; P2 must go after P1 finishes • P4 can be initiated only after P2 and P3 terminate, since it needs Amax and X • P5 can be initiated after P1 terminates, while P6 can start only after P4 and P5 terminate
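
These precedence constraints map directly onto Java threads and join(), as in the sketch below; the array contents and the value of X are hard-coded stand-ins for the read steps, and the hcf helper and class name are assumptions.

import java.util.Arrays;

public class HcfPipeline {
    static int[] a = {12, 48, 36, 60}, b;    // a stands in for the elements read by P1
    static int amax, x = 18, y;              // x stands in for the value read by P3

    static int hcf(int p, int q) { return q == 0 ? p : hcf(q, p % q); }

    public static void main(String[] args) throws InterruptedException {
        Thread p1 = new Thread(() -> b = Arrays.copyOf(a, a.length));              // P1: read A, copy A to B
        Thread p3 = new Thread(() -> { /* X would be read here */ });              // P3: read X
        p1.start(); p3.start();              // P1 and P3 run concurrently
        p1.join();                           // P2 and P5 may start only after P1

        Thread p2 = new Thread(() -> amax = Arrays.stream(a).max().getAsInt());    // P2: find Amax
        Thread p5 = new Thread(() -> Arrays.sort(b));                              // P5: arrange B in ascending order
        p2.start(); p5.start();
        p2.join(); p3.join();                // P4 needs Amax and X

        Thread p4 = new Thread(() -> y = hcf(amax, x));                            // P4: Y = HCF(Amax, X)
        p4.start();
        p4.join(); p5.join();                // P6 needs Y and the sorted B

        int[] result = Arrays.copyOf(b, b.length + 1);                             // P6: include Y in array B
        result[b.length] = y;
        Arrays.sort(result);
        System.out.println(Arrays.toString(result) + "  (Y = " + y + ")");
    }
}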

  38. Interprocess messages

  39. Advantages of message passing

  40. Implementation of interacting processes: FORK–JOIN and PARBEGIN–PAREND • FORK spawns a new process and JOIN waits for a previously created process to terminate

  41. Schematic of FORK and JOIN (figure): process 0 executes FORK twice, creating processes 1 and 2, with a join count of 3 at label J; all three flows execute JOIN at J, and execution continues past J only after all of them have joined

  42. FOR I= 1 TO 100 • READ A[I]; • M=3; • FORK LAB1; • FORK LAB2; • GOTO LAB3; • LAB1: X=MIN(A); • GOTO LAB3; • LAB2: Y=MAX(A); • LAB3: JOIN M; • RESULT=Y/X;

  43. FOR I= 1 TO 100 • READ A[I]; • PARBEGIN • X=MIN(A); • Y=MAX(A); • PAREND • RESULT=Y/X;
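
The parbegin–parend fragment above maps naturally onto Java threads: the two branches run concurrently and the main flow waits for both before computing the result. The array contents here simply stand in for the values read by "read A[i]".

import java.util.Arrays;

public class MinMaxDemo {
    static double x, y;

    public static void main(String[] args) throws InterruptedException {
        double[] a = new double[100];
        for (int i = 0; i < 100; i++) a[i] = i + 1;                                  // stands in for "read A[i]"

        Thread tMin = new Thread(() -> x = Arrays.stream(a).min().getAsDouble());    // X = MIN(A)
        Thread tMax = new Thread(() -> y = Arrays.stream(a).max().getAsDouble());    // Y = MAX(A)
        tMin.start(); tMax.start();          // parbegin: both branches run concurrently
        tMin.join(); tMax.join();            // parend / JOIN: wait for both branches

        System.out.println("RESULT = " + (y / x));                                   // RESULT = Y/X
    }
}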

  44. Signals • A process can send signals to other processes to convey exceptional situations • A process must anticipate signals from other processes • It must provide a signal handler for each such signal. It specifies the handler through a system call. The kernel notes information about the signal handler • The kernel activates the signal handler when a signal is sent to the process • Schematic of signals on the next slide

  45. Signal handling by process Pi: (a) Initialization, (b) Signal processing

  46. Processes in Unix • A process operates in two modes---user mode and kernel mode. • When a process makes a system call, it enters the kernel mode and itself executes the kernel code for handling the system call • It reenters the user mode after handling the system call • Hence two running states: User running and kernel running • A process in the kernel running state is not preempted • A process in kernel running mode gets blocked when it makes an I/O request • Kernel code is written in a reentrant manner so that a process can enter the kernel mode even if other processes are blocked in that mode

  47. Process state transitions in Unix

  48. Threads in Solaris • Provides three entities for concurrency: • User threads: Managed by a threads library • Lightweight processes (LWPs): A unit of parallelism within a process. The threads library maps user threads into LWPs. Several LWPs may be created within a process. • Kernel threads: A kernel thread is associated with each LWP. The kernel also creates some kernel threads for its own use, e.g., a thread to handle disk I/O. • Mapping between threads and LWPs influences parallelism (see Hybrid models’ schematic)

  49. Processes and threads in Linux • Linux supports kernel-level threads • Threads and processes are treated alike except at creation • A thread shares the information about memory management, current directory, open files and signal handlers of its parent process; a process does not share any information of its parent • A thread or process contains information about • Its parent • Its deemed parent, to whom its termination should be reported

  50. Processes and threads in Linux • Process and thread states: • TASK_RUNNING: scheduled or waiting to be scheduled • TASK_INTERRUPTIBLE: sleeping on an event, but may receive a signal • TASK_UNINTERRUPTIBLE: sleeping and may not receive a signal • TASK_STOPPED: operation has been stopped by a signal • TASK_ZOMBIE: operation completed, but its parent has not issued a system call to check whether it has terminated • Interruptibility simplifies implementation of signals
