
Operating Systems CMPSCI 377 Lecture 5: Threads & Scheduling


Presentation Transcript


  1. Operating Systems CMPSCI 377 Lecture 5: Threads & Scheduling Emery Berger, University of Massachusetts Amherst

  2. Last Time: Processes • Process = unit of execution • Process control blocks • Process state, scheduling info, etc. • New, Ready, Waiting, Running, Terminated • One at a time (on uniprocessor) • Change by context switch • Multiple processes: • Communicate by message passing or shared memory

  3. This Time: Threads & Scheduling • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does OS schedule threads?

  4. Processes versus Threads • Process = • Control + address space + resources • fork() • Thread = • Control only • PC, stack, registers • pthread_create() • One process may contain many threads
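A minimal sketch contrasting the two creation calls this slide names: fork() creates a whole new process (control + address space + resources), while pthread_create() creates only a new stream of control inside the current process. The function name thread_body and the printed messages are illustrative, not from the lecture.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/wait.h>

    /* Runs in the SAME address space as main(): control only. */
    void * thread_body (void * arg) {
      printf ("thread: pid = %d (same process as main)\n", (int) getpid ());
      return NULL;
    }

    int main (void) {
      /* fork(): control + address space + resources. */
      pid_t pid = fork ();
      if (pid == 0) {
        printf ("child:  pid = %d (a separate process)\n", (int) getpid ());
        _exit (0);
      }
      waitpid (pid, NULL, 0);

      /* pthread_create(): control only (PC, stack, registers). */
      pthread_t t;
      pthread_create (&t, NULL, thread_body, NULL);
      pthread_join (t, NULL);
      return 0;
    }

Compiled with gcc -pthread, the thread reports the same pid as main, while the forked child reports a new one.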

  5. Threads Diagram • Address space in process: shared among threads • Cheaper, faster communication than IPC

  6. Threads Example, C/C++ • POSIX threads standard

    #include <pthread.h>
    #include <stdio.h>

    /* Stand-in for the slide's expensive computation. */
    long expensiveComputation (long i) { return i; }

    void * run (void * d) {
      long q = (long) d;
      long v = 0;
      for (long i = 0; i < q; i++) {
        v = v + expensiveComputation (i);
      }
      return (void *) v;
    }

    int main (void) {
      pthread_t t1, t2;
      void * r1, * r2;
      pthread_create (&t1, NULL, run, (void *) 100);
      pthread_create (&t2, NULL, run, (void *) 100);
      pthread_join (t1, &r1);
      pthread_join (t2, &r2);
      printf ("r1 = %ld, r2 = %ld\n", (long) r1, (long) r2);
      return 0;
    }

  7. Threads Example, Java

    import java.lang.*;

    class Worker extends Thread implements Runnable {
      public Worker (int q) { this.q = q; this.v = 0; }
      public void run () {
        for (int i = 0; i < q; i++) {
          v = v + i;
        }
      }
      public int v;
      private int q;
    }

    public class Example {
      public static void main (String args[]) {
        Worker t1 = new Worker (100);
        Worker t2 = new Worker (100);
        try {
          t1.start ();
          t2.start ();
          t1.join ();
          t2.join ();
        } catch (InterruptedException e) {}
        System.out.println ("r1 = " + t1.v + ", r2 = " + t2.v);
      }
    }

  8. Classifying Threaded Systems • Classify by one or many address spaces × one or many threads per address space • One address space, one thread: MS-DOS

  9. Classifying Threaded Systems • One or many address spaces, one or many threads per address space • One address space, one thread: MS-DOS • One address space, many threads: embedded systems

  10. Classifying Threaded Systems • One or many address spaces, one or many threads per address space • One address space, one thread: MS-DOS • One address space, many threads: embedded systems • Many address spaces, one thread each: UNIX, Ultrix, MacOS (< X), Win95

  11. Classifying Threaded Systems • One or many address spaces, one or many threads per address space • One address space, one thread: MS-DOS • One address space, many threads: embedded systems • Many address spaces, one thread each: UNIX, Ultrix, MacOS (< X), Win95 • Many address spaces, many threads each: Mach, Linux, Solaris, WinNT

  12. This Time: Threads • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does OS schedule threads?

  13. Kernel Threads • Kernel threads: scheduled by OS • A.k.a. lightweight processes (LWPs) • Switching threads requires context switch • PC, registers, stack pointers • BUT: no memory-management switch = no TLB “shootdown” • Switching faster than for processes • Hide latency (don’t block on I/O) • Can be scheduled on multiple processors

  14. User-Level Threads • No OS involvement w/ user-level threads • OS only knows about the process containing the threads • Use thread library to manage threads • Creation, synchronization, scheduling • Example: Java green threads • Cannot be scheduled on multiple processors
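A minimal sketch of the user-level idea, assuming the POSIX ucontext calls (getcontext, makecontext, swapcontext) stand in for the thread library: the switch is an ordinary library call in user space, the kernel only ever sees one process, and the worker must yield explicitly. All names here are illustrative.

    #include <stdio.h>
    #include <ucontext.h>

    static ucontext_t main_ctx, worker_ctx;
    static char worker_stack[64 * 1024];

    /* A cooperative user-level thread: it must yield explicitly. */
    static void worker (void) {
      for (int i = 0; i < 3; i++) {
        printf ("worker step %d\n", i);
        swapcontext (&worker_ctx, &main_ctx);   /* yield back to the "scheduler" */
      }
    }

    int main (void) {
      getcontext (&worker_ctx);
      worker_ctx.uc_stack.ss_sp = worker_stack;
      worker_ctx.uc_stack.ss_size = sizeof worker_stack;
      worker_ctx.uc_link = &main_ctx;           /* where to go if worker returns */
      makecontext (&worker_ctx, worker, 0);

      for (int i = 0; i < 3; i++) {
        printf ("scheduler: dispatching worker\n");
        swapcontext (&main_ctx, &worker_ctx);   /* user-level "context switch" */
      }
      return 0;
    }

If the worker blocked in a system call instead of yielding, the whole process would stop, which is exactly the disadvantage listed two slides later.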

  15. User-Level Threads: Advantages • No context switch when switching threads • But… • Flexible: • Allow problem-specific thread scheduling policy • Computations first, service I/O second, etc. • Each process can use different scheduling algorithm • No system calls for creation, context switching, synchronization • Can be much faster than kernel threads

  16. User-Level Threads: Disadvantages • Requires cooperative threads • Must yield when done working (no quanta) • Uncooperative thread can take over • OS knows about processes, not threads: • Thread blocks on I/O: whole process stops • More threads ≠ more CPU time • Process gets same time as always • Can’t take advantage of multiple processors

  17. Solaris Threads • Hybrid model: • User-level threads mapped onto LWPs

  18. Threads Roundup • User-level threads • Cheap, simple • Not scheduled, blocks on I/O, single CPU • Requires cooperative threads • Kernel-level threads • Involves OS – time-slicing (quanta) • More expensive context switch, synch • Doesn’t block on I/O, can use multiple CPUs • Hybrid • “Best of both worlds”, but requires load balancing

  19. Load Balancing • [Diagram: user-level threads mapped by per-process thread schedulers onto LWPs/processes, which the kernel schedules onto processors] • Spread user-level threads across LWPs so each processor does same amount of work • Solaris scheduler: only adjusts load when threads block on I/O

  20. Load Balancing • Two classic approaches: work sharing & work stealing • Work sharing: give excess work away • Can waste time

  21. Load Balancing • Two classic approaches: work sharing & work stealing • Work stealing: get threads from someone else • Optimal approach • Used in Sun & IBM Java runtimes • But what about the OS?
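A toy sketch of the work-stealing idea, not the Sun or IBM runtime's actual implementation: each worker owns a deque of tasks, pops work from one end of its own deque, and when idle steals from the other end of a victim's deque. The mutex-protected deque and all names and sizes are simplifying assumptions; production work-stealing deques are lock-free.

    #include <pthread.h>
    #include <stdio.h>

    #define MAX_TASKS 8
    #define NWORKERS  2

    typedef struct {
      pthread_mutex_t lock;
      int tasks[MAX_TASKS];
      int top, bottom;                 /* live tasks occupy [bottom, top) */
    } deque_t;

    static deque_t dq[NWORKERS];

    static int pop_own (deque_t * d, int * task) {   /* owner takes from one end */
      pthread_mutex_lock (&d->lock);
      int ok = d->top > d->bottom;
      if (ok) *task = d->tasks[--d->top];
      pthread_mutex_unlock (&d->lock);
      return ok;
    }

    static int steal (deque_t * d, int * task) {     /* thief takes from the other end */
      pthread_mutex_lock (&d->lock);
      int ok = d->top > d->bottom;
      if (ok) *task = d->tasks[d->bottom++];
      pthread_mutex_unlock (&d->lock);
      return ok;
    }

    static void * worker (void * arg) {
      long id = (long) arg;
      int task;
      for (;;) {
        if (pop_own (&dq[id], &task))
          printf ("worker %ld runs its own task %d\n", id, task);
        else if (steal (&dq[1 - id], &task))         /* idle: steal from the other worker */
          printf ("worker %ld steals task %d\n", id, task);
        else
          return NULL;                               /* nothing left anywhere */
      }
    }

    int main (void) {
      for (int w = 0; w < NWORKERS; w++)
        pthread_mutex_init (&dq[w].lock, NULL);
      for (int i = 0; i < MAX_TASKS; i++)            /* all work starts on worker 0 */
        dq[0].tasks[dq[0].top++] = i;

      pthread_t t[NWORKERS];
      for (long w = 0; w < NWORKERS; w++)
        pthread_create (&t[w], NULL, worker, (void *) w);
      for (int w = 0; w < NWORKERS; w++)
        pthread_join (t[w], NULL);
      return 0;
    }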

  22. This Time: Threads • What are threads? • vs. processes • Where does OS implement threads? • User-level, kernel • How does OS schedule threads?

  23. Scheduling • Overview • Metrics • Long-term vs. short-term • Interactive vs. servers • Example algorithm: FCFS

  24. Scheduling • Multiprocessing: run multiple processes • Improves system utilization & throughput • Overlaps I/O and CPU activities

  25. Scheduling Processes • Long-term scheduling: • How does OS determine degree of multiprogramming? • Number of jobs executing at once • Short-term scheduling: • How does OS select program from ready queue to execute? • Policy goals • Policy options • Implementation considerations

  26. Short-Term Scheduling • Kernel runs scheduler at least: • When process switches from running to waiting • On interrupts • When processes are created or terminated • Non-preemptive system: • Scheduler must wait for these events • Preemptive system: • Scheduler may interrupt running process

  27. Comparing Scheduling Algorithms • Important metrics: • Utilization = % of time that CPU is busy • Throughput = processes completing / time • Response time = time between ready & next I/O • Waiting time = time process spends on ready queue
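A small sketch that turns these definitions into numbers, using a made-up trace of three jobs: utilization is busy time over elapsed time, throughput is jobs completed per unit of time, and waiting time is averaged over all jobs.

    #include <stdio.h>

    int main (void) {
      /* Hypothetical trace: per-job CPU bursts and ready-queue waits, plus wall-clock time. */
      double burst[]   = {24, 3, 3};
      double waiting[] = {0, 24, 27};
      double elapsed   = 30;
      int n = 3;

      double busy = 0, wait_sum = 0;
      for (int i = 0; i < n; i++) { busy += burst[i]; wait_sum += waiting[i]; }

      printf ("utilization   = %.0f%%\n", 100 * busy / elapsed);
      printf ("throughput    = %.2f jobs / time unit\n", n / elapsed);
      printf ("avg wait time = %.1f time units\n", wait_sum / n);
      return 0;
    }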

  28. Scheduling Issues • Ideally: • Maximize CPU utilization, throughput & minimize waiting time, response time • Conflicting goals • Cannot optimize all criteria simultaneously • Must choose according to system type • Interactive systems • Servers

  29. Scheduling: Interactive Systems • Goals for interactive systems: • Minimize average response time • Time between waiting & next I/O • Provide output to user as quickly as possible • Process input as soon as received • Minimize variance of response time • Predictability often important • A higher average with low variance can beat a lower average with high variance

  30. Scheduling: Servers • Goals different than for interactive systems • Maximize throughput (jobs done / time) • Minimize OS overhead, context switching • Make efficient use of CPU, I/O devices • Minimize waiting time • Give each process same time on CPU • May increase average response time

  31. Scheduling Algorithms Roundup • FCFS: • First-Come, First-Served • Round-robin: • Use quantum & preemption to alternate jobs • SJF: • Shortest job first • Multilevel Feedback Queues: • Round robin on each priority queue • Lottery Scheduling: • Jobs get tickets • Scheduler randomly picks winner
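Of these policies, lottery scheduling is the easiest to sketch in a few lines. The job names and ticket counts below are made up; on each quantum the scheduler draws a random ticket and dispatches whichever job holds it, so a job's expected CPU share is proportional to its tickets.

    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    int main (void) {
      const char * job[] = {"A", "B", "C"};
      int tickets[]      = {50, 30, 20};     /* more tickets = bigger CPU share */
      int n = 3, total = 100;

      srand ((unsigned) time (NULL));
      for (int quantum = 0; quantum < 5; quantum++) {
        int winner = rand () % total;        /* draw one ticket uniformly at random */
        int i = 0;
        while (winner >= tickets[i]) {       /* find the job holding that ticket */
          winner -= tickets[i];
          i++;
        }
        printf ("quantum %d: run job %s\n", quantum, job[i]);
      }
      return 0;
    }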

  32. Scheduling Policies FCFS (a.k.a., FIFO = First-In, First-Out) • Scheduler executes jobs to completion in arrival order • Early version: jobs did not relinquish CPU even for I/O • Assume: • Scheduler runs when processes block on I/O • Non-preemptive

  33. FCFS Scheduling: Example • Processes arrive 1 time unit apart: average wait time in these three cases?
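A small sketch of the computation this slide asks for, assuming made-up burst lengths: jobs arrive one time unit apart, FCFS runs each to completion in arrival order, and the average waiting time falls out of one pass over the jobs.

    #include <stdio.h>

    int main (void) {
      int burst[] = {10, 2, 2};            /* hypothetical CPU bursts; job i arrives at time i */
      int n = 3;

      int clock = 0;
      double wait_sum = 0;
      for (int i = 0; i < n; i++) {
        int arrival = i;                   /* jobs arrive 1 time unit apart */
        if (clock < arrival) clock = arrival;
        wait_sum += clock - arrival;       /* time spent sitting in the ready queue */
        clock += burst[i];                 /* FCFS: run to completion, in arrival order */
      }
      printf ("average wait time = %.2f\n", wait_sum / n);
      return 0;
    }

Reordering burst[] (for example, putting the long job last) changes the answer, which is the kind of comparison the slide's three cases invite.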

  34. FCFS: Advantages & Disadvantages • Advantage: Simple • Disadvantages: • Average wait time highly variable • Short jobs may wait behind long jobs • May lead to poor overlap of I/O & CPU • CPU-bound processes force I/O-bound processes to wait for CPU • I/O devices remain idle

  35. Summary • Thread = single execution stream within process • User-level, kernel-level, hybrid • No perfect scheduling algorithm • Selection = policy decision • Base on processes being run & goals • Minimize response time • Maximize throughput • etc. • Next time: much more on scheduling
