
CS 162 Discussion Section Week 2



  1. CS 162 Discussion Section Week 2

  2. Who am I? Wesley Chow chowwesley@berkeley.edu Office Hours: 12pm-2pm Friday @ 411 Soda Does Monday 1-3 work for everyone?

  3. Thread Stack Allocation • Statically allocated • ulimit -s on Unix systems to see what yours is set to • java -Xss to set the thread stack size in the Java VM

  4. Administrivia • Get to know Nachos • Start project 1

  5. Project 1 • Can be found on the course website • Under the heading “Projects and Nachos” • Stock Nachos has an incomplete thread system. Your job is to • complete it, and • use it to solve several synchronization problems

  6. Project 1 Grading • Design docs [40 points] • First draft [10 points] • Design review [10 points] • Final design doc [20 points] • Code [60 points]

  7. Design Document • Overview of the project as a whole along with its parts • Header must contain the following info • Project name and # • Group members' names and IDs • Section # • TA name

  8. Design Document Structure Each part of the project should be explained using the following structure • Overview • Correctness Constraints • Declarations • Descriptions • Testing Plan

  9. Design Doc Length • Keep under 15 pages • Will dock points if too long!

  10. Design Reviews • Design reviews • Every member must attend • Will test that every member understands • YOU are responsible for testing your code • We provide access to a simple autograder • But your project is graded against a much more extensive autograder

  11. Project Questions?

  12. True/False • Threads within the same process share the same heap and stack. • Preemptive multithreading requires threads to give up the CPU using the yield() system call. • Despite the overhead of context switching, multithreading can provide speed-up even on a single-core CPU.

  13. New Lock Implementation: Discussion Acquire() { disable interrupts; if (value == BUSY) { put thread on wait queue; go to sleep(); // Enable interrupts? } else { value = BUSY; } enable interrupts; } • Disable interrupts: avoid being interrupted between checking and setting the lock value • Otherwise two threads could each think they hold the lock • Note: unlike the previous solution, the critical section (the code between disabling and enabling interrupts) is very short • The user of the lock can take as long as they like in their own critical section • Critical interrupts are still taken in time

  14. Interrupt Re-enable When Going to Sleep Acquire() { disable interrupts; if (value == BUSY) { put thread on wait queue; go to sleep(); } else { value = BUSY; } enable interrupts; } • What about re-enabling ints when going to sleep? • Before putting the thread on the wait queue? • Release can check the queue, see no waiter, and not wake up the thread • After putting the thread on the wait queue? • Release puts the thread on the ready queue, but the thread still thinks it needs to go to sleep • It misses the wakeup and still holds the lock (deadlock!) • Want to re-enable after sleep(). But how?
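
To make the hand-off concrete, here is a toy single-threaded model of the slide's Acquire/Release logic (all names hypothetical). Interrupt disabling is modeled by the fact that each method runs to completion without preemption, and Release hands the lock directly to a waiter rather than marking it FREE:

```python
class ToyLock:
    """Toy model of the slide's lock; methods run atomically here,
    standing in for the disable/enable-interrupts bracket."""
    BUSY, FREE = "BUSY", "FREE"

    def __init__(self):
        self.value = self.FREE
        self.wait_queue = []

    def acquire(self, thread):
        # (interrupts disabled)
        if self.value == self.BUSY:
            self.wait_queue.append(thread)  # put thread on wait queue
            return "slept"                  # go to sleep()
        self.value = self.BUSY
        return "acquired"
        # (interrupts re-enabled)

    def release(self):
        if self.wait_queue:
            # Hand the lock directly to a waiter: it wakes up already
            # holding the lock, so value stays BUSY.
            return self.wait_queue.pop(0)
        self.value = self.FREE
        return None

lock = ToyLock()
assert lock.acquire("A") == "acquired"
assert lock.acquire("B") == "slept"   # B queued while A holds the lock
assert lock.release() == "B"          # release wakes B instead of freeing
```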

  15. How to Re-enable After Sleep()? • Since ints are disabled when you call sleep: • It is the responsibility of the next thread to re-enable ints • When the sleeping thread wakes up, it returns into acquire and re-enables interrupts • Thread A: disable ints → sleep … context switch … Thread B: sleep return → enable ints → … → disable ints → yield … context switch … Thread A: yield return → enable ints

  16. Why Processes & Threads? Goals:

  17. Why Processes & Threads? Goals: • Multiprogramming: Run multiple applications concurrently • Protection: Don’t want a bad application to crash system! Solution:

  18. Why Processes & Threads? Goals: • Multiprogramming: Run multiple applications concurrently • Protection: Don’t want a bad application to crash system! Solution: • Process: unit of execution and allocation • Virtual Machine abstraction: give process illusion it owns machine (i.e., CPU, Memory, and IO device multiplexing) Challenge:

  19. Why Processes & Threads? Goals: • Multiprogramming: Run multiple applications concurrently • Protection: Don’t want a bad application to crash system! Solution: • Process: unit of execution and allocation • Virtual Machine abstraction: give process illusion it owns machine (i.e., CPU, Memory, and IO device multiplexing) Challenge: • Process creation & switching expensive • Need concurrency within same app (e.g., web server) Solution:

  20. Why Processes & Threads? Goals: • Multiprogramming: Run multiple applications concurrently • Protection: Don’t want a bad application to crash system! Solution: • Process: unit of execution and allocation • Virtual Machine abstraction: give process illusion it owns machine (i.e., CPU, Memory, and IO device multiplexing) Challenge: • Process creation & switching expensive • Need concurrency within same app (e.g., web server) Solution: • Thread: Decouple allocation and execution • Run multiple threads within same process
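
A quick illustration (in Python, not part of the slides) of "multiple threads within the same process": the threads below all mutate one shared heap object, which is exactly what separate processes could not do without extra sharing machinery:

```python
import threading

counter = {"n": 0}          # heap state, shared by every thread in the process
lock = threading.Lock()

def bump():
    for _ in range(10000):
        with lock:          # protect the shared data from racing updates
            counter["n"] += 1

threads = [threading.Thread(target=bump) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter["n"])  # 40000: all four threads updated the same heap object
```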

  21. Putting it together: Process (Unix) A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); … • Sequential stream of instructions • CPU state (PC, registers, …) • Memory • Resources: I/O state (e.g., file, socket contexts)

  22. Putting it together: Processes • Switch overhead: high • CPU state: low • Memory/IO state: high • Process creation: high • Protection • CPU: yes • Memory/IO: yes • Sharing overhead: high (involves at least a context switch) [Diagram: Process 1 … Process N, each with its own memory, I/O state, and CPU state; the OS CPU scheduler runs 1 process at a time on a 1-core CPU]

  23. Putting it together: Threads • Switch overhead: low (only CPU state) • Thread creation: low • Protection • CPU: yes • Memory/IO: no • Sharing overhead: low (thread switch overhead is low) [Diagram: Process 1 … Process N, each with shared memory and I/O state but one CPU state per thread; the OS CPU scheduler runs 1 thread at a time on a 1-core CPU]

  24. Putting it together: Multi-Cores • Switch overhead: low (only CPU state) • Thread creation: low • Protection • CPU: yes • Memory/IO: no • Sharing overhead: low (thread switch overhead is low) [Diagram: same processes and threads as before, but the CPU has cores 1-4, so 4 threads run at a time]

  25. Putting it together: Hyper-Threading • Switch overhead between hardware threads: very low (done in hardware) • Contention for the cache may hurt performance [Diagram: 4 cores with 2 hardware threads (hyperthreading) each, so 8 threads run at a time]

  26. Thread State • State shared by all threads in a process/addr space • Content of memory (global variables, heap) • I/O state (file system, network connections, etc.) • State “private” to each thread • Kept in the TCB (Thread Control Block) • CPU registers (including the program counter) • Execution stack – what is this? • Execution Stack • Parameters, temporary variables • Return PCs are kept while called procedures are executing
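
The shared-vs-private split can be sketched as data structures (a hypothetical illustration; field names are made up, not Nachos API):

```python
from dataclasses import dataclass, field

@dataclass
class TCB:
    """Per-thread private state: the Thread Control Block."""
    tid: int
    pc: int = 0                                     # saved program counter
    registers: dict = field(default_factory=dict)   # saved CPU registers
    stack: list = field(default_factory=list)       # execution stack frames

# Shared, process-wide state lives outside any TCB:
heap = {}        # content of memory: globals, heap
open_files = []  # I/O state: file system, network connections, ...

t0 = TCB(tid=0)
# A pushed stack frame holds parameters, temporaries, and the return PC:
t0.stack.append({"func": "A", "params": {"tmp": 1}, "ret_pc": 0x42})
```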

  27. Dispatch Loop • Conceptually, the dispatching loop of the operating system looks as follows: Loop { RunThread(); ChooseNextThread(); SaveStateOfCPU(curTCB); LoadStateOfCPU(newTCB); } • This is an infinite loop • One could argue that this is all that the OS does
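
The dispatch loop above can be simulated in a few lines (a toy round-robin model, with the infinite loop bounded so it terminates; save/load of CPU state is left implicit):

```python
# Fake TCBs: each "thread" just advances its program counter per quantum.
tcbs = [{"tid": i, "pc": 0} for i in range(3)]
cur = 0
trace = []

def run_thread(tcb):
    tcb["pc"] += 1            # "execute" one quantum of the thread
    trace.append(tcb["tid"])

for _ in range(6):            # the real loop is infinite; bounded here
    run_thread(tcbs[cur])             # RunThread()
    cur = (cur + 1) % len(tcbs)       # ChooseNextThread(), round-robin

print(trace)  # [0, 1, 2, 0, 1, 2]
```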

  28. Recall, an OS needs to mediate access to resources: how do we share the CPU? • Strategy 1: force everyone to cooperate • a thread willingly gives up the CPU by calling yield(), which calls into the scheduler, which context-switches to another thread • what if a thread never calls yield()? • Strategy 2: use preemption • at a timer interrupt, the scheduler gains control and context switches as appropriate
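
Strategy 1 (cooperation) can be modeled with generators, where a Python `yield` literally stands in for the thread's yield() call back to the scheduler (a toy model, not how a real OS runs threads):

```python
def thread(name, steps, log):
    """A cooperative 'thread': does one step of work, then yields the CPU."""
    for i in range(steps):
        log.append((name, i))
        yield                 # willingly give up the CPU to the scheduler

log = []
ready = [thread("S", 2, log), thread("T", 2, log)]   # ready queue

while ready:                  # round-robin scheduler loop
    t = ready.pop(0)
    try:
        next(t)               # run the thread until its next yield
        ready.append(t)       # it yielded: back onto the ready queue
    except StopIteration:
        pass                  # thread finished; drop it

print(log)  # [('S', 0), ('T', 0), ('S', 1), ('T', 1)]
```

Note that a thread which never yields would monopolize this loop forever, which is exactly why strategy 2 (timer-interrupt preemption) exists.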

  29. From Lecture: Two Thread Yield • Consider the following code blocks: proc A() { B(); } proc B() { while(TRUE) { yield(); } } • Suppose we have 2 threads: Threads S and T [Diagram: the stacks of Thread S and Thread T, each growing through A → B(while) → yield → run_new_thread → switch]

  30. Detour: Interrupt Controller • Interrupts invoked with interrupt lines from devices • Interrupt controller chooses which interrupt request to honor • Mask enables/disables interrupts • Priority encoder picks the highest-priority enabled interrupt • Software interrupt: set/cleared by software • Interrupt identity specified with the ID line • CPU can disable all interrupts with an internal flag • Non-maskable interrupt line (NMI) can't be disabled [Diagram: Network, Timer, and Software Interrupt lines feed the Interrupt Mask, then the Priority Encoder, which presents an IntID to the CPU; Int Disable and NMI act at the CPU itself]
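
The mask + priority-encoder behavior can be captured as a small pure function (a toy model; real controllers differ, and here lower line number means higher priority, with line 0 playing the NMI):

```python
def pick_interrupt(pending, mask, nmi_line=0):
    """Return the highest-priority pending interrupt that is enabled,
    or None. The mask *enables* lines; the NMI ignores the mask."""
    for line in sorted(pending):          # lowest number = highest priority
        if line == nmi_line or line in mask:
            return line
    return None

assert pick_interrupt({3, 5}, mask={3, 5}) == 3   # highest priority wins
assert pick_interrupt({3, 5}, mask={5}) == 5      # line 3 masked off
assert pick_interrupt({0, 5}, mask=set()) == 0    # NMI can't be disabled
assert pick_interrupt({5}, mask=set()) is None    # everything else masked
```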

  31. Short Answers • What is the OS data structure that represents a running process? • What are some of the similarities and differences between interrupts and system calls? What roles do they play in preemptive and non-preemptive multithreading?

  32. Questions / Examples about Process and Thread?

  33. Review: Execution Stack Example A(int tmp) { if (tmp<2) B(); printf(tmp); } B() { C(); } C() { A(2); } A(1); exit; • Stack holds function arguments, return address • Permits recursive execution • Crucial to modern languages [Animation, slides 33-47: the stack pointer moves as frames are pushed and popped]

  34-36. A(1) is called: the stack holds A: tmp=1, ret=addrZ

  37-38. A(1) calls B(): push B: ret=addrY

  39-40. B() calls C(): push C: ret=addrU

  41-42. C() calls A(2): push A: tmp=2, ret=addrV

  43. A(2) executes printf — Output: 2

  44. A(2) returns: its frame is popped

  45. C() returns: its frame is popped

  46. B() returns; A(1) executes printf — Output: 2 1

  47. A(1) returns: the stack is empty — Output: 2 1
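
The A/B/C example runs as-is once translated out of pseudocode; the call stack's push/pop order is what makes A(2) print before A(1) (Python here, with a list standing in for printf):

```python
out = []

def A(tmp):
    if tmp < 2:
        B()              # A(1) recurses through B and C before printing
    out.append(tmp)      # stands in for printf(tmp)

def B():
    C()

def C():
    A(2)                 # innermost call: its frame is popped first

A(1)
print(out)  # [2, 1] — A(2) prints on the way down, A(1) while unwinding
```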
