
CPS110: Implementing threads




  1. CPS110: Implementing threads Landon Cox

  2. Recap and looking ahead (a layered view, top to bottom): applications; threads and synchronization primitives; atomic load-store, interrupt enable/disable, atomic test-and-set; OS; hardware. Where we’ve been: the layers above the OS. Where we’re going: the OS and the hardware.

  3. Recall, thread interactions • Threads can access shared data • E.g., use locks, monitors • What we’ve done so far • Threads also share hardware • CPU and memory • For this class, assume uni-processor • Single CPU core: one thread runs at a time • Unrealistic in the multicore era!

  4. Hardware, OS interfaces. [Diagram: applications run as jobs (Job 1, Job 2, Job 3), each given its own virtual CPU and memory by the OS, which multiplexes the hardware’s real CPU and memory. Thread lectures up to this point covered the application layer; the remaining thread lectures cover the CPU side of the OS; the memory lectures cover the memory side.]

  5. The play analogy • Process is like a play performance • Program is like the play’s script • One CPU is like a one-man show • (actor switches between roles) • Threads are like the actors; the address space is what they share

  6. Threads that aren’t running • What is a non-running thread? • thread=“stream of executing instructions” • non-running thread=“paused execution” • Blocked/waiting, or suspended but ready • Must save thread’s private state • Leave stack etc. in memory where it lies • Save registers to memory • Reload registers to resume thread

  7. Private vs global thread state • What state is private to each thread? • PC (where actor is in his/her script) • Stack, SP (actor’s mindset) • What state is shared? • Code (like lines of a play) • Global variables, heap • (props on set)

  8. Thread control block (TCB) The software that manages threads and schedules/dispatches them is the thread system or “OS” OS must maintain data to describe each thread • Thread control block (TCB) • Container for non-running thread’s private data • Values of PC, SP, other registers (“context”) • Each thread also has a stack Other OS data structures (scheduler queues, locks, waiting lists) reference these TCB objects.
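In C, a TCB holding the state described above might look like the following. This is a minimal sketch; the field names and layout are illustrative, not taken from any particular thread system.

```c
#include <ucontext.h>

/* Possible scheduling states for a thread (see slide 10). */
enum thread_state { RUNNING, READY, BLOCKED };

/* A minimal thread control block: a container for a non-running
   thread's private data. Illustrative, not a real system's layout. */
struct tcb {
    ucontext_t context;        /* saved PC, SP, and other registers */
    char *stack;               /* base of this thread's private stack */
    enum thread_state state;   /* RUNNING, READY, or BLOCKED */
    struct tcb *next;          /* link for the ready queue or a waiting list */
};
```

Scheduler queues, locks, and waiting lists would then be linked lists of these `struct tcb` objects, using the `next` field.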

  9. Thread control block Address Space TCB1 PC SP registers TCB2 PC SP registers TCB3 PC SP registers Ready queue Code Code Code Stack Stack Stack Thread 1 running CPU PC SP registers

  10. Thread states • Running • Currently using the CPU • Ready (suspended) • Ready to run when CPU is next available • Blocked (waiting or sleeping) • Stuck in lock (), wait () or down ()

  11. Switching threads • What needs to happen to switch threads? • Thread returns control to OS • For example, via the “yield” call • OS chooses next thread to run • OS saves state of current thread • To its thread control block • OS loads context of next thread • From its thread control block • Run the next thread Project 1: swapcontext
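The steps above can be sketched as a yield routine built on swapcontext (the call Project 1 uses). The TCB layout and the ready-queue helpers here are illustrative stand-ins for the thread system's real data structures.

```c
#include <stddef.h>
#include <ucontext.h>

struct tcb {
    ucontext_t context;   /* saved PC, SP, and other registers */
    struct tcb *next;
};

static struct tcb *current;                 /* the running thread */
static struct tcb *ready_head;              /* FIFO ready queue */
static struct tcb **ready_tail = &ready_head;

static void ready_push(struct tcb *t) {
    t->next = NULL;
    *ready_tail = t;
    ready_tail = &t->next;
}

static struct tcb *ready_pop(void) {
    struct tcb *t = ready_head;
    if (t != NULL) {
        ready_head = t->next;
        if (ready_head == NULL)
            ready_tail = &ready_head;
    }
    return t;
}

void yield(void) {
    struct tcb *prev = current;        /* 1. thread gave control to the OS */
    struct tcb *next = ready_pop();    /* 2. choose the next thread */
    if (next == NULL)
        return;                        /* no one else is ready; keep running */
    ready_push(prev);                  /* prev is still runnable, not blocked */
    current = next;
    swapcontext(&prev->context,        /* 3. save prev's registers to its TCB */
                &next->context);       /* 4+5. load next's context, jump to
                                          its saved PC */
}
```

A blocking call such as lock or wait would look the same, except the caller goes on a waiting list instead of back on the ready queue.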

  12. 1. Thread returns control to OS • How does the thread system get control? • Voluntary internal events • Thread might block inside lock or wait • Thread might call into kernel for service • (system call) • Thread might call yield • Are internal events enough?

  13. 1. Thread returns control to OS • Involuntary external events • (events not initiated by the thread) • Hardware interrupts • Transfer control directly to OS interrupt handlers • From 104 • CPU checks for interrupts while executing • Jumps to OS code with interrupt mask set • OS may preempt the running thread (force yield) when an interrupt gives the OS control of its CPU • Common interrupt: timer interrupt

  14. 2. Choosing the next thread • If no ready threads, just spin • Modern CPUs: execute a “halt” instruction • Project 1: exit if no ready threads • Loop switches to thread if one is ready • Many ways to prioritize ready threads • Will discuss a little later in the semester
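The spin-until-ready loop can be sketched as below. The ready queue and switch_to are illustrative stubs (a real switch_to would swap register contexts), and the loop is bounded only so the sketch terminates; a real dispatcher loops forever.

```c
#include <stddef.h>

struct tcb { int id; };

static struct tcb *ready_head;   /* one-slot "queue" for the sketch */
static int switches;             /* counts dispatches, for illustration */

static struct tcb *ready_pop(void) {
    struct tcb *t = ready_head;
    ready_head = NULL;           /* NULL means no thread is ready */
    return t;
}

static void switch_to(struct tcb *t) {
    (void)t;
    switches++;                  /* stand-in for a real context switch */
}

static void dispatch_loop(int max_spins) {
    while (max_spins-- > 0) {
        struct tcb *next = ready_pop();
        if (next != NULL)
            switch_to(next);     /* returns when that thread yields back */
        /* else: spin (a modern CPU could execute a halt instruction here) */
    }
}
```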

  15. 3. Saving state of current thread • What needs to be saved? • Registers, PC, SP • What makes this tricky? • Self-referential sequence of actions • Need registers to save state • But you’re trying to save all the registers • Saving the PC is particularly tricky

  16. Saving the PC
    Instruction address
    100  store PC in TCB
    101  switch to next thread
  • Why won’t this work? • Returning thread will execute the instruction at address 100 • And just re-execute the switch • Really want to save address 102

  17. 4. OS loads the next thread • Where is the thread’s state/context? • Thread control block (in memory) • How to load the registers? • Use load instructions to grab from memory • How to load the stack? • Stack is already in memory, load SP

  18. 5. OS runs the next thread • How to resume thread’s execution? • Jump to the saved PC • On whose stack are these steps running (that is, who jumps to the saved PC)? • The thread that called yield • (or was interrupted or called lock/wait) • How does this thread run again? • Some other thread must switch to it

  19. Example thread switching
  Thread 1:                    Thread 2:
    print “start thread 1”       print “start thread 2”
    yield ()                     yield ()
    print “end thread 1”         print “end thread 2”
  yield ():
    print “start yield (thread %d)”
    swapcontext (tcb1, tcb2)
      // save regs to tcb1
      // load regs from tcb2 (sp points to tcb2’s stack now!)
      // jump tcb2.pc
    print “end yield (thread %d)”   // sp must point to tcb1’s stack!
    return
  Output:
    start thread 1               (thread 1)
    start yield (thread 1)       (thread 1)
    start thread 2               (thread 2)
    start yield (thread 2)       (thread 2)
    end yield (thread 1)         (thread 1)
    end thread 1                 (thread 1)
    end yield (thread 2)         (thread 2)
    end thread 2                 (thread 2)
  Note: this assumes no pre-emptions. If the OS is preemptive, then other interleavings are possible.

  20. Thread states
  • Ready → Running: thread is scheduled
  • Running → Ready: thread is pre-empted (or yields)
  • Running → Blocked: thread calls lock or wait (or makes an I/O request)
  • Blocked → Ready: another thread calls unlock or signal (or the I/O completes)

  21. Creating a new thread • Also called “forking” a thread • Idea: create initial state, put on ready queue • Allocate, initialize a new TCB • Allocate a new stack • Make it look like thread was going to call a function • PC points to first instruction in function • SP points to new stack • Stack contains arguments passed to function • Project 1: use makecontext • Add thread to ready queue
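The recipe above can be sketched with makecontext, as in Project 1. STACK_SIZE, the TCB layout, and returning the TCB to the caller (rather than enqueueing it directly) are illustrative choices; arguments to the function would go in makecontext's trailing parameters, which this sketch leaves empty.

```c
#include <stdlib.h>
#include <ucontext.h>

#define STACK_SIZE (64 * 1024)   /* illustrative stack size */

struct tcb {
    ucontext_t context;   /* saved PC, SP, and other registers */
    char *stack;          /* base of this thread's private stack */
};

/* Allocate and initialize a TCB and stack so the new thread looks like
   it was just about to call func. The caller would then add the TCB to
   the ready queue. */
struct tcb *thread_create(void (*func)(void)) {
    struct tcb *t = malloc(sizeof *t);
    t->stack = malloc(STACK_SIZE);

    getcontext(&t->context);                 /* start from a valid context */
    t->context.uc_stack.ss_sp = t->stack;    /* SP points into the new stack */
    t->context.uc_stack.ss_size = STACK_SIZE;
    t->context.uc_link = NULL;               /* a real system would exit the
                                                thread when func returns */
    makecontext(&t->context, func, 0);       /* PC points to func's first
                                                instruction */
    return t;
}
```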

  22. Creating a new thread. [Diagram: with an ordinary call, the parent waits for the call to return; with thread_create, the parent continues (parent work) while the child runs (child work) concurrently.]

  23. Thread join • How can the parent wait for child to finish? [Diagram: the parent calls thread_create, does parent work, then calls join; the child does child work, and join returns once the child has finished.]

  24. Thread join
  parent () {
    create child thread
    print “parent works”
    yield ()
    print “parent continues”
  }
  child () {
    print “child works”
  }
  • Will this work? • Sometimes, assuming • Uni-processor • No pre-emptions • Child runs after parent • Never, ever assume these things! • Yield is like slowing the CPU • Program must work with or without any yields
  Possible outputs:
    parent works, child works, parent continues
    child works, parent works, parent continues

  25. Thread join
  parent () {
    create child thread
    lock
    print “parent works”
    wait
    print “parent continues”
    unlock
  }
  child () {
    lock
    print “child works”
    signal
    unlock
  }
  • Will this work? • No. The child can call signal first, and the signal is lost before the parent ever waits. • Would this work with semaphores? • Yes • No missed signals (a signal increments the semaphore’s value, so it is remembered even if no one is waiting yet)
  Desired outputs:
    parent works, child works, parent continues
    child works, parent works, parent continues

  26. How can we solve this? • Pair off for a couple of minutes
  parent () {
  }
  child () {
  }
  Desired outputs:
    parent works, child works, parent continues
    child works, parent works, parent continues
