
CS252: Systems Programming




Presentation Transcript


  1. CS252: Systems Programming Ninghui Li Based on Slides by Prof. Gustavo Rodriguez-Rivera Topic 10: Threads and Thread Synchronization

  2. Introduction to Threads • A thread is a path of execution. • By default, a C/C++ program has one thread, called the "main thread", that starts at the main() function. • main() { • --- • --- • printf( "hello\n" ); • --- • }

  3. Introduction to Threads • You can create multiple paths of execution using: • POSIX threads (standard): pthread_create(&thr_id, attr, func, arg) • Solaris threads: thr_create(stack, stack_size, func, arg, flags, &thr_id) • Windows: CreateThread(attr, stack_size, func, arg, flags, &thr_id)
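For reference, a minimal hedged sketch of how the POSIX call is used in practice; the worker function, its message, and the argument passing are illustrative and not part of the slides:

  #include <pthread.h>
  #include <stdio.h>

  void *worker(void *arg) {                        // thread start routine
      printf("hello from thread %d\n", *(int *)arg);
      return NULL;
  }

  int main() {
      pthread_t thr_id;
      int id = 1;
      pthread_create(&thr_id, NULL, worker, &id);  // create a second path of execution
      pthread_join(thr_id, NULL);                  // wait for it to finish
      return 0;
  }

(Compile with the -pthread flag, e.g. gcc -pthread.)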

  4. Introduction to Threads • Every thread will have its own: • Stack • PC – Program counter • Set of registers • State • Each thread will have its own function calls and local variables. • The process table entry will have a stack, set of registers, and PC for every thread in the process.

  5. Applications of Threads • Concurrent Server applications • Assume a web server that receives two requests: • First, one request from a computer connected through a modem that will take 2 minutes. • Then another request from a computer connected to a fast network that will take 0.01 seconds. • If the web server is single-threaded, the second request will be processed only after 2 minutes. • In a multi-threaded server, two threads will be created to process both requests simultaneously. The second request will be processed as soon as it arrives (see the sketch below).
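A hedged sketch of this idea: each request is served by its own thread, so a fast request is not stuck behind a slow one. The sleep() calls merely simulate a slow and a fast client; the function and variable names are illustrative:

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  void *serve(void *arg) {
      int seconds = *(int *)arg;
      sleep(seconds);                                   // simulate the time needed to serve this request
      printf("finished request that took %d s\n", seconds);
      return NULL;
  }

  int main() {
      pthread_t slow, fast;
      int slow_time = 2, fast_time = 0;
      pthread_create(&slow, NULL, serve, &slow_time);
      pthread_create(&fast, NULL, serve, &fast_time);   // finishes almost immediately
      pthread_join(slow, NULL);
      pthread_join(fast, NULL);
      return 0;
  }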

  6. Application of Threads • Taking Advantage of Multiple CPUs • A program with only one thread can use only one CPU. If the computer has multiple cores, only one of them will be used. • If a program divides the work among multiple threads, the OS will schedule a different thread in each CPU. • This will make the program run faster.
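A hedged sketch of how a program might divide work among threads so the OS can schedule them on different cores; the array, the two-way split, and the names are illustrative. Note that each thread writes only its own slot of partial[], so no synchronization is needed yet:

  #include <pthread.h>
  #include <stdio.h>

  #define N 1000000
  static int data[N];
  static long partial[2];

  void *sum_half(void *arg) {
      int half = *(int *)arg;                      // 0 = first half, 1 = second half
      long s = 0;
      for (int i = half * (N / 2); i < (half + 1) * (N / 2); i++)
          s += data[i];
      partial[half] = s;                           // each thread writes its own slot
      return NULL;
  }

  int main() {
      for (int i = 0; i < N; i++) data[i] = 1;
      pthread_t t[2];
      int idx[2] = {0, 1};
      for (int i = 0; i < 2; i++) pthread_create(&t[i], NULL, sum_half, &idx[i]);
      for (int i = 0; i < 2; i++) pthread_join(t[i], NULL);
      printf("total = %ld\n", partial[0] + partial[1]);
      return 0;
  }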

  7. Applications of Threads • Interactive Applications. • Threads simplify the implementation of interactive applications that require multiple simultaneous activities. • Assume an Internet telephone application with the following threads: • Player thread - receives packets from the internet and plays them. • Capture Thread – captures sound and sends the voice packets • Ringer Server – Receives incoming requests and tells other phones when the phone is busy. • Having a single thread doing all this makes the code cumbersome and difficult to read.

  8. Advantages and Disadvantages of Threads vs. Processes • Advantages of Threads • Fast thread creation - creating a new path of execution is faster than creating a new process with a new virtual memory address space and open file table. • Fast context switch - context switching across threads is faster than across processes. • Fast communication across threads – threads communicate using global variables, which is faster and easier than processes communicating through pipes or files.

  9. Advantages and Disadvantages of Threads vs. Processes • Disadvantages of Threads • Threads are less robust than processes – if one thread crashes due to a bug in the code, the entire application will go down. If an application is implemented with multiple processes and one process goes down, the others keep running. • Threads have more synchronization problems – since threads modify the same global variables at the same time, they may corrupt the data structures. Synchronization through mutex locks and semaphores is needed to prevent this. Processes do not have this problem because each of them has its own copy of the variables.

  10. Synchronization Problems with Multiple Threads Threads share the same global variables. Multiple threads can modify the same data structures at the same time. This can corrupt the data structures of the program. Even the simplest operations, like incrementing a counter, may have problems when running multiple threads.

  11. Example of Problems with Synchronization

  #include <pthread.h>
  #include <stdio.h>

  // Global counter
  int counter = 0;

  void *increment_loop(void *arg) {
      int max = *(int *)arg;
      for (int i = 0; i < max; i++) {
          int tmp = counter;
          tmp = tmp + 1;
          counter = tmp;
      }
      return NULL;
  }

  12. Example of Problems with Synchronization

  int main() {
      pthread_t t1, t2;
      int max = 10000000;
      void *ret;
      pthread_create(&t1, NULL, increment_loop, (void *)&max);
      pthread_create(&t2, NULL, increment_loop, (void *)&max);
      // wait until threads finish
      pthread_join(t1, &ret);
      pthread_join(t2, &ret);
      printf("counter total=%d\n", counter);
      return 0;
  }

  13. Example of Problems with Synchronization We would expect the final value of counter to be 10,000,000 + 10,000,000 = 20,000,000, but very likely the final value will be less than that (e.g., 12804354). A context switch from one thread to another can interleave the steps of the increment, so the counter may lose some of the counts.

  14. Example of Problems with Synchronization Both threads T1 and T2 run the same code on the shared counter:

  int counter = 0;   // shared by T1 and T2
  void increment_loop(int max) {
      for (int i = 0; i < max; i++) {
  a)      int tmp = counter;
  b)      tmp = tmp + 1;
  c)      counter = tmp;
      }
  }

  15. Example of Problems with Synchronization [Timeline diagram: T1 and T2 interleave steps a), b), c) on the shared counter over time.]

  16. Example of Problems with Synchronization As a result, 23 of the increments will be lost: T1 will reset the counter variable to 1 after T2 has increased it 23 times. Even if we use counter++ instead of steps a), b), c), we still have the same problem, because the compiler generates separate instructions that look like a), b), c) (sketched below). Worse things will happen to lists, hash tables, and other data structures in a multi-threaded program. The solution is to make certain pieces of the code atomic.
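To make this concrete, here is roughly what counter++ may compile down to; this is illustrative pseudo-assembly written as C comments, and the actual instructions depend on the architecture and optimization level:

  // counter++ typically becomes three separate machine steps:
  //   load   r1, [counter]    // a) read the current value into a register
  //   add    r1, r1, 1        // b) increment the register
  //   store  [counter], r1    // c) write the register back
  // A context switch between any of these steps can lose an update.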

  17. Atomicity • Atomic Section: • A portion of the code that needs to appear to the rest of the system to occur instantaneously. • Otherwise, corruption of the variables is possible. • An atomic section is also sometimes called a "critical section".

  18. Atomicity by disabling interrupts • On a uniprocessor, an operation is atomic as long as a context switch does not occur during the operation. • To achieve atomicity: disable interrupts upon entering the atomic section, and enable them upon leaving. • Context switches cannot happen with interrupts disabled. • Available only in kernel mode; only used in kernel programming. • Other interrupts may be lost. • Does not provide atomicity on a multiprocessor.

  19. Achieving Atomicity in Concurrent Programs • Our main goal is to learn how to write concurrent programs using synchronization tools • We also explain a little bit how these tools are implemented

  20. Atomicity by Mutex Locks • Mutex locks are software mechanisms that enforce atomicity • Only one thread can hold a mutex lock at a time • When a thread tries to obtain a mutex lock that is held by another thread, it is put on hold (aka put to sleep, put to wait, blocked, etc.). • The thread may be woken up when the lock is released.

  21. Mutex Locks Usage • Declaration: • #include <pthread.h> • pthread_mutex_t mutex; • Initialize • pthread_mutex_init(&mutex, attributes); • Start Atomic Section • pthread_mutex_lock(&mutex); • End Atomic Section • pthread_mutex_unlock(&mutex);
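As an aside, POSIX also allows a statically allocated mutex to be initialized without calling pthread_mutex_init, and a dynamically initialized mutex can be released with pthread_mutex_destroy. A minimal hedged sketch (the function name is illustrative):

  #include <pthread.h>

  // Static initialization: no pthread_mutex_init() call is needed for this mutex.
  pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;

  void atomic_section(void) {
      pthread_mutex_lock(&mutex);     // start atomic section
      /* ... code that must not interleave ... */
      pthread_mutex_unlock(&mutex);   // end atomic section
  }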

  22. Example of Mutex Locks

  #include <pthread.h>
  #include <stdio.h>

  int counter = 0;          // Global counter
  pthread_mutex_t mutex;

  void *increment_loop(void *arg) {
      int max = *(int *)arg;
      for (int i = 0; i < max; i++) {
          pthread_mutex_lock(&mutex);
          int tmp = counter;
          tmp = tmp + 1;
          counter = tmp;
          pthread_mutex_unlock(&mutex);
      }
      return NULL;
  }

  23. Example of Mutex Locks

  int main() {
      pthread_t t1, t2;
      int max = 10000000;
      pthread_mutex_init(&mutex, NULL);
      pthread_create(&t1, NULL, increment_loop, (void *)&max);
      pthread_create(&t2, NULL, increment_loop, (void *)&max);
      // wait until threads finish
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter total=%d\n", counter);
      return 0;
  }

  24. Example of Mutex Locks [Timeline diagram: with the mutex held, each thread's steps a), b), c) complete without interleaving with the other thread.]

  25. Example of Mutex Locks As a result, the steps a), b), c) will be atomic, so the final counter total will be 10,000,000 + 10,000,000 = 20,000,000, even if context switches happen in the middle of a), b), c).

  26. Mutual Exclusion Mutex locks enforce mutual exclusion of all code between lock and unlock:

  mutex_lock(&m)        mutex_lock(&m)
    A                     D
    B                     E
    C                     F
  mutex_unlock(&m)      mutex_unlock(&m)

  27. Mutual Exclusion This means that the sequences ABC and DEF are executed as atomic blocks, without interleaving:

  Time ------------------------------------------------>
  T1 ->  ABC          ABC
  T2 ->        DEF           DEF
  T3 ->               ABC           DEF

  28. Mutual Exclusion • If different mutex locks are used (m1 != m2), then the two sections are no longer atomic with respect to each other • ABC and DEF can interleave (see the sketch below)

  mutex_lock(&m1)        mutex_lock(&m2)
    A                      D
    B                      E
    C                      F
  mutex_unlock(&m1)      mutex_unlock(&m2)
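A hedged sketch of when separate locks make sense: two unrelated shared variables can each be protected by their own mutex, so updates to one do not serialize with updates to the other, while accesses to the same variable still exclude each other. The counters and function names are illustrative:

  #include <pthread.h>

  int hits = 0, misses = 0;                             // two independent shared counters
  pthread_mutex_t m1 = PTHREAD_MUTEX_INITIALIZER;       // protects hits
  pthread_mutex_t m2 = PTHREAD_MUTEX_INITIALIZER;       // protects misses

  void record_hit(void) {
      pthread_mutex_lock(&m1);                          // excludes only other m1 sections
      hits++;
      pthread_mutex_unlock(&m1);
  }

  void record_miss(void) {
      pthread_mutex_lock(&m2);                          // can interleave freely with record_hit()
      misses++;
      pthread_mutex_unlock(&m2);
  }

  int main() { record_hit(); record_miss(); return 0; }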

  29. Atomicity by Spin Locks • Spin locks make a thread "spin" (busy wait) until the lock is released, instead of putting the thread in the waiting state. Why do this? • Using a mutex blocks a thread if it fails to obtain the lock and later unblocks it; this has overhead. • If the lock will be available soon, then it is better to busy wait. • Spin locks can provide better performance when locks are held for a short period of time (a POSIX spin-lock sketch follows below).
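For reference, POSIX itself provides a spin-lock API; a hedged sketch of the counter loop written with it (availability varies by platform, and the iteration count here is illustrative):

  #include <pthread.h>
  #include <stdio.h>

  int counter = 0;
  pthread_spinlock_t spin;

  void *increment_loop(void *arg) {
      int max = *(int *)arg;
      for (int i = 0; i < max; i++) {
          pthread_spin_lock(&spin);        // busy-waits instead of blocking
          counter++;
          pthread_spin_unlock(&spin);
      }
      return NULL;
  }

  int main() {
      pthread_t t1, t2;
      int max = 1000000;
      pthread_spin_init(&spin, PTHREAD_PROCESS_PRIVATE);
      pthread_create(&t1, NULL, increment_loop, &max);
      pthread_create(&t2, NULL, increment_loop, &max);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      pthread_spin_destroy(&spin);
      printf("counter total=%d\n", counter);
      return 0;
  }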

  30. Example of Spin Locks

  #include <pthread.h>

  int counter = 0;   // Global counter
  int m = 0;         // the spin lock variable

  void increment_loop(int max) {
      for (int i = 0; i < max; i++) {
          spin_lock(&m);
  a)      int tmp = counter;
  b)      tmp = tmp + 1;
  c)      counter = tmp;
          spin_unlock(&m);
      }
  }

  31. Spin Locks Example

  32. Spin Locks vs. Mutex • http://stackoverflow.com/questions/5869825/when-should-one-use-a-spinlock-instead-of-mutex • On a single CPU, it makes no sense to use spin locks • Why? • Spin locks can be useful on a multi-core/multi-CPU system when locks are typically held for a short period of time. • In kernel code, spin locks can be useful for code that cannot be put to sleep (e.g., interrupt handlers).

  33. Implementing Mutex Locks using Spin Locks (pseudocode)

  mutex_lock(mutex) {
      spin_lock();
      if (mutex.lock) {                    // already held: put this thread to sleep
          mutex.queue(currentThread);
          spin_unlock();
          setWaitState();
          GiveUpCPU();
      } else {
          mutex.lock = true;               // acquire the mutex
          spin_unlock();
      }
  }

  mutex_unlock() {
      spin_lock();
      if (mutex.queue.nonEmpty) {          // wake up one waiting thread
          t = mutex.dequeue();
          t.setReadyState();
      } else {
          mutex.lock = false;
      }
      spin_unlock();
  }

  34. Test_and_set There is an instruction test_and_set that is guaranteed to be atomic. Pseudocode:

  int test_and_set(int *v) {
      int oldval = *v;
      *v = 1;
      return oldval;
  }

  This instruction is implemented by the CPU. You don't need to implement it.

  35. A Semi-Spin Lock Implemented Using test_and_set

  int lock = 0;

  void spinlock(int *lock) {
      // keep retrying until the previous value was 0, i.e., we acquired the lock
      while (test_and_set(lock) != 0) {
      }
  }

  void spinunlock(int *lock) {
      *lock = 0;
  }
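As a point of comparison, a hedged sketch of the same idea in standard C11, where atomic_flag_test_and_set plays the role of the hardware test_and_set instruction (this is not the slide's implementation, just a portable user-level equivalent):

  #include <stdatomic.h>

  atomic_flag lock_flag = ATOMIC_FLAG_INIT;   // clear = unlocked

  void spinlock(atomic_flag *l) {
      while (atomic_flag_test_and_set(l)) {   // atomically sets the flag and returns the old value
          /* spin until the old value was clear */
      }
  }

  void spinunlock(atomic_flag *l) {
      atomic_flag_clear(l);                   // release the lock
  }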

  36. Review Questions What does the system need to maintain for each thread? Why would one want to use multiple threads? What are the pros and cons of using threads vs. processes? What is an atomic section? Why does disabling interrupts ensure atomicity on a single-CPU machine?

  37. Review Questions What is the meaning of the "test and set" primitive? What is a mutex lock? What are the semantics of the lock and unlock calls on a mutex lock? How can mutex locks be used to achieve atomicity? The exam does not require spin locks or the implementation of mutex locks.
