
Distributed (Operating) Systems - Processes and Threads -

Computer Engineering Department, Distributed Systems Course. Asst. Prof. Dr. Ahmet Sayar, Kocaeli University, Fall 2012. Topics: processes and threads; process management and scheduling.

Presentation Transcript


  1. Distributed (Operating) Systems -Processes and Threads- Computer Engineering Department Distributed Systems Course Asst. Prof. Dr. Ahmet Sayar • Kocaeli University - Fall 2012

  2. Processes and Threads • Process management and scheduling • Process: executing context of a program • Threads • Distributed scheduling/migration • Can help in achieving scalability, and can also help to dynamically configure clients and servers • Multiprogramming versus multiprocessing • Multiprogramming: ability to run more than one process on a machine; multiple applications are loaded, but only one may be executing on the core at any instant • Multiprocessing: the machine has more than one CPU core or processor, enabling multiple processes to run in parallel at the same time

  3. Processes: Review • A process is a program in execution • Kernel data structure: process control block (PCB) • Keeps track of what the process is doing, when it started, where it resides, which files it has open, which sockets it has open • Each process has an address space • Contains code, global and local variables, ... • Code and data segments are paged in and out of memory • Process state transitions • When a process is created, its state is new • When the process is loaded into memory, its state becomes ready • When the process is selected by the CPU scheduler, its state becomes running • When the process makes a blocking I/O call, it enters the waiting state • When the process finishes, it enters the terminated state
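The state transitions listed above can be made concrete with a small sketch (not part of the original slides; the state names and legal transitions follow the slide's wording):

```python
# Five-state process lifecycle from the slide, as an explicit transition table.
ALLOWED = {
    "new":        {"ready"},                           # admitted / loaded into memory
    "ready":      {"running"},                         # picked by the CPU scheduler
    "running":    {"ready", "waiting", "terminated"},  # preempted / blocking I/O / exit
    "waiting":    {"ready"},                           # I/O completes
    "terminated": set(),                               # no transitions out
}

def transition(state, new_state):
    """Move a process to new_state, rejecting illegal transitions."""
    if new_state not in ALLOWED[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# A typical lifetime: created, loaded, scheduled, blocks on I/O, resumes, exits.
s = "new"
for nxt in ["ready", "running", "waiting", "ready", "running", "terminated"]:
    s = transition(s, nxt)
print(s)  # terminated
```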

  4. Processes: Review • Uniprocessor scheduling algorithms • Round-robin, shortest job first, FIFO, lottery scheduling • CPU-bound processes • I/O-bound processes • Performance metrics: throughput, CPU utilization, turnaround time, response time, fairness

  5. Context Switching • Switching the CPU between two processes is computationally expensive • CPU context • Register values, program counter, stack pointer, etc. • The CPU and other hardware devices are shared by the processes • The OS keeps a process table • Entries store CPU register values, memory maps, open files, privileges, etc. • If the OS supports more processes than it can simultaneously hold in main memory, it may have to swap processes between main memory and disk before the actual switch can take place

  6. Context Switching

  7. Process Scheduling • Priority queues: multiple queues, each with a different priority • Use strict priority scheduling • Example: page swapper, kernel tasks, real-time tasks, user tasks • Multi-level feedback queue • Multiple queues with priorities • Processes dynamically move from one queue to another • Depending on priority/CPU characteristics • Gives higher priority to I/O-bound or interactive tasks • Lower priority to CPU-bound tasks • Round-robin at each level • Preemptive vs. non-preemptive scheduling • FCFS is non-preemptive scheduling • Round-robin is preemptive scheduling
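As a sketch of the round-robin policy named above (the burst times and quantum are invented for illustration, not taken from the slides):

```python
from collections import deque

def round_robin(bursts, quantum):
    """Simulate round-robin scheduling.

    bursts maps pid -> remaining CPU time. Each process runs for at most
    one quantum, then is preempted and sent to the back of the ready queue.
    Returns the order in which processes finish.
    """
    queue = deque(bursts.items())
    finished = []
    while queue:
        pid, remaining = queue.popleft()
        remaining -= min(quantum, remaining)   # run for one time slice
        if remaining == 0:
            finished.append(pid)               # process is done
        else:
            queue.append((pid, remaining))     # preempted: back of the queue
    return finished

print(round_robin({"A": 5, "B": 2, "C": 3}, quantum=2))  # ['B', 'C', 'A']
```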

  8. Example of Processes and Threads • Processes • An Excel worksheet, an email client, and a browser all running together • Threads • In an Excel worksheet, changing the value of a single cell can trigger a large series of computations. While Excel performs those computations, the user can keep entering other values.

  9. Processes and Threads • Traditional process • One thread of control through a large, potentially sparse address space • Address space may be shared with other processes (shared memory) • Collection of system resources (files, semaphores) • Thread (lightweight process) • A flow of control through an address space • Each address space can have multiple concurrent control flows • Each thread has access to the entire address space • Potentially parallel execution, minimal state (low overhead) • May need synchronization to control access to shared variables
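The last point, threads sharing one address space and therefore needing synchronization, can be illustrated with Python's threading module (used here purely for illustration; note that CPython's GIL prevents true parallel execution of Python bytecode, but the shared-state and locking behavior is the same):

```python
import threading

counter = 0                      # shared variable: visible to every thread
lock = threading.Lock()

def worker(n):
    """Increment the shared counter n times, under a lock."""
    global counter
    for _ in range(n):
        with lock:               # synchronize access to shared state
            counter += 1

# Four threads, all updating the same global in one address space.
threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000
```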

  10. Threads • A process has many threads sharing its execution environment • Each thread has its own stack, PC, registers • Threads share the address space, open files, ...

  11. Why Use Threads? • Large multiprocessor/multi-core systems need many computing entities (one per CPU or core) • Switching between processes incurs high overhead • With threads, an application can avoid per-process overheads • Thread creation, deletion, and switching are cheaper than for processes • Threads have full access to the address space (easy sharing) • Threads can execute in parallel on multiprocessors

  12. Why Threads? • Single-threaded process: blocking system calls, no parallelism • Finite-state machine [event-based]: non-blocking calls with parallelism • Multi-threaded process: blocking system calls with parallelism • Threads retain the idea of sequential processes with blocking system calls, and yet achieve parallelism • Software engineering perspective • Applications are easier to structure as a collection of threads • Each thread performs several [mostly independent] tasks

  13. Thread Management • Creation and deletion of threads • Static versus dynamic • Critical sections • Synchronization primitives: blocking, spin-lock (busy-wait) • Condition variables • Global thread variables • Kernel versus user-level threads

  14. User-level versus kernel threads • Key issues: • Cost of thread management • More efficient in user space • Ease of scheduling • Flexibility: many parallel programming models and schedulers • Process blocking – a potential problem

  15. User-level Threads • Threads are managed by a threads library • The kernel is unaware of the presence of threads • Advantages: • No kernel modifications needed to support threads • Efficient: creation/deletion/switches don't need system calls • Flexibility in scheduling: the library can use different scheduling algorithms, which can be application dependent • Disadvantages: • Blocking system calls block the whole process [all threads block], so they must be avoided • Threads within the process compete with one another for the CPU • Does not take advantage of multiprocessors [no real parallelism]

  16. Kernel-level Threads • The kernel is aware of the presence of threads • Better scheduling decisions, but more expensive • Better for multiprocessors; more overhead on uniprocessors • A blocking system call made by one thread is no problem (the other threads keep running) • Loss of efficiency (all thread operations are performed as system calls) • Conclusion: try to mix user-level and kernel-level threads into a single concept

  17. Light-weight Processes (Hybrid Approach) • Combining kernel-level lightweight processes and user-level threads

  18. Light-weight Processes (LWP) • Several LWPs per heavy-weight process • User-level threads package • Creates/destroys threads and provides synchronization primitives • Multithreaded applications create multiple threads and assign threads to LWPs (one-to-one, many-to-one, many-to-many) • When a thread calls a blocking user-level operation (e.g., blocks on a mutex or a condition variable) and another runnable thread is found, a context switch is made to that thread, which is then bound to the same LWP • When a thread makes a blocking system-level call, its LWP blocks, and the thread remains bound to that LWP. The kernel schedules another LWP that has a runnable thread bound to it, which also implies a context switch back to user mode. The selected LWP simply continues where it previously left off.

  19. Thread Packages • POSIX Threads (pthreads) • Widely used threads package • Conforms to the POSIX standard • Sample calls: pthread_create, ... • Typically used in C/C++ applications • Can be implemented as user-level or kernel-level threads, or via LWPs • Java Threads • Native thread support built into the language • Threads are scheduled by the JVM

  20. Quiz • At any specific time, how many threads can be running in the system? • Is the question well posed? Correct the question, then answer it. • What does a thread need when it is created? • Let's assume there is a sequential process that cannot be divided into parallel tasks. Do you think it can still make use of threads?

  21. Threads in Distributed Systems • Threads allow clients and servers to be constructed so that communication and local processing can overlap, resulting in a high level of performance • They provide a convenient means of making blocking system calls without blocking the entire process in which the thread is running • The main idea is to exploit parallelism to attain high performance (especially when executing a program on a multiprocessor system) • Multithreaded clients • Multithreaded servers

  22. Multi-threaded Client Example • Main issue: hiding network latency • Browsers such as IE are multi-threaded • Such browsers can display data before the entire document is downloaded: they perform multiple simultaneous tasks • Fetch the main HTML page, then activate separate threads for the other parts • Each thread sets up a separate connection to the server • Uses blocking calls • Each part (e.g., a GIF image) is fetched separately and in parallel • Advantage: connections can be set up to different sources • Ad server, image server, web server, ...
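A minimal sketch of this client pattern, with the blocking network fetch replaced by a stand-in function (no real connections are made; the part names are invented for illustration):

```python
import threading

parts = ["index.html", "logo.gif", "ad.js"]   # parts of one page, different sources
fetched = []
lock = threading.Lock()

def fetch(part):
    """Stand-in for a blocking network call: each thread 'opens' its own
    connection and 'downloads' one part of the page."""
    data = f"<contents of {part}>"
    with lock:
        fetched.append((part, data))          # hand the part to the renderer

# One thread per part, so the blocking fetches overlap.
threads = [threading.Thread(target=fetch, args=(p,)) for p in parts]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len(fetched))  # 3
```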

  23. Multi-threaded Server Example • Main issue: improved performance and better structure • Apache web server: pool of pre-spawned worker threads • A dispatcher thread waits for requests • For each request, it chooses an idle worker thread • The worker thread uses blocking system calls to service the web request
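The dispatcher/worker structure can be sketched with Python's threading and queue modules (a toy model, not Apache's actual implementation; the request strings are invented):

```python
import queue
import threading

requests = queue.Queue()          # dispatcher hands requests to idle workers
results = []
results_lock = threading.Lock()

def worker():
    """Worker thread: blocks waiting for a request, then services it."""
    while True:
        req = requests.get()      # blocking call, as on the slide
        if req is None:           # sentinel: shut the worker down
            break
        with results_lock:
            results.append(f"served {req}")
        requests.task_done()

# Pre-spawn a small pool of workers, as Apache-style servers do.
pool = [threading.Thread(target=worker) for _ in range(3)]
for t in pool:
    t.start()

for i in range(6):                # the "dispatcher" side: enqueue requests
    requests.put(f"req-{i}")
requests.join()                   # wait until every request is serviced

for _ in pool:                    # stop the workers
    requests.put(None)
for t in pool:
    t.join()
print(len(results))  # 6
```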

  24. Three Ways to Construct a Server • Threads • Parallelism, blocking system calls • Single-threaded process • No parallelism, blocking system calls • How does it run? Consider a file server... • Finite-state machine • A single thread imitating parallelism, with non-blocking system calls
