
Threads

This article provides an overview of threads, processes, and scheduling in the context of operating systems. It explains the differences between threads and processes, their resource requirements, and how the scheduler manages their execution. The article also delves into the concepts of preemption, context switching, and the benefits of using threads. Examples and comparisons are given to illustrate the advantages of using threads over processes in certain scenarios.



Presentation Transcript


  1. Threads

  2. Today • Processes and Scheduling • Threads • Abstract Object Models • Computation Models • Java Support for Threads

  3. Process vs. Program • a process is the basic unit of execution • processes are managed by the OS • the OS, like any RTE, executes processes • the OS creates processes from programs and manages their resources

  4. Process resources • To keep running, a process requires at least the following resources: • the program (from which the process is created) • two blocks of memory: stack, heap • resource tables: file handles, socket handles, IO handles, window handles • control attributes: execution state, process relationships • processor state information: contents of registers, Program Counter (PC), Stack Pointer

  5. Program context • all the information the OS needs to keep track of the state of a process during its execution: • program counter • registers • memory associated with the process • (a register is a very fast memory cell inside the CPU)

  6. The Scheduler - the OS component responsible for sharing the CPU • the OS runs multiple processes simultaneously on a single CPU • the OS interleaves the execution of the active processes: • it runs a process for a little while • interrupts it and keeps its context in memory • then runs another process for a little while = a context switch

  7. Linux scheduler – task_struct

  8. Linux scheduler - main

  9. UP (uni-processor) - single-CPU system: a single process executes at any given time • SMP (symmetric multi-processor) - several CPUs / cores in a single machine: a number of processes execute concurrently

  10. Scheduler • maintains the list of active processes - the scheduler switches from one executing process to another • Two interaction paradigms between the scheduler and the currently active processes: • Non-Preemptive Scheduling • Preemptive Scheduling

  11. Non-preemptive scheduling Non-Preemptive - the process signals the OS when it is ready to relinquish the CPU: • via a special system call • or when the process waits for an external event (I/O) • the scheduler then selects the next process to execute according to a scheduling policy • e.g., pick the highest-priority process • used by DOS, Windows 3.1 and older versions of Mac OS (a small cooperative-scheduler sketch follows)
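
A minimal sketch of the non-preemptive idea in Java; the Task and CooperativeScheduler names are illustrative, not part of the slides. Each task keeps the CPU until it voluntarily returns from step():

    import java.util.ArrayList;
    import java.util.Comparator;
    import java.util.List;

    // a task relinquishes the CPU only by returning from step()
    interface Task {
        int priority();      // higher value = higher priority
        boolean step();      // run one chunk of work; return false when finished
    }

    class CooperativeScheduler {
        private final List<Task> ready = new ArrayList<>();

        void add(Task t) { ready.add(t); }

        void run() {
            while (!ready.isEmpty()) {
                // scheduling policy: pick the highest-priority ready task
                Task next = ready.stream()
                                 .max(Comparator.comparingInt(Task::priority))
                                 .get();
                if (!next.step()) ready.remove(next);   // task terminated
            }
        }
    }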

  12. Preemptive scheduling Preemptive: the scheduler interrupts the executing process • the interrupt is issued by clock hardware: • it sends an interrupt at regular intervals • the interrupt suspends the currently executing process and starts the scheduler • the process also passes control to the OS each time it invokes a system call • the scheduler selects the next process to execute according to the scheduling policy • all modern OSs support preemptive scheduling • time slice: the duration between two clock interrupts - the time a process executes uninterrupted

  13. Context Switch - the OS switches between one executing process and the next • consumes several milliseconds of processing time (on the order of 10^4 simple CPU operations) • transparent to processes (a process cannot tell it was preempted)

  14. Context Switch - steps • Timer interrupt – suspend the currently executing process, start the scheduler • Save the process context, for a later resume • Select the next process using the scheduling policy • Retrieve the next process's context • Restore the state of the new process (registers, program counter) • Flush the CPU cache (the new process has a new memory map and cannot use the old process's cache) • Resume the new process (start executing code from the instruction that was interrupted)

  15. Cache • Accessing RAM is expensive • compared to a computation step on the CPU • CPU cache: holds frequently used memory addresses • a small, very fast memory section on the CPU • a memory address copied into the CPU cache can be read/written as fast as a computation on the CPU • on a context switch the CPU cache becomes invalid – the cache is ``flushed'': cells are written back to actual RAM and cleared

  16. The video player example • The player should take the following steps to play the video: • Read video from disk • Decompress video • Decode video • Display on screen • the video is larger than the available RAM • the video is watched before the download completes

  17. Interleaving (sequential) solution • Read some video data (disk / network) • Decompress • Decode • Display on screen • Repeat until the video ends • difficult to program correctly (poor modularity) • error prone • complexity explodes as the number of concurrent activities increases (a sequential sketch follows; the slide's diagram shows the interleaving of tasks A, B, C, D)
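
A minimal sketch of the interleaved loop in Java; the SequentialPlayer name and the abstract helper methods are illustrative placeholders, not from the slides:

    // every step blocks the others: a slow disk read stalls the display,
    // and the loop grows harder to maintain with every new concurrent activity
    abstract class SequentialPlayer {
        abstract boolean hasMoreVideo();
        abstract byte[] readChunk();             // read some video data (disk / network)
        abstract byte[] decompress(byte[] raw);  // decompress
        abstract Object decode(byte[] chunk);    // decode into a frame
        abstract void display(Object frame);     // display on screen

        void play() {
            while (hasMoreVideo()) {
                display(decode(decompress(readChunk())));
            }
        }
    }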

  18. Multi-process solution • playing the movie is decomposed into several independent tasks = processes: • read movie, decompress, decode, display • + no need to control the interleaving of the tasks • - inter-process communication is required

  19. Threads - definition • a single process runs multiple tasks simultaneously (no inter-process communication, no full context switch…) • multiple threads executing concurrently • they share all the resources allocated to the process • they communicate with each other using constructs built into most modern languages • they share the memory space / each thread has its own stack • they share opened files and access rights (a minimal Java sketch follows)
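
A minimal Java sketch of two threads inside one process; the TwoThreads name is illustrative. The heap-allocated array is shared, while each thread's loop counter lives on its own stack:

    public class TwoThreads {
        public static void main(String[] args) throws InterruptedException {
            int[] shared = new int[2];   // heap: visible to both threads

            Thread a = new Thread(() -> {
                for (int i = 0; i < 1000; i++) shared[0]++;   // i lives on a's stack
            });
            Thread b = new Thread(() -> {
                for (int i = 0; i < 1000; i++) shared[1]++;   // i lives on b's stack
            });

            a.start(); b.start();
            a.join(); b.join();
            System.out.println(shared[0] + " " + shared[1]);  // prints "1000 1000"
        }
    }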

  20. Low cost context switch between threads • not all steps of a regular context switch need to be taken: • no need to restore the resource tables (the threads share them) • no need to flush the CPU cache (the threads share the memory map) • only the stacks (and registers) are switched

  21. multi-threaded solution - concurrency • single process, several threads design: • one thread reads the video stream and places chunks in a chunk queue • a second thread reads chunks from that queue, decompresses them and places them in a decompressed queue • a third thread decodes them into frames and queues them in a frame queue • a final thread takes frames and displays them on screen (a pipeline sketch follows)
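
A compact sketch of that pipeline using java.util.concurrent blocking queues; the PipelinePlayer name and the abstract helper methods are illustrative placeholders:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;

    abstract class PipelinePlayer {
        abstract byte[] readChunk();
        abstract byte[] decompress(byte[] raw);
        abstract Object decode(byte[] chunk);
        abstract void display(Object frame);

        void play() {
            BlockingQueue<byte[]> chunks = new ArrayBlockingQueue<>(16);    // chunk queue
            BlockingQueue<byte[]> unpacked = new ArrayBlockingQueue<>(16);  // decompressed queue
            BlockingQueue<Object> frames = new ArrayBlockingQueue<>(16);    // frame queue

            start(() -> chunks.put(readChunk()));                 // reader thread
            start(() -> unpacked.put(decompress(chunks.take()))); // decompressor thread
            start(() -> frames.put(decode(unpacked.take())));     // decoder thread
            start(() -> display(frames.take()));                  // display thread
        }

        private interface Step { void run() throws InterruptedException; }

        private void start(Step step) {
            new Thread(() -> {
                try { while (true) step.run(); }
                catch (InterruptedException e) { /* stop this stage */ }
            }).start();
        }
    }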

  22. Concurrency advantages • Liberation: the caller is freed from invoking a method and blocking while waiting for the reply • Reactive programming: programs do multiple things at a time, responding to input as it arrives • e.g., a video player, a GUI…

  23. Concurrency advantages • Availability: a common design pattern for service-provider programs: • one thread acts as a gateway for incoming service consumers • separate threads handle the requests • FTP server: a gateway thread handles new clients connecting, and a thread per client deals with long file transfers (a server sketch follows)
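
A minimal sketch of that gateway pattern; the class name, port number and the handleClient stub are assumptions for illustration:

    import java.io.IOException;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class GatewayServer {
        public static void main(String[] args) throws IOException {
            try (ServerSocket gateway = new ServerSocket(2121)) {
                while (true) {
                    Socket client = gateway.accept();                // gateway: accept new clients
                    new Thread(() -> handleClient(client)).start();  // one thread per client
                }
            }
        }

        private static void handleClient(Socket client) {
            // a long file transfer would run here without blocking other clients
            try { client.close(); } catch (IOException e) { /* ignored in this sketch */ }
        }
    }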

  24. Concurrency advantages • Controllability: a thread can be suspended, resumed or stopped by another thread (a small Java sketch follows) • Simplified design: software objects usually model real objects... • real-life objects are independent and parallel • designing autonomous behavior is easier than designing the sequential interleaving between objects • simpler to implement than processes
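
A minimal sketch of one thread stopping another; note that Java's Thread.suspend/resume/stop are deprecated, so the sketch uses the cooperative interrupt mechanism instead:

    public class StoppableWorker {
        public static void main(String[] args) throws InterruptedException {
            Thread worker = new Thread(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    // do a small unit of work, then re-check the interrupt flag
                }
                System.out.println("worker stopped");
            });
            worker.start();
            Thread.sleep(100);       // let the worker run for a while
            worker.interrupt();      // the controlling thread requests the stop
            worker.join();
        }
    }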

  25. Concurrency advantages • Parallelization: • on multiprocessor machines, threads execute truly concurrently, up to one thread per CPU • on a single CPU, the execution paths are interleaved • services provided by the RTE operate in a concurrent manner

  26. Concurrency limitations - resource consumption, efficiency and program complexity • Safety: multiple threads share resources - synchronization mechanisms are needed • to guarantee a consistent state, as opposed to random-looking inconsistencies and real-time debugging • Liveness: keeping a thread alive (making progress) is harder in concurrent programming • Non-determinism: executions of a concurrent program are not identical - harder to predict, understand and debug (a small safety sketch follows)
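
A small sketch of the safety issue and Java's basic fix; the class name is illustrative. Removing the synchronized keyword lets the two threads lose updates:

    public class SafeCounter {
        private int count = 0;

        synchronized void increment() { count++; }   // without synchronized, updates can be lost
        synchronized int get() { return count; }

        public static void main(String[] args) throws InterruptedException {
            SafeCounter c = new SafeCounter();
            Runnable work = () -> { for (int i = 0; i < 100_000; i++) c.increment(); };
            Thread t1 = new Thread(work), t2 = new Thread(work);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(c.get());   // 200000 only because access is synchronized
        }
    }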

  27. Concurrency limitations - resource consumption, efficiency and program complexity • Context switching overhead: not worthwhile when the job performed by a thread is small • Request/Reply programming: threads are not a good choice for purely sequential tasks • synchronizing them costs time and introduces complexity • Synchronization overhead: execution time and complexity • Process: when an activity is self-contained and heavy - encapsulate it in a separate standalone program, which can be accessed via system services

  28. Abstract Object Models • programming issues related to threads • formal model of concurrent objects • can be implemented in different RTEs or programming languages. • abstract model -> implementation in Java

  29. represent/simulate a water tank • Attributes: capacity, currentVolume, represented as fields of WaterTank objects • Invariant state constraints: currentVolume always remains between zero and capacity • Operations describing behaviors, such as addWater and removeWater • Connections to other objects • Preconditions and postconditions on the effects of operations • Protocols constraining when and how messages (operation requests) are processed (a Java sketch follows)
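
A minimal Java sketch of such a WaterTank; the field and method names follow the slide, while the constructor and the exception choice are assumptions:

    public class WaterTank {
        private final double capacity;        // attribute
        private double currentVolume = 0;     // invariant: 0 <= currentVolume <= capacity

        public WaterTank(double capacity) { this.capacity = capacity; }

        public void addWater(double amount) {
            if (amount < 0 || currentVolume + amount > capacity)   // precondition
                throw new IllegalStateException("would overflow");
            currentVolume += amount;                               // postcondition: volume grew by amount
        }

        public void removeWater(double amount) {
            if (amount < 0 || currentVolume - amount < 0)          // precondition
                throw new IllegalStateException("would underflow");
            currentVolume -= amount;
        }

        public double getCurrentVolume() { return currentVolume; }
    }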

  30. Object models framework • structure of an object: • class • internal attributes - state • connections to other objects • local internal methods • messaging methods - the interface; in Java, sending a message to an object is just calling one of the object's public methods

  31. Object models framework • Object identity: • objects can be constructed at any time (resources!) • by any object (access control!) • each object maintains a unique identity • Encapsulation: • separation between the inside and outside parts - the internal state can be modified only by the object itself (assuming internal members are private)

  32. Object models framework • Communication between objects • message passing • objects issue messages that trigger actions in other objects • simple procedural calls / communication protocols • Connections: • a message is sent by object identity / by communication-channel identities

  33. Object models framework • Objects can perform only the following operations: • Accept a message • Update their internal state • Send a message • Create a new object

  34. computation models: sequential mappings

  35. computation models: sequential mappings • a general-purpose computer can be exploited to pretend it is any object: • by loading the description of the corresponding .class into the VM • the VM constructs a passive representation of an instance • the VM interprets the associated operations • this extends to programs involving many objects of different classes • the VM is itself an object - it can pretend it is any other object

  36. On a sequential JVM it is impossible to simulate concurrent, interacting WaterTank objects • as long as message passing is sequential, there is no need for concurrent processing

  37. active objects (actor models) • every object is autonomous and as powerful as a sequential VM • objects may reside on different machines • message passing is performed via remote communication • an object-oriented view of OS-level processes: • a process is independent of, and shares as few resources as possible with, other processes

  38. active objects • different objects may reside on different machines • the location and administrative domain of an object are often important • message passing is arranged via remote communication (for example via sockets) • a number of protocols: one-way messaging, multicasts, procedure-style request-reply

  39. active objects • agents programming paradigm:

  40. active objects • The active object design pattern decouples method execution from method invocation • it introduces concurrency by using asynchronous method invocation and a scheduler for requests • the pattern consists of six elements: • a proxy - the interface towards clients, with publicly accessible methods • a method-request interface that defines the requests on the active object • a list of pending requests from clients • a scheduler, which decides which request to execute next • the implementation of the active object's methods • a callback or variable through which the client receives the result (a compact Java sketch follows)
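
A compact sketch of the pattern in Java; the ActiveCounter name and its increment method are illustrative. The proxy method enqueues a request, a dedicated scheduler thread executes it, and a Future plays the role of the callback/variable for the result:

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.CompletableFuture;
    import java.util.concurrent.Future;
    import java.util.concurrent.LinkedBlockingQueue;

    class ActiveCounter {
        private final BlockingQueue<Runnable> pending = new LinkedBlockingQueue<>(); // pending requests
        private int value = 0;                                                       // the object's state

        ActiveCounter() {
            Thread scheduler = new Thread(() -> {        // scheduler: executes requests one by one
                try { while (true) pending.take().run(); }
                catch (InterruptedException e) { /* shut down */ }
            });
            scheduler.setDaemon(true);
            scheduler.start();
        }

        // proxy method: invocation returns immediately, execution happens later
        Future<Integer> increment() {
            CompletableFuture<Integer> result = new CompletableFuture<>();
            pending.add(() -> result.complete(++value)); // the actual method implementation
            return result;
        }
    }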

  41. Mixed models - multithreading • between the two extremes of the passive and active models • a concurrent VM may be composed of multiple threads of execution • each thread acts in roughly the same way as a single sequential VM • all threads share access to the same set of passive representations (unlike pure active objects)

  42. Mixed models • can simulate the first two models, but not vice versa: • passive sequential models can be programmed using only one thread • active models can be programmed by creating as many threads as there are active objects, • while avoiding situations in which more than one thread can access a given passive representation

  43. Thread-based concurrent object oriented models • separate passive from active objects. • passive objects show thread-awareness (locks)

  44. Java Support for Threads • the JVM implements the mixed object model (until now, you have been using only passive objects) • Runnable interface - implemented by a class whose instances are intended to be executed by a thread • it defines a single method, run()

  45. class MessagePrinter • its run() method, when invoked, prints a message which the object received in its constructor; main creates and starts the threads (a reconstructed sketch follows)
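
The slide's code is not in the transcript; the following is a reconstructed sketch of what such a MessagePrinter typically looks like, assuming main starts two printer threads:

    public class MessagePrinter implements Runnable {
        private final String message;

        public MessagePrinter(String message) { this.message = message; }

        @Override
        public void run() {                   // executed by the thread that runs this Runnable
            System.out.println(message);
        }

        public static void main(String[] args) {
            new Thread(new MessagePrinter("hello")).start();
            new Thread(new MessagePrinter("world")).start();
        }
    }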
