Intertask Communication and Synchronization



Presentation Transcript


  1. Intertask Communication and Synchronization
  • In this context, the terms “task” and “process” are used interchangeably.

  2. Task Synchronization
  • Recall the previously defined mechanisms for task (thread) cooperation (block, suspend, resume, etc.)
  • Recall the distinctions between processes (tasks) and threads:
    • A contemporary process defines an address space in which multiple threads can execute and share common resources (code, data, etc.)
    • A traditional “heavyweight” process (or task) defines an address space and a single thread of execution
  • Because of the risks associated with shared data, we must have synchronization mechanisms for threads (or for traditional processes that share data)
  • For consistency with the terminology in the text, we will use the terms “task”, “process”, and “thread” interchangeably

  3. The Critical Section Problem
  • Code that a process executes in order to access and modify shared data is called a critical section.
  • Only one process at a time may be allowed to enter its critical section.
  • In other words, mutual exclusion must be enforced at the entry to a critical section.
  • The critical-section problem involves finding a protocol that allows processes to cooperate in the required manner.

  4. The Critical Section Problem (continued)
  • The requirements that must be met are:
    • Mutual exclusion
    • Progress
    • Bounded waiting

  5. Evolution of Solutions to the Critical Section Problem (and the Implementation of Mutual Exclusion)
  • Software-only implementations appeared first.
  • Several unsuccessful attempts preceded the first correct solution, which became known as Dekker’s Algorithm.
  • All software-only implementations require a busy wait.

  6. Evolution (continued)
  • Once a successful software implementation had been demonstrated, computer designers applied the assertion that hardware and software are logically equivalent: they implemented a new machine instruction called Test-and-Set (or an equivalent one called Swap).
  • Use of Test-and-Set still requires a busy wait.
  • The final and most elegant solution is the semaphore (developed by Dijkstra).
  • Use of the semaphore does not require a busy wait.

  7. Synchronization Hardware
  • The Test-and-Set instruction tests and modifies the contents of a word atomically:
    function Test-and-Set(var target: boolean): boolean;
    begin
        Test-and-Set := target;
        target := true;
    end;

  8. Semaphore
  • A semaphore may be viewed as an abstract data type (ADT) having both a scalar value and a queue of waiting tasks.
  • The basic operations (not including initializing the scalar value) are “Wait” and “Signal”.
  • For semaphore S, Wait(S) can be defined logically as:
    if S > 0 then
        S := S - 1
    else
        wait in Queue S
  • Signal(S):
    if any task currently waits in Queue S then
        awaken the first task in the queue
    else
        S := S + 1
  • Both of the above operations are atomic.

  9. Semaphore (continued)
  • May be used for enforcing mutual exclusion and for signaling among different tasks.
  • For enforcing mutual exclusion at the entry to a critical section:
    • Semaphore “mutex” has an initial value of 1.
    • Two tasks t1 and t2 accessing the same data each execute:
      wait(mutex)
      <critical section>
      signal(mutex)

  10. Semaphores (continued)
  • For signaling between two tasks t1 and t2, where t2 waits for a signal from t1:
    • Semaphore “sem” has an initial value of 0.
    • t1 executes:
      <generate data needed by t2>
      signal(sem)
    • t2 executes:
      wait(sem)
      <use data generated by t1>

  11. The Paradigm of Intertask (Process) Communication and Synchronization: The Producer-Consumer Problem
  • The producer task produces information that is consumed by a consumer task. A buffer is used to hold data between the two tasks.
  • The producer and consumer must be synchronized; that is, the producer must wait if it attempts to put data into a full buffer, whereas the consumer must wait if it attempts to extract data from an empty buffer.
  • This represents the basis for intertask communication and can take two forms:
    • Message passing by way of a separate “mailbox” or “message queue”
      • The operating system usually provides this structure and the corresponding functions SEND and RECEIVE.
      • A producer SENDs to the mailbox, while the consumer RECEIVEs from the mailbox.
    • Message passing by way of a shared-memory buffer
      • Usually implemented directly with semaphores.
      • Assumes a fixed buffer size.

  12. Message Passing by way of a Mailbox, or Message Queue
  • Convenient for the programmer, because the level of abstraction is higher (via SEND and RECEIVE).
  • Typically exhibits higher overhead, because data must be moved more often (sender process to mailbox, then mailbox to receiver process).

  13. Message Passing by way of a Shared-Memory Buffer
  • Lower level of abstraction, requiring the use of semaphores.
  • More effort for the programmer.
  • Greater risk of mistakes in the use of semaphores.
  • Better performance due to less movement of data.

  14. Message Passing by way of a Shared-Memory Buffer (continued)
  • Two possible design approaches:
    • Traditional bounded buffer, in which both sender and receiver can access the shared buffer as long as it is not completely full and not completely empty.
    • Sender and receiver processes separated by double buffers:
      • While one buffer is being filled by the sender process, the other buffer is being emptied by the receiver process.
      • Once one buffer is filled and the other emptied, the sender and receiver processes swap buffers and continue.
