
Chapter 7 - Interprocess Communication Patterns





  1. Chapter 7 - Interprocess Communication Patterns
  • Why study IPC?
  • Not a typical programming style, but a growing one.
  • Typical for operating system internals, though.
  • Typical for many network-based services.
  • We will take a process-level view of IPC (Figure 7.1).

  2. Recall three methods used for IPC
  • Message passing
  • File I/O (via pipes)
  • Shared memory
  • Processes either compete (e.g., for CPU cycles, printers, etc.) or cooperate (e.g., in a pipeline) as they run.
  • Competing processes can lead to incorrect data.
  • Example: multiple simultaneous edits on the same file by different users
  • Each user loads up a copy of the file
  • The last user to write() “wins” the competition

  3. Figure 7.2 and the table on page 234 demonstrate how the ordering of the read() and write() events between the users can potentially cause the loss of one user’s editing session.
  • This is called the mutual exclusion problem - either one of the two can edit, but not both at the same time (think of the exclusive-OR Boolean operator).
  • Critical section - a sequence of actions that must be done one at a time.
  • Serially reusable resource - a resource that can only be used by one process at a time.

  4. Race condition - when the order of process completion makes a difference in the outcome (as in the two-editor problem; it’s a race to see who comes in last!).
  • The potential for a race condition exists whenever two or more processes are allowed to run in parallel; fully serializing their execution would prevent race conditions, but is infeasible.
  • Figure 7.4 demonstrates a race condition at the “atomic” instruction level -- two processes incrementing a shared memory counter.
  • Race conditions can be prevented by the use of critical sections.
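The shared-counter race above can be forced deterministically. This is an illustrative sketch (not SOS or Figure 7.4 code): two Python threads perform the classic read-modify-write on a shared counter, with events arranged so both threads read the old value before either writes back, so one increment is lost.

```python
import threading

counter = 0
read_done = [threading.Event(), threading.Event()]

def increment(i):
    global counter
    local = counter            # read the shared counter
    read_done[i].set()         # announce that this thread has read
    read_done[1 - i].wait()    # wait until the other thread has also read
    counter = local + 1        # write back: the second writer clobbers the first

t0 = threading.Thread(target=increment, args=(0,))
t1 = threading.Thread(target=increment, args=(1,))
t0.start(); t1.start()
t0.join(); t1.join()

print(counter)  # 1, not 2 -- one increment was lost
```

Real races are nondeterministic; the events here just pin down the "bad" interleaving so the lost update happens every run.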

  5. Figure 7.5 -- create a critical section for each process.
  • The critical section represents a code segment that must be serialized (that is, only one process at a time is allowed to be in the critical section). It’s better to serialize just one segment than the execution of the entire process!
  • How to enforce a critical section?
  • One solution uses the hardware ExchangeWord() mechanism introduced in Chapter 6, which we aren’t going to consider now.
  • A more likely solution is to use the operating system as the serialization enforcer.

  6. So, we will look at different ways to solve the mutual exclusion problem.
  • First technique: use SOS messages! First, though, let’s create two new system calls that allow us to name the queues, rather than relying on the parent passing qIDs to children.
  • int AttachMessageQueue(char *msg_q_name) - look up the message queue name in the file name space; create it if it doesn’t exist and set its attach count to 1 (one process has it attached). If it already exists, increment the attach count (one more process has it attached). Return the qID for later SendMessage()/ReceiveMessage() calls (or -1 on failure).

  7. int DetachMessageQueue(int msg_q_id) - decrement the attach count by one; if the attach count reaches zero, delete the message queue file from the file system. Return -1 if msg_q_id is invalid.
  • Extend Exit() so that any attached queues are automatically detached when the process ends, to keep things straight (just like the auto-closing of any opened files).
  • We will surround the critical section of code with a ReceiveMessage() call before it, which blocks the calling process until a message is actually in the queue, and a SendMessage() call after it to signal the other waiting process.
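The attach-count bookkeeping these two calls describe can be sketched in a few lines. This is plain Python mirroring the SOS call names, not kernel code; the dictionary-based registry is an assumption standing in for the file name space.

```python
queues = {}   # queue name -> {"id": qID, "attach_count": processes attached}
ids = {}      # qID -> queue name
next_id = 0

def AttachMessageQueue(name):
    """Create the named queue if needed, bump its attach count, return its qID."""
    global next_id
    if name not in queues:
        queues[name] = {"id": next_id, "attach_count": 0}
        ids[next_id] = name
        next_id += 1
    queues[name]["attach_count"] += 1
    return queues[name]["id"]

def DetachMessageQueue(qid):
    """Drop the attach count; delete the queue when the last process detaches."""
    name = ids.get(qid)
    if name is None:
        return -1                  # invalid qID
    queues[name]["attach_count"] -= 1
    if queues[name]["attach_count"] == 0:
        del queues[name]           # last detach deletes the queue
        del ids[qid]
    return 0
```

Two processes attaching the same name get the same qID back; the queue disappears only after both detach, which is exactly why Exit() must auto-detach.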

  8. Two-process mutual exclusion using a message queue (page 239 & Figure 7.6)
  • Note that the message content is not used.
  • One process has to start the ball rolling (“seed” the queue with one message so the first ReceiveMessage() doesn’t block).
  • The messages are like a ticket or token that permits the ticket holder to access the file. Notice that the sender and receiver can be the same process, depending on which process gets scheduled by the O/S.
  • This algorithm works for more than 2 processes, due to the serial nature of the message queue (FIFO).
  • This algorithm also nicely avoids busy waiting, since the waiting processes are blocked.
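The token pattern above can be sketched with Python's queue.Queue standing in for the SOS message queue and threads standing in for processes (an assumption for the sake of a runnable example; the Figure 7.6 version uses real processes).

```python
import queue
import threading

mutex_q = queue.Queue()
mutex_q.put("token")     # seed: one message so the first receive doesn't block

log = []                 # records every entry/exit of the critical section

def worker(name, rounds):
    for _ in range(rounds):
        mutex_q.get()                 # ReceiveMessage(): block until we hold the token
        log.append((name, "enter"))   # --- critical section ---
        log.append((name, "exit"))    # --- end critical section ---
        mutex_q.put("token")          # SendMessage(): pass the token on

threads = [threading.Thread(target=worker, args=(n, 3)) for n in ("A", "B")]
for t in threads: t.start()
for t in threads: t.join()
```

Because the token is out of the queue while a worker is inside, every "enter" is immediately followed by the same worker's "exit": no interleaving inside the critical section, and the blocked get() means no busy waiting.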

  9. IPC as a signal (Figure 7.7)
  • We want a way to synchronize code between two processes.
  • Use ReceiveMessage() to wait for the signal and SendMessage() to send the signal.
  • An example: the Wait() call of a parent will block until a “signal” is received from the Exit() of a child.
  • Rendezvous
  • We want a way to know that two processes are synchronized, so they can begin a common task at roughly the same time.
  • Use a two-way symmetric signaling method with SendMessage()/ReceiveMessage() pairs (Figure 7.8)
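The rendezvous can be sketched the same way: two queues, and each side sends on its outgoing queue before receiving on its incoming one. Threads and queue.Queue again stand in for processes and SOS queues; the deliberate sleep is just to make one side "late".

```python
import queue
import threading
import time

a_to_b = queue.Queue()
b_to_a = queue.Queue()
order = []   # observable event order

def proc_a():
    order.append("A arrived")
    a_to_b.put("ready")   # SendMessage(): tell B we're here
    b_to_a.get()          # ReceiveMessage(): wait for B
    order.append("A past rendezvous")

def proc_b():
    time.sleep(0.1)       # B arrives late; A must wait for it
    order.append("B arrived")
    b_to_a.put("ready")
    a_to_b.get()
    order.append("B past rendezvous")

ta = threading.Thread(target=proc_a)
tb = threading.Thread(target=proc_b)
ta.start(); tb.start()
ta.join(); tb.join()
```

Send-before-receive is what makes this symmetric and deadlock-free: if both sides received first, each would block forever waiting for the other's signal.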

  10. Producer-Consumer IPC pattern
  • The most basic pattern of IPC; combines signaling with data exchange.
  • Easily visualized as a pipeline (Figure 7.10): xlsfonts | grep adobe
  • Note that the producer can “get ahead” of the consumer by generating more output than the consumer can consume (or vice versa); the O/S must perform buffering.
  • The O/S does not have access to infinite disk and/or memory, so buffering is limited; producer-consumer code should be written with this in mind. Limited buffering also helps keep “slack on the line” if process scheduling is “bursty”.

  11. The Producer-Consumer pattern with limited buffering (pages 248-249)
  • Makes use of two message queues: one for the data being produced/consumed and another to throttle the producer.
  • This is a symmetric signaling example.
  • The consumer “fills” the throttle queue with (in this case) 20 messages -- in other words, the producer has 20 signals already queued up and can send up to 20 messages before it must receive another “throttle up” signal.
  • Note that if the buffer limit is set to one, this algorithm is equivalent to the rendezvous (except that we continuously rendezvous while producing/consuming).
  • The Producer-Consumer model is versatile; many variations exist (Figures 7.12 & 7.13)
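The two-queue throttling scheme can be sketched as follows, again with Python threads and queues standing in for the SOS version. The buffer limit of 5 and the None end-of-stream marker are assumptions for the example (the text uses 20 and doesn't specify termination).

```python
import queue
import threading

N = 5                          # buffer limit (the text uses 20)
data_q = queue.Queue()         # carries the produced items
credit_q = queue.Queue()       # throttles the producer
for _ in range(N):
    credit_q.put("credit")     # consumer pre-loads N "throttle up" signals

received = []

def producer():
    for i in range(20):
        credit_q.get()         # wait for permission: never more than N ahead
        data_q.put(i)
    data_q.put(None)           # end-of-stream marker (an assumption, not in the text)

def consumer():
    while True:
        item = data_q.get()
        if item is None:
            break
        received.append(item)
        credit_q.put("credit") # return the credit: "throttle up" the producer

tp = threading.Thread(target=producer)
tc = threading.Thread(target=consumer)
tp.start(); tc.start()
tp.join(); tc.join()
```

At every moment the number of items in data_q plus credits in credit_q is at most N, which is exactly the bounded-buffer guarantee; with N = 1 each put/get pair degenerates into the rendezvous.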

  12. Client-Server IPC pattern
  • Many resources lend themselves well to having a single centralized server that responds to requests from multiple clients. Examples abound: print server, file server, web server, telnet server, email server, etc.
  • The server will wait on a “public” message queue, servicing requests as they arrive.
  • Example: “Squaring” server (page 251) -- yes, this is the same rendezvous/producer-consumer code that pairs up SendMessage()/ReceiveMessage() calls. The difference is in the client vs. server relationship.
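The squaring server's structure can be sketched like this. The message layout (a value plus a private reply queue) and the None shutdown message are assumptions for the example; the page 251 version uses SOS queues and qIDs rather than Python objects.

```python
import queue
import threading

request_q = queue.Queue()      # the server's "public" queue

def server():
    while True:
        msg = request_q.get()  # ReceiveMessage(): wait for the next request
        if msg is None:
            break              # shutdown message (an assumption, for the demo)
        value, reply_q = msg
        reply_q.put(value * value)   # SendMessage(): reply to this client only

t = threading.Thread(target=server)
t.start()

def square(x):
    """A client: send a request with a private reply queue, block for the answer."""
    reply_q = queue.Queue()
    request_q.put((x, reply_q))
    return reply_q.get()

results = [square(x) for x in (2, 7, 12)]
request_q.put(None)            # shut the server down
t.join()
print(results)                 # [4, 49, 144]
```

The private reply queue is what distinguishes this from plain producer-consumer: many clients share one request queue, but each answer goes back to exactly the client that asked.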

  13. Client-Server IPC pattern (continued)
  • Another difference: servers are usually always running, waiting for requests; clients “get in and get out”.
  • Servers usually provide multiple services and are written to handle them; Figure 7.14 is an example of a file server’s algorithm.
  • Multiple Servers/Clients & Database Access/Update: individually review sections 7.11 & 7.12.
  • One interesting tidbit: the readers & writers problem.
  • The database can have parallel readers, but only one writer accessing it at a time.
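The readers & writers rule can be sketched with the classic "first reader locks out writers" scheme. This is an illustrative Python lock, not the book's database-server code; the class and method names are made up for the example.

```python
import threading

class ReadersWriterLock:
    """Any number of readers together, or exactly one writer."""

    def __init__(self):
        self.readers = 0
        self.reader_lock = threading.Lock()   # protects the reader count
        self.writer_lock = threading.Lock()   # held while anyone is inside

    def acquire_read(self):
        with self.reader_lock:
            self.readers += 1
            if self.readers == 1:
                self.writer_lock.acquire()    # first reader locks out writers

    def release_read(self):
        with self.reader_lock:
            self.readers -= 1
            if self.readers == 0:
                self.writer_lock.release()    # last reader lets writers in

    def acquire_write(self):
        self.writer_lock.acquire()            # exclusive: no readers, no other writer

    def release_write(self):
        self.writer_lock.release()
```

Note this variant favors readers: a steady stream of readers can starve a writer, one of the trade-offs the readers & writers problem is famous for.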

  14. Nice summary of the IPC patterns on pages 262-265:
  • Mutual Exclusion (mutex)
  • Signaling
  • Rendezvous
  • Producer-Consumer
  • Client-Server
  • Multiple Servers & Clients
  • Database Access & Update
