
Chapter 4: Processes


Presentation Transcript


  1. Chapter 4: Processes • Process Concept • Process Scheduling • Operations on Processes • Interprocess Communication • Examples of IPC Systems • Communications in Client-Server Systems

  2. Objectives • To introduce the notion of a process -- a program in execution, which forms the basis of all computation • To describe the various features of processes, including scheduling, creation and termination, and communication • To explore interprocess communication using shared memory and message passing

  3. Process Concept • An operating system executes a variety of programs (multiple programs loaded in memory): • Operating system's code • User programs or tasks • The textbook uses the terms job and process almost interchangeably • Process – a program in execution; process execution must progress in a sequential fashion • A process includes: • program counter • CPU registers • Stack, containing temporary variables • Data section, containing global variables • A program is a passive entity stored on disk (an executable file); a process is an active entity • A program becomes a process when its executable file is loaded into memory • A C++ source file on disk is not a process • One program can be several processes • Consider multiple users executing the same program

  4. Process State • As a process executes, it changes state. The state of a process is defined in part by the current activity of that process. • new: The process is being created • running: Instructions are being executed • waiting: The process is waiting for some event to occur • ready: The process is waiting to be assigned to a processor • terminated: The process has finished execution

  5. Diagram of Process State

  6. Process in Memory • Multiple parts of a process in memory (see the sketch below): • The program code, also called the text section • Current activity, including the program counter and processor (CPU) registers • Stack, containing temporary data • Function parameters, return addresses, local variables • Data section, containing global variables • Heap, containing memory dynamically allocated during run time • Two techniques to load executable files into memory: • double-clicking on an icon representing the executable file • entering the name of the executable file on the command line
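As a small illustration (not part of the original slides), the C fragment below marks where each part of a typical process would live; the variable and function names are invented for the example.

    #include <stdio.h>
    #include <stdlib.h>

    int counter = 0;                    /* data section: a global variable                      */

    int square(int x) {                 /* the compiled function body sits in the text section  */
        int result = x * x;             /* stack: parameter x and local variable result         */
        return result;
    }

    int main(void) {
        int *samples = malloc(4 * sizeof(int));   /* heap: memory allocated during run time     */
        if (samples == NULL)
            return 1;
        samples[0] = square(counter + 3);
        printf("%d\n", samples[0]);
        free(samples);
        return 0;
    }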

  7. Process Control Block (PCB) • Information associated with each process (also called a task control block) is kept by the operating system: • Process state – running, waiting, etc. • Program counter – location of the next instruction to execute • CPU registers – contents of all process-centric registers • CPU-scheduling information – priorities, scheduling queue pointers • Memory-management information – memory allocated to the process, base and limit registers • Accounting information – CPU time used, clock time elapsed since start, time limits, etc. • I/O status information – I/O devices allocated to the process, list of open files • These details vary from process to process (a structural sketch follows)
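A minimal sketch of how a PCB might be declared as a C structure; the field names, array sizes, and types here are illustrative assumptions, not any real kernel's definition.

    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state;

    struct pcb {
        int          pid;               /* process identifier                          */
        proc_state   state;             /* current process state                       */
        void        *program_counter;   /* location of the next instruction to execute */
        long         registers[32];     /* saved CPU registers (count is illustrative) */
        int          priority;          /* CPU-scheduling information                  */
        void        *mem_base;          /* memory management: base register            */
        void        *mem_limit;         /* memory management: limit register           */
        long         cpu_time_used;     /* accounting information                      */
        int          open_files[16];    /* I/O status: descriptors of open files       */
        struct pcb  *next;              /* link to the next PCB in a scheduling queue  */
    };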

  8. CPU Switch From Process to Process

  9. Process Scheduling • Some systems allow execution of only a single process at a time; these are called uni-programming systems. • Others allow concurrent execution of many processes; these are called multi-programming (or multi-processing, but not multi-processor) systems. • The objective of multi-programming is to have some process running at all times, to maximize CPU utilization. • The objective of time sharing is to switch the CPU among processes so frequently that users can interact with each program while it is running. • In such a system, the CPU switches automatically among processes, running each for tens or hundreds of milliseconds.

  10. Process Scheduling Queues • Maximize CPU use, quickly switch processes onto CPU for time sharing • Process scheduler selects among available processes for next execution on CPU • Maintains scheduling queues of processes • Job queue – set of all processes in the system • Ready queue – set of all processes residing in main memory, ready and waiting to execute • Device queues – set of processes waiting for an I/O device. Each device has its own queue. • Processes migrate among the various queues • Processes that reside in main memory that are ready to execute are put on the ready queue. • If a process issues an I/O request, it is placed in the corresponding I/O queue. • When a process terminates, it is removed from all queues and its PCB is deleted.

  11. Ready Queue And Various I/O Device Queues • Queue header contains pointers to the first and final PCBs. • Each PCB includes a pointer field that points to the next PCB in the ready queue. • Each queue is generally stored as a linked list.
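As a hedged illustration of the linked-list representation described above, the sketch below stores a scheduling queue as a header with head/tail pointers over the struct pcb type sketched earlier; the function names are invented for the example.

    struct queue {
        struct pcb *head;    /* pointer to the first PCB in the queue */
        struct pcb *tail;    /* pointer to the final PCB in the queue */
    };

    /* Append a PCB to the tail of a scheduling queue. */
    void enqueue(struct queue *q, struct pcb *p) {
        p->next = NULL;
        if (q->tail == NULL) {           /* queue was empty */
            q->head = q->tail = p;
        } else {
            q->tail->next = p;
            q->tail = p;
        }
    }

    /* Remove and return the PCB at the head, or NULL if the queue is empty. */
    struct pcb *dequeue(struct queue *q) {
        struct pcb *p = q->head;
        if (p != NULL) {
            q->head = p->next;
            if (q->head == NULL)
                q->tail = NULL;
        }
        return p;
    }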

  12. Representation of Process Scheduling • A queueing diagram represents queues (the ready queue and a set of device queues), resources (CPU, I/O), and flows • While a process is in execution, one or several events could occur: • The process could issue an I/O request • The process could create a new child process • An interrupt could be delivered to the CPU

  13. Process Schedulers • The selection of processes from the queues is managed by schedulers. • A process migrates among the various scheduling queues throughout its lifetime. • Short-term scheduler (or CPU scheduler) – selects which ready process (already in memory) should be executed next and allocates the CPU • Sometimes the only scheduler in a system • Invoked frequently (milliseconds)  must be fast • Long-term scheduler (or job scheduler) – selects which processes from mass storage (i.e., the hard disk) should be brought into the ready queue (main memory) • Invoked less frequently (seconds, minutes)  may be slow; invoked when a process leaves the system • The long-term scheduler controls the degree of multiprogramming, i.e., the number of processes in memory • Processes can be described as either: • I/O-bound process – spends more time doing I/O than computations; many short CPU bursts • CPU-bound process – spends more time doing computations; few very long CPU bursts

  14. Process Schedulers (Cont.) • The long-term scheduler should pick a good mix of I/O-bound and CPU-bound processes so that system resources are better utilized. • If all processes are I/O-bound, the ready queue will almost always be empty. • If all processes are CPU-bound, the I/O queues will almost always be empty. • A balanced combination should be selected for system efficiency. • Swapping: remove a process from memory, store it on disk, and later bring it back from disk to continue execution.

  15. Context Switch • An interrupt causes the OS to take the CPU away from its current task and run a kernel routine for context switching. • When an interrupt occurs, the system must save the state (context) of the current process and load the saved state of the new process. This task is known as a context switch. • The context of a process is represented in its PCB • Context switching involves copying registers, local and global data, file buffers, and other information to the PCB. • Context-switch time is overhead; the system does no useful work while switching • The more complex the OS and the PCB  the longer the context switch • Time is dependent on hardware support • Some hardware provides multiple sets of registers per CPU  multiple contexts loaded at once

  16. Operations on Processes • System must provide mechanisms for: • process creation, • process termination, • and so on as detailed next

  17. Process Creation • A parent process creates children processes, which, in turn, create other processes, forming a tree of processes • Generally, a process is identified and managed via a process identifier (pid), which is used as an index to access process attributes • Resource sharing options • Parent and children share all resources • Children share a subset of the parent's resources • Parent and child share no resources • Execution options • Parent and children execute concurrently • Parent waits until children terminate • Address space • Child is a duplicate of the parent • Child has a program loaded into it • UNIX example • fork() system call creates a new process (see the sketch below)
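A minimal UNIX sketch of the fork() call mentioned on the slide, extended with the usual exec/wait pattern; the program run by the child (ls -l) is an arbitrary choice for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();                         /* create a child process                     */

        if (pid < 0) {                              /* fork failed                                */
            perror("fork");
            return 1;
        }
        if (pid == 0) {                             /* child: its address space is a duplicate of the parent's */
            execlp("ls", "ls", "-l", (char *)NULL); /* load a new program into the child          */
            perror("execlp");                       /* reached only if exec fails                 */
            exit(1);
        }
        wait(NULL);                                 /* parent: wait until the child terminates    */
        printf("Child %d has terminated\n", (int)pid);
        return 0;
    }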

  18. Process Termination • A process executes its last statement and then asks the operating system to delete it using the exit() system call • Status data is returned from the child to the parent (via wait(), as sketched below) • The process's resources are deallocated by the operating system • A parent may terminate the execution of its children using the abort() system call. Some reasons for doing so: • The child has exceeded its allocated resources • The task assigned to the child is no longer required • Some operating systems do not allow a child to exist if its parent has terminated • If no parent is waiting (has not yet invoked wait()), the process is a zombie • If the parent terminated without invoking wait(), the process is an orphan; the parent of orphan processes becomes the init process (a system process)
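As a small follow-up sketch (not from the slides), the parent below collects the child's exit status with waitpid(); the status value 7 is an arbitrary choice for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid < 0) {
            perror("fork");
            return 1;
        }
        if (pid == 0)
            exit(7);                       /* child terminates, passing status 7 back to the parent      */

        int status;
        waitpid(pid, &status, 0);          /* parent collects the status; the child's PCB can be freed   */
        if (WIFEXITED(status))
            printf("child exited with status %d\n", WEXITSTATUS(status));
        return 0;
    }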

  19. Cooperating Processes • Processes within a system may be independent or cooperating • A process that does not share any data with any other process is independent • An independent process cannot affect or be affected by the execution of another process (no data sharing between processes) • A cooperating process can affect or be affected by the execution of another process (there is data sharing between processes) • Advantages of process cooperation • Information sharing – e.g., a shared database • Computation speedup – divide a task into subtasks and execute them in parallel • Modularity – the system can be constructed in modular form (dividing system functions into separate processes or threads) • Convenience – different system tools may be used in parallel, e.g., printing, editing, and compiling

  20. Interprocess Communication (IPC) • Cooperating processes need interprocess communication (IPC) mechanisms • Mechanisms for processes to communicate (exchange data and information) and to synchronize their actions • Two models of IPC • Shared memory: a region of memory is established to share data between processes • Message passing: communication takes place by means of messages exchanged between the cooperating processes • Notes: • Shared memory can be faster than message passing • Message passing is useful for exchanging smaller amounts of data • Shared memory suffers from cache coherency issues

  21. Communications Models • (a) Message passing. (b) Shared memory.

  22. Interprocess Communication – Shared Memory • An area of memory is shared among the processes that wish to communicate • Processes can exchange information by writing and reading data in the shared area • The communication is under the control of the user processes, not the operating system; the form of the data and its location are determined by these processes • A major issue is to provide a mechanism that allows the user processes to synchronize their actions when they access shared memory • Synchronization is discussed in great detail later (a POSIX shared-memory sketch follows)
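As a hedged illustration of the shared-memory model (POSIX shared memory is one common way to set it up, though the slides do not prescribe a particular API), the sketch below creates and maps a shared region; the object name "/chap4_shm" and its size are arbitrary choices, and on Linux the program may need to be linked with -lrt.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <unistd.h>

    #define SHM_NAME "/chap4_shm"       /* arbitrary name chosen for the example */
    #define SHM_SIZE 4096

    int main(void) {
        /* create (or open) the shared-memory object and set its size */
        int fd = shm_open(SHM_NAME, O_CREAT | O_RDWR, 0600);
        if (fd == -1 || ftruncate(fd, SHM_SIZE) == -1) {
            perror("shm_open/ftruncate");
            return 1;
        }

        /* map the object into this process's address space */
        char *region = mmap(NULL, SHM_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
        if (region == MAP_FAILED) {
            perror("mmap");
            return 1;
        }

        /* any cooperating process that maps the same name sees these bytes */
        strcpy(region, "hello from the producer");

        munmap(region, SHM_SIZE);
        close(fd);
        /* shm_unlink(SHM_NAME) would remove the object once it is no longer needed */
        return 0;
    }

A second process would call shm_open and mmap with the same name (without O_CREAT) to read what was written; synchronizing those accesses is exactly the producer-consumer problem discussed next.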

  23. Example: Producer-Consumer Problem • A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process • For example, a web server produces (that is, provides) HTML files and images, which are consumed (that is, read) by the client web browser • A buffer is defined in a shared memory space • The producer places produced items into the buffer and the consumer takes items from the buffer • When the buffer is full, the producer must wait until the consumer consumes at least one item; likewise, when the buffer is empty, the consumer must wait until the producer places at least one item into the buffer (the processes must be synchronized) • unbounded buffer – places no practical limit on the size of the buffer (the producer can always produce new items) • bounded buffer – assumes a fixed buffer size (the consumer must wait if the buffer is empty and the producer must wait if the buffer is full)

  24. Bounded-Buffer – Shared-Memory Solution (diagram: producer process and consumer process communicating through a shared memory buffer)

  25. Producer-Consumer Problem • Shared variables (the buffer), implemented as a circular array with two logical pointers:

    const int N = ?;     // buffer size
    int buffer[N];       // holds the items kept in the buffer
    int in  = 0;         // points to the next free position in the buffer
    int out = 0;         // points to the first full position in the buffer

• Buffer is empty when: in == out • Buffer is full when: (in + 1) % N == out (diagram: circular buffer with in and out pointers)

  26. Producer – Consumer Processes

Producer code:

    int nextp;                       // local variable holding the item produced
    do {
        ....
        produce an item in nextp;
        ....
        while ((in + 1) % N == out)
            ;                        // buffer full, do nothing, wait
        buffer[in] = nextp;
        in = (in + 1) % N;
    } while (true);

Consumer code:

    int nextc;                       // local variable holding the item consumed
    do {
        while (in == out)
            ;                        // buffer empty, do nothing
        nextc = buffer[out];
        out = (out + 1) % N;
        ....
        consume the item in nextc;
        ....
    } while (true);

REMARK: At most (N-1) items can be available in the buffer

  27. Producer – Consumer Processes (alternative solution)

Shared variables:

    const int N = ?;
    int full[N] = {0};       // the buffer is initially empty
    int buffer[N], in = 0, out = 0;

Producer code:

    int nextp;
    do {
        ....
        produce an item in nextp;
        ....
        while (full[in])
            ;                        // slot still full, do nothing
        buffer[in] = nextp;
        full[in] = 1;
        in = (in + 1) % N;
    } while (true);

Consumer code:

    int nextc;
    do {
        while (!full[out])
            ;                        // slot still empty, do nothing
        nextc = buffer[out];
        full[out] = 0;
        out = (out + 1) % N;
        ....
        consume the item in nextc;
        ....
    } while (true);

REMARK: At most N items can be available in the buffer

  28. Interprocess Communication – Message Passing • Message passing systems – processes communicate with each other without sharing the same address space • Particularly useful in a distributed environment • The IPC facility provides two operations: • send(message) • receive(message) • The message size is either fixed or variable • If processes P and Q wish to communicate, they need to: • Establish a communication link between them (a physical or logical implementation of the link) • Exchange messages via send/receive

  29. Message Passing (Cont.) • Implementation issues: • How are links established? • Can a link be associated with more than two processes? • How many links can there be between every pair of communicating processes? • What is the capacity of a link? • Is the size of a message that the link can accommodate fixed or variable? • Is a link unidirectional or bi-directional?

  30. Message Passing (Cont.) • Implementation of communication link • Physical: • Shared memory • Hardware bus • Network • Logical: • Direct or indirect • Synchronous or asynchronous • Automatic or explicit buffering Here, logical link implementation is considered.

  31. Direct Communication • In direct communication, processes must name each other explicitly: • send(P, message) – send a message to process P • receive(Q, message) – receive a message from process Q • Properties of the communication link • Links are established automatically between every pair of processes that want to communicate • A link is associated with exactly one pair of communicating processes • Between each pair there exists exactly one link • The link may be unidirectional, but is usually bi-directional

  32. Direct Communication – Producer-Consumer problem using message passing

Producer code:

    int nextp;
    do {
        ....
        produce an item in nextp;
        ....
        send(consumer, nextp);
    } while (true);

Consumer code:

    int nextc;
    do {
        receive(producer, nextc);
        ....
        consume the item in nextc;
        ....
    } while (true);

• The main disadvantage of direct communication is limited modularity: the names of the producer (sender) and consumer (receiver) must be set correctly every place they are used.

  33. Indirect Communication • Messages are sent to and received from mailboxes (also referred to as ports). A mailbox is an object into which messages can be placed and from which they can be removed by processes. • Each mailbox has a unique id (identification) • Processes can communicate only if they share a mailbox • Properties of the communication link • A link is established only if processes share a common mailbox • A link may be associated with many processes • Each pair of processes may share several communication links • A link may be unidirectional or bi-directional

  34. Indirect Communication • Operations of processes • create a new mailbox (port) • send and receive messages through mailbox • delete (remove) a mailbox • Primitives are defined as: send(A, message) – send a message to mailbox A receive(A, message) – receive a message from mailbox A
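As a hedged, concrete illustration of these mailbox primitives (not part of the original slides), POSIX message queues follow the same pattern: mq_open creates the mailbox, mq_send/mq_receive exchange messages through it, and mq_unlink removes it. The queue name "/chap4_queue" and the message sizes are arbitrary, and on Linux the program may need to be linked with -lrt.

    #include <fcntl.h>
    #include <mqueue.h>
    #include <stdio.h>
    #include <string.h>

    #define QUEUE_NAME "/chap4_queue"    /* arbitrary mailbox name chosen for the example */

    int main(void) {
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 128 };

        /* create a new mailbox (port) */
        mqd_t mq = mq_open(QUEUE_NAME, O_CREAT | O_RDWR, 0600, &attr);
        if (mq == (mqd_t)-1) {
            perror("mq_open");
            return 1;
        }

        /* send(A, message): place a message in mailbox A */
        const char *msg = "hello";
        mq_send(mq, msg, strlen(msg) + 1, 0);

        /* receive(A, message): another (or the same) process removes it from mailbox A */
        char buf[128];
        if (mq_receive(mq, buf, sizeof(buf), NULL) > 0)
            printf("received: %s\n", buf);

        mq_close(mq);
        mq_unlink(QUEUE_NAME);           /* delete (remove) the mailbox */
        return 0;
    }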

  35. Indirect Communication • Mailbox sharing • P1, P2, and P3 share mailbox A • P1 sends; P2 and P3 receive • Who gets the message? • Solutions • Allow a link to be associated with at most two processes • Allow only one process at a time to execute a receive operation • Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

  36. Synchronization • Message passing may be either blocking or non-blocking • Blocking is considered synchronous • Blocking send – the sender is blocked until the message is received • Blocking receive – the receiver is blocked until a message is available • Non-blocking is considered asynchronous • Non-blocking send – the sender sends the message and continues • Non-blocking receive – the receiver receives either a valid message or a null message • Different combinations are possible • If both send and receive are blocking, the synchronization is called a rendezvous

  37. Buffering • Messages exchanged via a link reside in a temporary queue attached to the link, implemented in one of three ways: • Zero capacity – no messages are queued on the link (0 messages); the sender must wait for the receiver (rendezvous) • Bounded capacity – finite length of n messages; the sender must wait if the link is full • Unbounded capacity – infinite length; the sender never waits • The zero-capacity case is a message system with no buffering; the other cases provide automatic buffering.

  38. Acknowledgement • After receiving a message, the receiver sends an acknowledgement message to inform the sender of the receipt. • Process P sends a message to process Q and waits for the acknowledgement:

    P: send(Q, message);            // P sends the message to Q
    Q: receive(P, message);         // Q receives it
    Q: send(P, "Acknowledgement");  // Q acknowledges the receipt
    P: receive(Q, message);         // P receives the acknowledgement

  39. Exception Conditions • A message system is useful in distributed environments, since the probability of error during communication is larger than in a single-machine environment. • In a single-machine environment, a shared-memory system is generally used. • If a communication error occurs, error recovery or exception-condition handling (see the next slides) must take place. • Possible errors are: • A process terminates • Lost messages • Scrambled messages

  40. Exception Conditions – Terminated Process • A sender or receiver process may terminate before a message is processed. • This situation leaves messages that will never be received, or processes waiting for messages that will never be sent. • A receiver process P that waits for a message from a terminated process Q will be blocked forever. • Likewise, if P sends a message to a terminated process Q and requires an acknowledgement (ACK) from Q for further processing, P will be blocked forever. • In these cases, the operating system should either terminate P or notify P that Q has terminated. (diagrams: P waiting for a message from terminated Q; P waiting for an ACK from terminated Q)

  41. Exception Conditions – Lost Messages • P sends a message to Q, but the message is lost somewhere in the link due to hardware or communication-line failure. • In such cases, either: • The OS is responsible for detecting this and responding by notifying the sending process that the message was lost, or • The sending process is responsible for detecting it and should re-transmit the message. • The most common method for detecting lost messages is to use time-outs. • For instance, if the acknowledgement does not arrive at the sending process within a specified time interval, the message is assumed lost and is resent.

  42. Exception Conditions – Scrambled Messages • Because of noise in communication channels, the delivered message may be scrambled (modified) on the way. Error-checking codes are commonly used to detect this type of error.

  43. Communications in Client-Server Systems • Sockets • Remote Procedure Calls • Pipes
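These mechanisms are only listed here, not developed in this transcript. As a small hedged illustration of the last one, an ordinary (anonymous) UNIX pipe lets a parent and child process exchange bytes in a client-server-like fashion:

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];                          /* fd[0]: read end, fd[1]: write end */
        if (pipe(fd) == -1) {
            perror("pipe");
            return 1;
        }

        if (fork() == 0) {                  /* child plays the "server" role in this sketch */
            close(fd[1]);                   /* close the unused write end */
            char buf[64];
            ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
            if (n > 0) {
                buf[n] = '\0';
                printf("child received: %s\n", buf);
            }
            close(fd[0]);
            return 0;
        }

        close(fd[0]);                       /* parent writes a request into the pipe */
        const char *msg = "greetings";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);
        return 0;
    }

Sockets and remote procedure calls generalize the same send/receive pattern so that the communicating processes may reside on different machines.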
