
Chapter 4: Processes



  1. Chapter 4: Processes
  • Process Concept (§ 4.1)
  • Process Scheduling (§ 4.2)
  • Operations on Processes (§ 4.3)
  • Cooperating Processes (§ 4.4)
  • Interprocess Communication (§ 4.5)
  • Communication in Client-Server Systems (§ 4.6)

  2. Sec 4.1: Process Concept
  • An operating system executes a variety of programs:
  • Batch system – jobs
  • Time-shared systems – user programs or tasks
  • The textbook uses the terms job and process almost interchangeably
  • Process – a program in execution; a process (an active entity) is more than the program code (a passive entity, known as the text section). It also includes the current activity.
  • A process includes:
  • the value of the program counter
  • the contents of the processor's registers
  • the text section
  • the stack
  • the data section (global variables) and possibly a heap

  3. Process State
  • As a process executes, it changes state (defined in part by the current activity of the process)
  • Each process may be in one of the following states:
  • New: the process is being created
  • Running: instructions are being executed
  • Waiting: the process is waiting for some event to occur
  • Ready: the process is waiting to be assigned to a processor
  • Terminated: the process has finished execution
  • Only one process can be running on a processor at any instant; many processes may be ready and waiting.

  4. Diagram of Process State

  5. Process Control Block (PCB)
  Each process is represented in the OS by a process control block (also called a task control block). Information associated with each process:
  • Process ID
  • Process state (new, ready, running, waiting, halted, …)
  • Program counter
  • CPU registers (accumulators, index registers, stack pointers, …)
  • CPU scheduling information (process priority, pointer to scheduling queues, …)
  • Memory-management information (base reg., limit reg., segment tables, page tables, …)
  • Accounting information (CPU & real time used, time limits, …)
  • I/O status information (I/O devices allocated, files opened, …)
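  The PCB is just a record of these fields. A minimal Java sketch (the class and field names here are illustrative, not any real kernel's):

  enum ProcessState { NEW, READY, RUNNING, WAITING, TERMINATED }

  class PCB {
      int pid;                          // process ID
      ProcessState state;               // process state (slide 3)
      long programCounter;              // address of the next instruction to execute
      long[] registers;                 // saved contents of the CPU registers
      int priority;                     // CPU-scheduling information
      long baseRegister, limitRegister; // memory-management information
      long cpuTimeUsed;                 // accounting information
      java.util.List<String> openFiles; // I/O status information
  }

  On a context switch (slide 13) the kernel saves the running process's registers and program counter into its PCB, and reloads them from the PCB of the process being dispatched.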

  6. Process Control Block (PCB)

  7. Sec 4.2: Process Scheduling
  Multiprogramming and time sharing require a process scheduler to select an available process for execution on the CPU. On a uniprocessor system there is only one running process; the others must wait in a queue until the CPU is free and they can be scheduled.
  Process Scheduling Queues
  • Job queue – the list of all processes in the system
  • Ready queue – the list of all processes residing in main memory, ready and waiting to execute
  • Device queues – the lists of processes waiting for an I/O device, one queue per device
  • Processes migrate between the various queues
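  A hypothetical sketch of these queues, reusing the PCB class sketched above: each queue is simply a list of PCBs, and a PCB migrates from list to list as its process changes state.

  import java.util.ArrayDeque;
  import java.util.Deque;
  import java.util.HashMap;
  import java.util.Map;

  class SchedulingQueues {
      Deque<PCB> jobQueue = new ArrayDeque<>();               // all processes in the system
      Deque<PCB> readyQueue = new ArrayDeque<>();             // in memory, ready to execute
      Map<String, Deque<PCB>> deviceQueues = new HashMap<>(); // one queue per I/O device

      // dispatch: pick the next ready process and mark it running
      PCB dispatch() {
          PCB next = readyQueue.pollFirst();
          if (next != null) next.state = ProcessState.RUNNING;
          return next;
      }
  }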

  8. Ready Queue and Various I/O Device Queues

  9. Representation of Process Scheduling: a Queuing Diagram
  [Queuing diagram: a process is dispatched from the ready queue to the CPU; after waiting for I/O completion, waiting for a child to terminate, or waiting for an event to occur, it is put back in the ready queue. There may be multiple queues for processes of different priorities.]

  10. Schedulers
  • Long-term scheduler (or job scheduler) – selects which processes should be loaded into memory and brought into the ready queue; executes infrequently
  • Short-term scheduler (or CPU scheduler) – selects, from those that are ready to execute, which process should be executed next, and allocates the CPU; executes frequently
  Some systems do not have a long-term scheduler (UNIX and Windows, for example). Every process is put in memory for the short-term scheduler.

  11. Schedulers (Cont.)
  • Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast
  • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow
  • The long-term scheduler controls the degree of multiprogramming
  • Processes can be described as either:
  • I/O-bound process – spends more time doing I/O than computations; many short CPU bursts
  • CPU-bound process – spends more time doing computations; few very long CPU bursts

  12. Addition of Medium-Term Scheduling
  Key idea: it is sometimes advantageous to remove a process from memory and thus reduce the degree of multiprogramming (the process is swapped out). Later, the process can be reintroduced into memory and its execution continued where it left off (swapped in). This scheme is called swapping.

  13. Context Switch
  The context of a process is represented by the PCB of the process (the process descriptor).
  • When the CPU switches to another process, the system must save the state of the old process and load the saved state of the new process
  • Context-switch time is overhead; the system does no useful work while switching
  • The time depends on hardware support: some processors have multiple sets of registers, each of which can hold the state of one process. In that case, if there are enough sets, a context switch is just a change of the pointer to the current register set.

  14. CPU Switch From Process to Process

  15. Process Creation
  • A parent process creates child processes, which in turn create other processes, forming a tree of processes
  • Alternatives for resource sharing:
  • Parent and children share all resources
  • Children share a subset of the parent's resources
  • Parent and child share no resources
  • Two possibilities in terms of execution:
  • Parent and children execute concurrently
  • Parent waits until children terminate

  16. Process Creation (Cont.)
  • Two possibilities in terms of address space:
  • Child is a duplicate of the parent
  • Child has a program loaded into it
  • UNIX examples:
  • fork system call creates a new process; child and parent share all open files, but do not share memory space
  • exec system call used after a fork to replace the process's memory space with a new program

  17. C Program Forking a Separate Process

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

int main(int argc, char *argv[])
{
    pid_t pid;

    /* fork another process: the child gets a copy of the parent's address
       space, and both processes continue execution after the fork.
       fork() returns 0 to the child and the child's PID (> 0) to the parent. */
    pid = fork();
    if (pid < 0) { /* error occurred */
        fprintf(stderr, "Fork Failed");
        exit(-1);
    }
    else if (pid == 0) { /* child process */
        /* execlp() loads a binary file into memory: the child overlays its
           address space with the UNIX command file /bin/ls. "ls" names the
           program, and NULL terminates the list of arguments. */
        execlp("/bin/ls", "ls", NULL);
    }
    else { /* parent process: wait for the child to complete */
        wait(NULL);
        printf("Child Complete");
        exit(0);
    }
}

  18. Process Tree on a UNIX System

  19. Process Termination
  • 1. The process executes its last statement and asks the operating system to delete it (exit())
  • Output data (a status value) is returned from the child to the parent (via wait())
  • The process's resources are deallocated by the operating system (physical/virtual memory, open files, I/O buffers)
  • 2. A parent may terminate the execution of its child processes (abort()) when:
  • The child has exceeded its allocated resources
  • The task assigned to the child is no longer required
  • The parent is exiting: some operating systems do not allow a child to continue if its parent terminates; terminating all children this way is called cascading termination
  • The parent needs to know the IDs of its children; this is why the ID of a newly created process is passed to the parent
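  Java has no fork(), but its standard ProcessBuilder API illustrates the same create/wait/terminate life cycle from slides 15-19. A minimal sketch (assuming /bin/ls exists, as in the C example on slide 17):

  import java.io.IOException;

  public class LaunchChild {
      public static void main(String[] args) throws IOException, InterruptedException {
          // create a child process running /bin/ls, sharing our standard output
          Process child = new ProcessBuilder("/bin/ls").inheritIO().start();

          // the parent waits for the child to terminate, like wait() in UNIX;
          // the child's status value is returned to the parent
          int status = child.waitFor();
          System.out.println("Child Complete, exit status " + status);
      }
  }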

  20. Sec 4.4: Cooperating Processes
  • Independent process: cannot affect or be affected by the execution of another process
  • Cooperating process: can affect or be affected by the execution of another process
  • Advantages of process cooperation:
  • Information sharing (e.g., several users may be interested in the same piece of information)
  • Computation speed-up (a task is split into subtasks executed in parallel)
  • Modularity (system functions divided into separate processes or threads)
  • Convenience (editing, printing, compiling at one time)

  21. Producer-Consumer Problem
  • A common paradigm for cooperating processes: a producer process produces information (e.g., a file server) that is consumed by a consumer process (e.g., a client requesting the file). Another example is a pipeline: a compiler produces assembly code consumed by an assembler, which produces object modules consumed by the loader.
  • unbounded buffer: places no practical limit on the size of the buffer. The consumer may have to wait for new items, but the producer can always produce new items.
  • bounded buffer: assumes a fixed buffer size. The consumer must wait if the buffer is empty; the producer must wait if the buffer is full.
  • The buffer may be provided by the OS through the use of an IPC facility, or explicitly coded by the application programmer with the use of shared memory (as in the following slides).

  22. Bounded-Buffer – Shared-Memory Solution

public interface Buffer {
    // producers call this method
    public abstract void insert(Object item);

    // consumers call this method
    public abstract Object remove();
}

  23. Bounded-Buffer – Shared-Memory Solution

import java.util.*;

public class BoundedBuffer implements Buffer {
    private static final int BUFFER_SIZE = 8;
    private int count;       // number of items in the buffer
    private int in;          // points to the next free position
    private int out;         // points to the next filled position
    private Object[] buffer; // buffer shared by both producer & consumer

    public BoundedBuffer() { // buffer is initially empty
        count = 0;
        in = 0;
        out = 0;
        buffer = new Object[BUFFER_SIZE];
    }

    // producers call this method
    public void insert(Object item) {
        // see slide 24
    }

    // consumers call this method
    public Object remove() {
        // see slide 25
    }
}

[Diagram: circular array with count = 4 filled slots; out marks the first filled slot, in the next free slot.]

  24. Bounded-Buffer – insert() Method

public void insert(Object item) {
    while (count == BUFFER_SIZE)
        ; // do nothing -- no free buffers; blocks here while the buffer is full

    // add an item to the buffer
    ++count;
    buffer[in] = item;
    in = (in + 1) % BUFFER_SIZE; // buffer is implemented as a circular array
}

  25. Bounded-Buffer – remove() Method

public Object remove() {
    Object item;

    while (count == 0)
        ; // do nothing -- nothing to consume; blocks here while the buffer is empty

    // remove an item from the buffer
    --count;
    item = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    return item;
}
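  The slides define the buffer but never drive it. Below is a minimal, hypothetical driver: one producer thread and one consumer thread sharing a BoundedBuffer. Note that ++count and --count are not atomic, so this unsynchronized buffer is only a teaching simplification; a correct version needs the synchronization tools of a later chapter.

  public class Factory {
      public static void main(String[] args) {
          Buffer buffer = new BoundedBuffer(); // shared by both threads

          // producer: inserts the integers 0..9 into the shared buffer
          Thread producer = new Thread(() -> {
              for (int i = 0; i < 10; i++)
                  buffer.insert(i);
          });

          // consumer: removes and prints ten items
          Thread consumer = new Thread(() -> {
              for (int i = 0; i < 10; i++)
                  System.out.println("consumed " + buffer.remove());
          });

          producer.start();
          consumer.start();
      }
  }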

  26. Sec 4.5: Interprocess Communication (IPC)
  • Mechanism for processes to communicate and to synchronize their actions
  • Message system – processes communicate with each other without resorting to shared variables
  • IPC facility provides two operations:
  • send(message) – message size fixed or variable
  • receive(message)
  • If P and Q wish to communicate, they need to:
  • establish a communication link between them
  • exchange messages via send/receive
  • Implementation of communication link:
  • physical (e.g., shared memory, hardware bus)
  • logical (e.g., logical properties)

  27. Implementation Questions
  • How are links established?
  • Can a link be associated with more than two processes?
  • How many links can there be between every pair of communicating processes?
  • What is the capacity of a link?
  • Is the size of a message that the link can accommodate fixed or variable?
  • Is a link unidirectional or bi-directional?
  Methods for logically implementing a link and the send/receive operations:
  * Direct or indirect communication
  * Symmetric or asymmetric addressing
  * Automatic or explicit buffering
  * Send by copy or send by reference
  * Fixed-sized or variable-sized messages

  28. Direct Communication
  • Processes must name each other explicitly:
  • send(P, message) – send a message to process P
  • receive(Q, message) – receive a message from process Q
  • Properties of the communication link:
  • Links are established automatically between every pair of processes that want to communicate; the processes need to know each other's identity to communicate
  • A link is associated with exactly one pair of communicating processes
  • Between each pair there exists exactly one link
  • The link may be unidirectional, but is usually made bi-directional by the OS
  • Addressing is symmetric: sender and receiver each name the other

  29. Indirect Communication
  • Messages are sent to and received from mailboxes (also referred to as ports, or global ports)
  • Each mailbox has a unique id
  • Processes can communicate only if they share a mailbox
  • Properties of the communication link:
  • A link is established only if the processes share a common mailbox
  • A link may be associated with many processes
  • Each pair of processes may share several communication links (mailboxes)
  • A link may be unidirectional or bi-directional

  30. Indirect Communication
  • Operations:
  • create a new mailbox
  • send and receive messages through the mailbox
  • destroy a mailbox
  • Primitives are defined as:
  • send(A, message) – send a message to mailbox A
  • receive(A, message) – receive a message from mailbox A
  • A mailbox may be owned by a process or by the OS
  • Owned by a process (the mailbox is part of the address space of that process): the owner can only receive messages through the mailbox; the user can only send to it
  • Owned by the OS: the OS must provide a mechanism for a process to perform the operations shown above. The process that creates the mailbox becomes the owner by default; ownership may be passed to other processes through system calls.
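  A hypothetical sketch of OS-owned mailboxes as named message queues (class and method names are illustrative):

  import java.util.Map;
  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.ConcurrentHashMap;
  import java.util.concurrent.LinkedBlockingQueue;

  public class MailboxSystem {
      private final Map<String, BlockingQueue<Object>> mailboxes = new ConcurrentHashMap<>();

      public void create(String id) {                 // create a new mailbox
          mailboxes.put(id, new LinkedBlockingQueue<>());
      }
      public void send(String id, Object msg) throws InterruptedException {
          mailboxes.get(id).put(msg);                 // send a message to mailbox id
      }
      public Object receive(String id) throws InterruptedException {
          return mailboxes.get(id).take();            // receive a message from mailbox id
      }
      public void destroy(String id) {                // destroy a mailbox
          mailboxes.remove(id);
      }
  }

  Any process holding the mailbox id can send or receive through it, so a link may be associated with many processes, as the slide notes.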

  31. Indirect Communication
  • Mailbox sharing: suppose P1, P2, and P3 share mailbox A; P1 sends, while P2 and P3 receive. Who gets the message?
  • Solutions:
  • Allow a link to be associated with at most two processes
  • Allow only one process at a time to execute a receive operation
  • Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was

  32. Synchronization
  • Message passing may be either blocking or non-blocking
  • Blocking is considered synchronous:
  • Blocking send has the sender block until the message is received
  • Blocking receive has the receiver block until a message is available
  • Non-blocking is considered asynchronous:
  • Non-blocking send has the sender send the message and then continue
  • Non-blocking receive has the receiver receive either a valid message or null
  When both send and receive are blocking, we have a rendezvous between sender and receiver.
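  Java's BlockingQueue offers both styles, which makes the four combinations easy to see in one sketch (a capacity-1 queue stands in for the link):

  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;

  public class SendReceiveDemo {
      public static void main(String[] args) throws InterruptedException {
          BlockingQueue<String> link = new LinkedBlockingQueue<>(1); // link holds one message

          link.put("hello");                  // blocking send: waits while the link is full
          boolean sent = link.offer("world"); // non-blocking send: returns false here, link is full

          String msg = link.take();           // blocking receive: waits until a message is available
          String none = link.poll();          // non-blocking receive: returns null here, link is empty

          System.out.println(sent + " " + msg + " " + none); // prints: false hello null
      }
  }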

  33. Buffering
  Queue of messages attached to the link; implemented in one of three ways:
  1. Zero capacity – 0 messages; sender must wait for receiver (rendezvous)
  2. Bounded capacity – finite length of n messages; sender must wait if the link is full
  3. Unbounded capacity – infinite length; sender never waits
  Producer-Consumer Example: see p. 123-125 & p. 166-167 for Java code
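  The three capacities map directly onto standard java.util.concurrent queues; a sketch:

  import java.util.concurrent.ArrayBlockingQueue;
  import java.util.concurrent.BlockingQueue;
  import java.util.concurrent.LinkedBlockingQueue;
  import java.util.concurrent.SynchronousQueue;

  public class BufferingDemo {
      public static void main(String[] args) {
          // 1. zero capacity: put() blocks until a receiver take()s (rendezvous)
          BlockingQueue<String> zero = new SynchronousQueue<>();

          // 2. bounded capacity: put() blocks only when n = 10 messages are queued
          BlockingQueue<String> bounded = new ArrayBlockingQueue<>(10);

          // 3. unbounded capacity (limited only by memory): put() never waits
          BlockingQueue<String> unbounded = new LinkedBlockingQueue<>();
      }
  }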

  34. Sec 4.6: Client-Server Communication
  • Sockets
  • Remote Procedure Calls
  • Remote Method Invocation (Java)

  35. Sockets
  • A socket is defined as an endpoint for communication
  • Concatenation of IP address and port
  • The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8
  • Communication consists of a pair of sockets
  • Java example (client side):

Socket s = new Socket("127.0.0.1", 47301);
DataOutputStream dos = new DataOutputStream(s.getOutputStream());
DataInputStream dis = new DataInputStream(s.getInputStream());
// dos: for writing, dis: for reading
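  A matching server-side sketch (the port number and the date payload are illustrative, not the textbook's listing):

  import java.io.DataOutputStream;
  import java.io.IOException;
  import java.net.ServerSocket;
  import java.net.Socket;

  public class DateServer {
      public static void main(String[] args) throws IOException {
          ServerSocket server = new ServerSocket(47301);   // listen on port 47301
          while (true) {
              Socket client = server.accept();             // block until a client connects
              DataOutputStream dos = new DataOutputStream(client.getOutputStream());
              dos.writeBytes(new java.util.Date() + "\n"); // write one line to the client
              client.close();
          }
      }
  }

  The client from the slide above would read this reply through its DataInputStream.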

  36. Socket Communication

  37. Remote Procedure Calls
  • Remote procedure call (RPC) abstracts procedure calls between processes on networked systems.
  • Stubs – client-side proxy for the actual procedure on the server.
  • The client-side stub locates the server and marshals the parameters.
  • The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server.

  38. Execution of RPC

  39. Remote Method Invocation
  • Remote Method Invocation (RMI) is a Java mechanism similar to RPCs.
  • RMI allows a Java program on one machine to invoke a method on a remote object.
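  A minimal RMI sketch (the interface name, registry port, and binding name are all hypothetical). The remote interface must extend java.rmi.Remote, and every remote method declares RemoteException:

  import java.rmi.Remote;
  import java.rmi.RemoteException;

  public interface MessageServer extends Remote {
      String getMessage() throws RemoteException;
  }

  A client then looks the object up in the RMI registry and invokes the method as if it were local; parameters and return values are marshalled automatically (next slide):

  import java.rmi.registry.LocateRegistry;
  import java.rmi.registry.Registry;

  public class RMIClient {
      public static void main(String[] args) throws Exception {
          Registry registry = LocateRegistry.getRegistry("localhost", 1099);
          MessageServer server = (MessageServer) registry.lookup("MessageServer");
          System.out.println(server.getMessage()); // remote invocation
      }
  }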

  40. Marshalling Parameters

  41. Threads
  • A thread (or lightweight process) is a basic unit of CPU utilization; it consists of:
  • a program counter
  • a register set
  • stack space
  • A thread shares with its peer threads its code section, data section, and operating-system resources; collectively these are known as a task
  • A traditional or heavyweight process is equal to a task with only one thread

  42. Threads (Cont.)
  • In a multi-threaded task, while one server thread is blocked and waiting, a second thread in the same task can run.
  • Cooperation of multiple threads in the same job confers higher throughput and improved performance.
  • Applications that require sharing a common buffer (i.e., producer-consumer) benefit from thread utilization.
  • Threads provide a mechanism that allows sequential processes to make blocking system calls while also achieving parallelism.
  • Kernel-supported threads (Mach and OS/2).
  • User-level threads: supported above the kernel, via a set of library calls at the user level (Project Andrew from CMU).
  • A hybrid approach implements both user-level and kernel-supported threads (Solaris 2).
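  A short Java sketch of two threads in one task sharing the data section; the race on the shared counter is the point: each thread has its own stack and program counter, but they share data.

  public class SharedCounter {
      static int hits = 0; // data section: shared by all threads in this task

      public static void main(String[] args) throws InterruptedException {
          Runnable work = () -> {
              for (int i = 0; i < 1000; i++)
                  hits++;  // both threads update the same variable
          };
          Thread t1 = new Thread(work);
          Thread t2 = new Thread(work);
          t1.start(); t2.start();
          t1.join(); t2.join();

          // without synchronization the result may be less than 2000 (a race condition)
          System.out.println("hits = " + hits);
      }
  }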

  43. Multiple Threads within a Task

  44. Thread Support in Solaris 2
  • Solaris 2 is a version of UNIX with support for threads at the kernel and user levels, symmetric multiprocessing, and real-time scheduling.
  • LWP (lightweight process) – an intermediate level between user-level threads and kernel-level threads.
  • Resource needs of the thread types:
  • Kernel thread: a small data structure and a stack; thread switching does not require changing memory-access information – relatively fast.
  • LWP: a PCB with register data, accounting and memory information; switching between LWPs is relatively slow.
  • User-level thread: needs only a stack and a program counter; no kernel involvement means fast switching. The kernel only sees the LWPs that support user-level threads.

  45. Solaris 2 Threads
