
Processes (and Threads)



  1. Silberschatz, ch. 3 p.81 - 116 Processes (and Threads) • Processes • Process features, scheduling • Inter-process communication • Threads • Classical IPC/Thread Problems

  2. [Diagram: layered OS structure. User and application processes at the top; below them, utilities (shells, editors, compilers) and libraries (e.g. libx.a); these call the Application Programming Interface (API), e.g. open, close, read, write, fork, exec, kill; the OS kernel provides login/authentication (passwords), memory management, the file system, access control, process/thread management, the interrupt system, device drivers, networking (TCP/IP, PPP, ...), and the I/O system; underneath is the hardware (centralized or distributed).]

  3. Process • A process is a program in “execution”. A program is a passive entity (a specification), while a process is an active entity (an execution of that specification). • A process needs resources to accomplish its task (it is the unit of work within the system) • CPU, memory, I/O, files • Initialization data • Process termination requires reclaiming any reusable resources • Typically a system has many processes, some user and some kernel, running concurrently on one or more CPUs • Concurrency is achieved by multiplexing the CPUs among the processes / threads

  4. Lec1: OS Process Management The operating system is responsible for the following activities in connection with process management: • Creating and deleting both user and system processes • Suspending and resuming processes • Providing mechanisms for • process communication • process synchronization • process deadlock handling

  5. The Process States As a process executes, it changes state: • new: Process (parameters) initialized • ready: Waiting to be assigned to a CPU • running: Instructions are being executed • waiting (blocked): Waiting for data/I/O or events • terminated: Finished execution Process termination conditions: - Normal exit (voluntary) - Error exit (voluntary) - Fatal error (involuntary) - Killed by another process (involuntary)

  6. Multiprogramming Example [Diagram: processes P1–P4 share the CPU, each running during its own time slice.]

  7. Process Control Block (PCB) Information associated with each process: • Process state • Program counter/Stack Pointer • CPU registers • CPU scheduling information • Memory-management information • Resource accounting information • I/O status information
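
As a rough illustration of what a PCB holds, here is a sketch in C; the field and type names (pcb_t, proc_state_t, and so on) are made up for illustration and do not reflect any particular kernel's layout.

    /* Illustrative PCB layout (hypothetical names, not a real kernel's struct). */
    typedef enum { NEW, READY, RUNNING, WAITING, TERMINATED } proc_state_t;

    typedef struct pcb {
        int            pid;             /* unique process ID                    */
        proc_state_t   state;           /* current process state                */
        void          *program_counter; /* saved PC at last context switch      */
        void          *stack_pointer;   /* saved SP at last context switch      */
        unsigned long  registers[16];   /* saved general-purpose CPU registers  */
        int            priority;        /* CPU-scheduling information           */
        void          *page_table;      /* memory-management information        */
        unsigned long  cpu_time_used;   /* resource-accounting information      */
        int            open_files[16];  /* I/O status: open file descriptors    */
        struct pcb    *next;            /* link for ready/device queues         */
    } pcb_t;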

  8. CPU Switch From Process to Process [Diagram: memory holds the OS and processes 0 and 1, each with its own global variables; the CPU is switched between them.]

  9. Implementation of Processes: Process Table [Table: fields of a process table entry; one entry per process.]

  10. Process Scheduling? • Lowest layer of process-structured OS • handles interrupts, scheduling • Above that layer are sequential processes

  11. (Multi-level) Process Scheduling Queues • Job queue – set of *all* processes in the system • Ready queue – set of all processes residing in main memory, ready and waiting to execute • Device queues – set of processes waiting for an I/O device • Processes migrate among the various queues
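
In practice each of these queues is typically just a linked list of PCBs, and a process "migrates" by being unlinked from one list and linked onto another. A minimal sketch, assuming the trimmed-down pcb_t and the helper names enqueue/dequeue below (all illustrative):

    #include <stdio.h>

    /* Minimal PCB with a queue link (echoing the earlier PCB sketch). */
    typedef struct pcb {
        int pid;
        struct pcb *next;
    } pcb_t;

    typedef struct { pcb_t *head, *tail; } queue_t;  /* one per device, plus the ready queue */

    void enqueue(queue_t *q, pcb_t *p) {             /* add a process at the tail */
        p->next = NULL;
        if (q->tail) q->tail->next = p; else q->head = p;
        q->tail = p;
    }

    pcb_t *dequeue(queue_t *q) {                     /* remove a process from the head */
        pcb_t *p = q->head;
        if (p) { q->head = p->next; if (!q->head) q->tail = NULL; }
        return p;
    }

    int main(void) {
        queue_t ready = {0};
        pcb_t a = { .pid = 1 }, b = { .pid = 2 };
        enqueue(&ready, &a);                         /* processes become ready          */
        enqueue(&ready, &b);
        printf("dispatch pid %d\n", dequeue(&ready)->pid);  /* scheduler picks the head */
        return 0;
    }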

  12. Ready Queue + I/O Device Queues

  13. Representation of Process Scheduling

  14. Schedulers • Long-term scheduler (or job scheduler) – selects which processes should be brought into the ready queue from the mass-storage repository • Short-term scheduler (or CPU scheduler) – selects which process from the ready queue should be executed next and allocates the CPU • Windows/Unix skip the long-term scheduler and put every new process in memory for the short-term scheduler: simple, but the stability of load handling/balancing suffers!

  15. Schedulers • Short-term scheduler is invoked very frequently (milliseconds) ⇒ must be fast • Long-term scheduler is invoked very infrequently (seconds, minutes) ⇒ may be slow • The long-term scheduler controls the degree of multiprogramming • Processes can be described as either: • I/O-bound process – spends more time doing I/O than computations, only short (or infrequent) CPU bursts • CPU-bound process – spends more time doing computations; only short (or infrequent) I/O bursts

  16. Addition of Medium-Term Scheduling Provision for swap-in/swap-out of processes to control the degree of multiprogramming (swapping)

  17. Context Switch • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process • Context-switch time is overhead; the system does no useful work while switching • Time dependent on hardware support

  18. Process Issues • Management • Creation • Forking • Termination • Communication • Co-operating processes • Inter-process communication • …

  19. Process Creation • A parent process creates child processes, which in turn create other processes, forming a tree of processes (each with a unique ID: PID); inter-process ordering issues are considered later • Issue/Options: Resource sharing • Parent and children share all resources • Children share a subset of the parent’s resources • Parent and child share no resources • Issue/Options: Execution • Parent and children execute concurrently • Parent waits until some/all children terminate • Issue/Options: Address space • Child is a duplicate of the parent • Child has another program loaded into it • UNIX • fork system call creates a new (clone) process • exec system call is used after a fork to replace the process’ memory space with a new program

  20. Tree of processes on a typical Solaris system

  21. Process Hierarchies • Parent creates a child process; a child process can create its own processes (differing in address spaces, different identity w.r.t. local changes, sharing of memory image/files) • Forms a hierarchy • UNIX calls this a “process group” • The parent–child relation cannot be dropped • Parent and child maintain distinct address spaces; initially the child inherits/shares the parent’s address space • Windows has no concept of process hierarchy • all processes are created equal, sans heritage relations (though a parent can control a child using a “handle”) • distinct address space from the start

  22. Process Creation (UNIX) • fork system call creates a new (clone) process • exec system call is used after a fork to replace the process’ memory space with a new program

  23. C Program: Forking a Separate Process (UNIX)

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid;
        pid = fork();                      /* fork another process */
        if (pid < 0) {                     /* error occurred */
            fprintf(stderr, "Fork Failed");
            exit(-1);
        } else if (pid == 0) {             /* child process */
            execlp("/bin/ls", "ls", NULL);
        } else {                           /* parent process */
            wait(NULL);                    /* parent waits for the child to complete */
            printf("Child Complete");
            exit(0);
        }
    }

  24. Process Termination • Process executes its last statement and asks the operating system to delete it (exit()) • Output data from child to parent (via wait) • Process’ resources are deallocated by the operating system • Parent may terminate execution of child processes (abort(), TerminateProcess()) if • Child has exceeded allocated resources • Task assigned to child is no longer required • Parent is exiting • Some operating systems do not allow a child to continue if its parent terminates (zombie control) • All children terminated - cascading termination
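
As a small sketch of the exit()/wait() hand-off described above (not from the slides): the child terminates with a status code, and the parent reclaims it and reads that status via wait().

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    int main(void) {
        pid_t pid = fork();
        if (pid == 0) {
            exit(42);                      /* child: terminate with status 42           */
        } else if (pid > 0) {
            int status;
            wait(&status);                 /* parent: reclaim the child, get its status */
            if (WIFEXITED(status))
                printf("child exited with %d\n", WEXITSTATUS(status));
        }
        return 0;
    }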

  25. Cooperating Processes • Independent process cannot affect or be affected by the execution of another process • Cooperating process can affect or be affected by the execution of another process • Advantages of process cooperation • Information sharing • Computation speed-up: subtask model for distributed systems • Modularity • Convenience: subtask for multiprogramming (not multiprocessing)

  26. Inter-process Communication (IPC) Models Message Passing Shared Memory

  27. Shared-Memory: Producer-Consumer Problem • Paradigm for cooperating processes: a producer process produces information (e.g. a print program) that is consumed by a consumer process (e.g. the printer) • unbounded buffer places no practical limit on the size of the buffer • bounded buffer assumes that there is a fixed buffer size
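
A minimal sketch of the classic bounded-buffer scheme in C follows; BUFFER_SIZE, item_t, and the busy-waiting loops are illustrative, and in a real system the buffer and the in/out indices would live in a shared-memory region visible to both processes.

    #include <stdio.h>

    #define BUFFER_SIZE 10              /* illustrative size      */
    typedef int item_t;                 /* hypothetical item type */

    /* Shared between producer and consumer (e.g. via a shared-memory segment). */
    item_t buffer[BUFFER_SIZE];
    int in = 0;                         /* next free slot */
    int out = 0;                        /* next full slot */

    void produce(item_t item) {
        while (((in + 1) % BUFFER_SIZE) == out)
            ;                           /* buffer full: busy-wait  */
        buffer[in] = item;
        in = (in + 1) % BUFFER_SIZE;
    }

    item_t consume(void) {
        while (in == out)
            ;                           /* buffer empty: busy-wait */
        item_t item = buffer[out];
        out = (out + 1) % BUFFER_SIZE;
        return item;
    }

    int main(void) {
        produce(7);
        printf("consumed %d\n", consume());
        return 0;
    }

This circular buffer can hold at most BUFFER_SIZE - 1 items and is only safe for a single producer and a single consumer; coordinating concurrent access is exactly the ordering/synchronization question raised on the next slide.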

  28. Process Ordering? • Concurrent consumer & producer access? • Serialization: RAW (read-after-write) models! • Precedence sequences

  29. Inter-process Communication (IPC) • Mechanism for processes to communicate and synchronize their actions [shared memory: PC model; message passing: IPC, RPC] • Message passing – processes communicate with each other without resorting to shared variables • IPC facility provides two operations: • send(message) – message size fixed or variable • receive(message) • If P and Q wish to communicate, they need to: • establish a communication link between them • exchange messages via send/receive • Implementation of communication link • physical (e.g., shared memory, hardware bus) • logical (e.g., logical properties)
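
One concrete communication link on UNIX is a pipe; a minimal sketch follows (the message and the parent/child roles are illustrative), with write() playing the part of send and read() the part of receive over a unidirectional link.

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void) {
        int fd[2];
        char buf[64];
        pipe(fd);                                     /* create a unidirectional link    */
        if (fork() == 0) {                            /* child = receiver                */
            close(fd[1]);                             /* close the unused write end      */
            ssize_t n = read(fd[0], buf, sizeof buf); /* receive(message)                */
            printf("received %zd bytes: %s\n", n, buf);
            return 0;
        }
        close(fd[0]);                                 /* parent = sender: close read end */
        write(fd[1], "hello", 6);                     /* send(message)                   */
        close(fd[1]);
        wait(NULL);
        return 0;
    }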

  30. Link Implementation Questions • How are links established? • Can a link be associated with more than two processes? • How many links can there be between every pair of communicating processes? • What is the capacity of a link? • Is the size of a message that the link can accommodate fixed or variable? • Is a link unidirectional or bi-directional? • Direct Communication • Indirect Communication • Synchronization Aspects

  31. Direct Communication • Processes must name each other explicitly: • A: send (P, message) – A sends a message to process P • B: receive(Q, message) – B receives a message from process Q • Symmetric: send(P,msg); receive(Q, msg) • Asymmetric: send(P, msg); receive(pid, msg) • Properties of communication link • Links are established automatically • Link associated with exactly one pair of communicating processes • Between each pair there exists exactly one link • The link may be unidirectional, but is usually bi-directional • Queue models (zero, bounded, unbounded)

  32. Indirect Communication • Messages are directed to and received from mailboxes (Unix) … also referred to as ports • Each mailbox (X) has a unique id: send(X,msg), receive(X,msg) • Processes can communicate only if they share a mailbox • Unix pipes: cat xyz | lpr ; implicit FIFO mailbox • Properties of communication link • Link established only if processes share a common mailbox • A link may be associated with many processes • Each pair of processes may share several communication links • Link may be unidirectional or bi-directional

  33. Indirect Communication • Operations • create a new mailbox • send and receive messages through the mailbox • destroy a mailbox • Primitives are defined as: send(A, message) – send a message to mailbox A receive(A, message) – receive a message from mailbox A
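
POSIX message queues behave much like named mailboxes, so the create / send / receive / destroy operations above map roughly onto mq_open, mq_send, mq_receive, and mq_unlink. A minimal single-process sketch, with the mailbox name "/demo_mbox" and its attributes chosen purely for illustration (on Linux, link with -lrt):

    #include <stdio.h>
    #include <fcntl.h>
    #include <mqueue.h>

    int main(void) {
        /* Attributes for the hypothetical mailbox "/demo_mbox". */
        struct mq_attr attr = { .mq_maxmsg = 10, .mq_msgsize = 64 };
        mqd_t mbox = mq_open("/demo_mbox", O_CREAT | O_RDWR, 0600, &attr);

        mq_send(mbox, "hello", 6, 0);                 /* send(X, msg): deposit into mailbox    */

        char buf[64];
        unsigned prio;
        mq_receive(mbox, buf, sizeof buf, &prio);     /* receive(X, msg): any sharer may read  */
        printf("got: %s\n", buf);

        mq_close(mbox);
        mq_unlink("/demo_mbox");                      /* destroy the mailbox                   */
        return 0;
    }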

  34. Indirect Communication • Mailbox sharing • P1, P2, and P3 share mailbox A • P1 sends; P2 and P3 execute receive • Who gets the message? • Solution choices based on whether we • allow a link to be associated with at most two processes • allow only one process at a time to execute a receive operation • allow the system to select the receiver arbitrarily. Sender is notified who the receiver was.

  35. Issues: Synchronization • Message passing may be either blocking or non-blocking • Blocking is considered synchronous • Blocking send has the sender block until the message is received • Blocking receive has the receiver block until a message is available • “Rendezvous” across blocking send and receive • is this similar to shared memory communication? • Non-blocking is considered asynchronous • Non-blocking send has the sender send the message and continue • Non-blocking receive has the receiver receive a valid message or null
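
Continuing the hypothetical "/demo_mbox" mailbox from the earlier sketch, a non-blocking receive can be approximated by opening the queue with O_NONBLOCK, so an empty mailbox returns immediately (errno set to EAGAIN) instead of blocking the receiver:

    #include <stdio.h>
    #include <errno.h>
    #include <fcntl.h>
    #include <mqueue.h>
    #include <sys/types.h>

    int main(void) {
        /* Open the (assumed already-created) mailbox without blocking on receive. */
        mqd_t mbox = mq_open("/demo_mbox", O_RDONLY | O_NONBLOCK);

        char buf[64];
        ssize_t n = mq_receive(mbox, buf, sizeof buf, NULL);
        if (n >= 0)
            printf("got %zd bytes\n", n);             /* a valid message was available      */
        else if (errno == EAGAIN)
            printf("no message, continuing without waiting\n");  /* "null" result          */
        else
            perror("mq_receive");

        mq_close(mbox);
        return 0;
    }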

  36. Issues: Buffering (Direct or Indirect) • Messages of communicating processes reside in temporary queues. Queue of messages attached to the link; implemented in one of three ways: 1. Zero capacity – 0 messages – no message waiting possible; sender must wait for receiver (rendezvous) 2. Bounded capacity – finite length of n messages; sender must wait (block) if the link is full 3. Unbounded capacity – infinite length; sender never waits (blocks)

  37. WinXP IPC Implementation (Local Procedure Call: LPC) • Client opens a handle to the subsystem's connection port object • Client sends a connection request • Server creates two private communication ports and returns a handle to one of them to the client • Client and server use the corresponding port handle to send messages or callbacks (and to listen to replies) [Diagram: client and server, the connection port object, each side's handle to its communication port, and a shared object.]

  38. Client-Server Communication • (Shared Memory Communication) • (Message Passing) • Sockets • Remote Procedure Calls • Remote Method Invocation (Java)

  39. Sockets • Socket: an endpoint for communication. A pair of communicating processes employ a socket at each end (one per process) • Identified by the concatenation of an IP address and a port • The socket 161.25.19.8:1625 refers to port 1625 on host 161.25.19.8 • Communication takes place between a pair of sockets
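
A minimal client-side sketch in C of creating a socket and connecting to the slide's example address 161.25.19.8:1625 (the address, port, and message are placeholders, not a real service):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <arpa/inet.h>
    #include <sys/socket.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);   /* create the local endpoint      */

        struct sockaddr_in server = {0};
        server.sin_family = AF_INET;
        server.sin_port   = htons(1625);              /* port from the slide's example  */
        inet_pton(AF_INET, "161.25.19.8", &server.sin_addr);

        if (connect(sock, (struct sockaddr *)&server, sizeof server) == 0) {
            const char *msg = "hello";
            send(sock, msg, strlen(msg), 0);          /* talk over the socket pair      */
        } else {
            perror("connect");
        }
        close(sock);
        return 0;
    }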

  40. Socket Communication

  41. Remote Procedure Calls • Remote procedure call (RPC) abstracts procedure calls between processes on networked systems. • Stubs – client-side proxy for the actual procedure on the server. • The client-side stub locates the server and marshalls the parameters. • The server-side stub receives this message, unpacks the marshalled parameters, and performs the procedure on the server.
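
What a stub's marshalling amounts to can be sketched in plain C: pack the parameters into a byte buffer in network byte order on the client side, then unpack them and perform the call on the server side. The add(a, b) procedure and function names below are hypothetical, and a real RPC system would also handle transport, naming, and failures:

    #include <stdio.h>
    #include <string.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    /* Client-stub side: pack an illustrative add(a, b) request into buf
       before handing it to the transport; returns the number of bytes packed. */
    size_t marshal_add_request(uint8_t *buf, uint32_t a, uint32_t b) {
        uint32_t na = htonl(a), nb = htonl(b);        /* network byte order */
        memcpy(buf,     &na, sizeof na);
        memcpy(buf + 4, &nb, sizeof nb);
        return 8;
    }

    /* Server-stub side: unpack the same message and perform the procedure. */
    uint32_t unmarshal_and_add(const uint8_t *buf) {
        uint32_t na, nb;
        memcpy(&na, buf,     sizeof na);
        memcpy(&nb, buf + 4, sizeof nb);
        return ntohl(na) + ntohl(nb);
    }

    int main(void) {
        uint8_t msg[8];
        marshal_add_request(msg, 2, 40);
        printf("server computed: %u\n", unmarshal_and_add(msg));  /* prints 42 */
        return 0;
    }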

  42. Client-Server Model Behind RPC [Diagram: 1. server registers with the name server; 2. client inquires at the name server; 3. name server sends the right (binding) to the client; 4. client sends the request to the server; 5. server sends the reply.]

  43. Execution of RPC

  44. Remote Method Invocation • Remote Method Invocation (RMI) is a Java mechanism similar to RPCs. • RMI allows a Java program on one machine to invoke a method on a remote object.

  45. RMI: Marshalling Parameters

  46. Summary • Process • Definition, PCB • States (&changing states) • Creation, forking, termination • Process Queues • I/O queue • Ready queue • Scheduling • Long-term • Short-term • (Medium-term) • Communication • Shared memory • Message passing • Links • Naming, synchronization, buffering • May use • Sockets • RPC • RMI (Java)
