CHAPTER 4 - PROCESSES and CHAPTER 5 - THREADS

CGS 3763 - Operating System Concepts

UCF, Spring 2004

Process Concept
  • An operating system executes a variety of programs:
    • Batch system – jobs
    • Time-shared systems – user programs or tasks
  • Textbook uses the terms job and process almost interchangeably.
    • For this class assume a job is a program in executable form waiting to be brought into the computer system
    • A process is a program in execution and includes:
      • Process Control Block
      • Program Counter
    • A process is created when it is assigned memory and a PCB is created (by the OS) to hold its status
Process States
  • As a process executes, it moves through “states”:
    • new: The job is waiting to be turned into a process
      • during process creation, memory is allocated for the job’s instruction and data segments and a PCB is populated.
    • ready: The process is waiting to be assigned the CPU
    • running: Instructions are being executed by the CPU
      • the # of processes in the running state can be no greater than the # of processors (CPUs) in the system
    • waiting: The process is waiting for some event to occur
      • often associated with explicit requests for I/O operations
    • terminated: The process has finished execution
      • resources assigned to the process are reclaimed
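As a concrete (if minimal) illustration of these transitions, the POSIX sketch below creates a child process with fork() and reaps it with waitpid(). The program itself is invented for this example; only the system calls are standard.

```c
/* Sketch (not from the slides): a parent creates a child (new -> ready),
 * the child runs (running) and exits (terminated), and the parent's
 * waitpid() lets the OS reclaim the child's resources. */
#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    pid_t pid = fork();              /* OS allocates memory and a PCB for the child */
    if (pid < 0) {
        perror("fork");
        return 1;
    }
    if (pid == 0) {                  /* child: now in the running state */
        printf("child %d executing\n", (int)getpid());
        return 0;                    /* child moves to terminated */
    }
    int status;
    waitpid(pid, &status, 0);        /* parent blocks (waiting) until the child ends */
    printf("child exited with status %d\n", WEXITSTATUS(status));
    return 0;
}
```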
Queue or State?
  • Some states in the previous diagram are actually queues
    • New or Job queue – set of all processes waiting to enter the system.
    • Ready queue – set of all processes residing in main memory, ready and waiting to execute.
    • Waiting - In this class, we have somewhat abstracted away the idea of a queue for this state. In reality, processes may be placed in a device queue to wait for access to a particular I/O device.
  • Processes are selected from queued states by schedulers
Process Schedulers
  • Short-term scheduler (or CPU scheduler)
    • selects which process in the ready queue should be executed next and allocates CPU.
    • invoked very frequently (milliseconds), so it must be fast.
  • Long-term scheduler (or job scheduler)
    • selects which processes should be created and brought into the ready queue from the job queue.
    • invoked infrequently (seconds, minutes), so it may be slow.
    • controls the degree of multiprogramming.
  • Medium-term scheduler
    • Helps manage process mix by swapping in/out processes
Process Mix
  • Processes can be described as either:
    • I/O-bound process
      • spends more time doing I/O than computations
      • many short CPU bursts.
    • CPU-bound process
      • spends more time doing computations
      • few very long CPU bursts.
  • Need to strike a balance between the two.
    • Otherwise either the CPU or I/O devices underutilized.
Moving Between States
  • Events as well as schedulers can cause a process to move from one state to another
    • Trap - during execution, the process encounters an error. The OS traps the error and may abnormally end (abend) the process, moving it from running to terminated.
    • SVC(end) - the process ends voluntarily and moves from running to terminated.
    • SVC(I/O) - the process requests that the OS perform some I/O operation. If the I/O is synchronous, the process moves from running to waiting.
    • I/O Hardware Interrupt - signals the completion of some I/O operation for which a process was waiting. Process can be moved from waiting to ready.
    • Timer Interrupt - signals that a process has used up its current time slice (timesharing systems). Process returned from running to ready state to await its next turn.
Resource Allocation
  • A process requires certain resources in order to execute
    • Memory, CPU Time, I/O Devices, etc.
  • Resources can be allocated in one of two ways:
    • Static Allocation - resources assigned at start of process, released during termination
      • Can cause reduction in throughput
    • Dynamic Allocation - resources assigned as needed while process running
      • Can cause deadlock
  • Resources can be “shared” to reduce conflicts
    • Example: Print spooling
Process Control Block (PCB)
  • Saves the status of a process
    • Process state
    • Program counter
    • CPU registers
    • Scheduling information
      • e.g., priority
    • Memory-management information
      • e.g., Base and Limit Registers
    • Accounting information
    • I/O status information
    • Pointer to next PCB
  • PCB generated during process creation
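The list above can be sketched as a C struct. The field names below are purely illustrative; no real kernel lays out its PCB exactly this way.

```c
/* Hypothetical PCB layout; field names are illustrative only. */
#include <stdint.h>

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    enum proc_state state;           /* process state */
    uint64_t        pc;              /* saved program counter */
    uint64_t        regs[16];        /* saved CPU registers */
    int             priority;        /* scheduling information */
    uint64_t        base, limit;     /* memory-management: base and limit registers */
    uint64_t        cpu_time_used;   /* accounting information */
    int             open_files[16];  /* I/O status information */
    struct pcb     *next;            /* pointer to next PCB in a queue */
};
```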
Context Switching
  • When CPU switches to another process, the system must save the state of the old process and load the saved state for the new process.
  • Context switch may involve more than one change in the program counter
    • Process 1 executing
    • OS manages the switch
    • Process 2 starts executing
  • Context-switch time is overhead; the system does no useful work while switching.
  • Time dependent on hardware support.
Threads (Lightweight Process)
  • Used to reduce context switching overhead
  • Allows sharing of instructions, data, files and other resources among several related tasks
  • Threads also share a common PCB
  • Each thread has its own “thread descriptor”
    • Program Counter
    • Register Set
    • Stack
  • Control of CPU can be shared among threads associated with the same process without a full-blown context switch.
    • Only change of PC and registers required.
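A minimal POSIX Pthreads sketch of this idea: two threads of one process increment the same global counter, sharing the process's data while each keeps its own stack and registers. The program and its names are invented for illustration.

```c
/* Two threads share the process's global data; each thread has its own
 * stack and register set. Compile with: gcc threads_demo.c -pthread */
#include <pthread.h>
#include <stdio.h>

static long shared_counter = 0;                 /* shared data segment */
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);              /* coordinate access to shared data */
        shared_counter++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("counter = %ld\n", shared_counter);  /* both threads saw the same memory */
    return 0;
}
```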
Benefits of Threads
  • Responsiveness
    • Faster due to reduced context switching time
    • Process can continue doing useful work while waiting for some event (isn’t blocked)
  • Resource Sharing (shared memory/code/data)
  • Economy
    • can get more done with same processor
    • less memory required
  • Utilization of multiprocessor architectures
    • different threads can run on different processors
Different Types of Threads
  • User Level Threads
    • thread management done by user-level library
    • e.g., POSIX Pthreads, Mach C-threads, Solaris threads
  • Kernel Level Threads
    • supported by the Kernel
    • e.g., Windows 95/98/NT/2000, Solaris, Linux
Multithreading Models
  • Many-to-One
    • Many user-level threads mapped to single kernel thread.
    • Used on systems that do not support kernel threads.
  • One-to-One
    • Each user-level thread maps to kernel thread.
  • Many-to-Many Model
    • Allows many user level threads to be mapped to many kernel threads.
    • Allows the operating system to create a sufficient number of kernel threads.
Cooperating Processes
  • Independent processes cannot affect or be affected by the execution of another process.
  • Dependent processes can affect or be affected by the execution of another process
    • a.k.a., Cooperating Processes
  • Processes may cooperate for:
    • Information sharing
    • Computation speed-up (Requires 2 or more CPUs)
    • Modularity
    • Convenience
Interprocess Communication (IPC)
  • Mechanism needed for processes to communicate and to synchronize their actions.
    • Shared Memory
      • Tightly coupled systems
      • Single processor systems allowing overlapping base & limit registers
      • Multi-threaded systems (between threads associated with same process)
    • Message Passing
      • Processes communicate with each other without resorting to shared variables.
      • Uses send and receive operations to pass information
      • Better for loosely coupled / distributed systems
    • Can use both mechanisms on same system
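As a sketch of the shared-memory option, the POSIX calls below give a parent and child a common region to read and write. The object name "/ipc_demo" is made up for the example, and error checking is omitted.

```c
/* Shared-memory IPC sketch: parent and child map the same region.
 * Compile with: gcc shm_demo.c -lrt  (on Linux) */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    int fd = shm_open("/ipc_demo", O_CREAT | O_RDWR, 0600);  /* name is illustrative */
    ftruncate(fd, 4096);
    char *region = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);

    if (fork() == 0) {                        /* child writes into the shared region */
        strcpy(region, "hello from child");
        return 0;
    }
    wait(NULL);                               /* parent waits, then reads */
    printf("parent read: %s\n", region);
    shm_unlink("/ipc_demo");                  /* clean up the shared object */
    return 0;
}
```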
Message Passing
  • If P and Q wish to communicate, they need to:
    • establish a communication link between them
    • exchange messages via send/receive
  • Implementation of communication link
    • physical (e.g., hardware bus, high-speed network)
    • logical (e.g., direct vs. indirect, other logical properties)
  • Implementation Questions
    • How are links established?
    • Can a link be associated with more than two processes?
    • How many links can there be between every pair of communicating processes?
    • Does the link support variable or fixed size messages?
    • Is a link unidirectional or bi-directional?
Direct Communication
  • Processes must name each other explicitly:
    • send (P, message) – send message to process P
    • receive(Q, message) – receive message from process Q
  • Properties of communication link
    • Links are established automatically.
    • A link is associated with exactly one pair of communicating processes.
    • Between each pair there exists exactly one link.
    • The link may be unidirectional, but is usually bi-directional.
  • Changing name of process causes problems
    • All references to old name must be found & modified
    • Requires re-compilation of affected programs
Indirect Communication
  • Messages are directed to and received from mailboxes (also referred to as ports).
    • Each mailbox has a unique id.
    • Processes can communicate only if they share a mailbox.
  • Properties of communication link
    • Link established only if processes share a common mailbox.
    • A link may be associated with many processes.
    • Each pair of processes may share several communication links (requires multiple mailboxes).
    • Link may be unidirectional or bi-directional.
  • No names to change, more modular
Indirect Communication Operations
  • Create a new mailbox
    • User/Application can create mailboxes through shared memory
    • Otherwise, mailboxes created by OS at the request of a user/application process
  • Send and receive messages through mailbox
  • Destroy a mailbox
    • User/Application process can destroy any mailbox created in shared memory
    • OS can destroy mailbox at request of mailbox owner (a user/application process)
    • OS can destroy unused mailboxes during garbage collection
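These operations map closely onto POSIX message queues, which behave like named mailboxes: mq_open creates or opens one, mq_send and mq_receive pass messages through it, and mq_unlink destroys it. A sketch follows; the queue name "/mbox_demo" is invented and error checking is omitted.

```c
/* Mailbox sketch using POSIX message queues.
 * Compile with: gcc mq_demo.c -lrt  (on Linux) */
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    mqd_t mb = mq_open("/mbox_demo", O_CREAT | O_RDWR, 0600, &attr); /* create mailbox */

    if (fork() == 0) {                                   /* child: sender */
        const char *msg = "message via mailbox";
        mq_send(mb, msg, strlen(msg) + 1, 0);
        return 0;
    }
    char buf[64];
    mq_receive(mb, buf, sizeof buf, NULL);               /* parent: receiver */
    printf("received: %s\n", buf);
    wait(NULL);
    mq_close(mb);
    mq_unlink("/mbox_demo");                             /* destroy the mailbox */
    return 0;
}
```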
Indirect Communication
  • Mailbox sharing
    • P1, P2, and P3 share mailbox A.
    • P1 sends; P2 and P3 receive.
    • Who gets the message?
  • Solutions
    • Allow a link to be associated with at most two processes.
    • Allow only one process at a time to execute a receive operation.
    • Allow the system to select the receiver arbitrarily; the sender is notified who the receiver was.
Message Synchronization
  • Message passing may be either blocking or non-blocking.
  • Blocking is considered synchronous
    • Process must wait until send or receive completed
    • Blocking Send
    • Blocking Receive
  • Non-blocking is considered asynchronous
    • Process can continue executing while waiting for send or receive to complete
    • Non-blocking Send
    • Non-blocking Receive
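With the same POSIX message-queue API used above, the blocking/non-blocking distinction shows up directly: a queue opened normally blocks in mq_receive until a message arrives, while one opened with O_NONBLOCK returns immediately with EAGAIN when the mailbox is empty. The queue name here is again invented for the sketch.

```c
/* Blocking vs. non-blocking receive on an empty POSIX message queue. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 8, .mq_msgsize = 64 };
    char buf[64];

    /* Non-blocking receive: returns at once with EAGAIN if no message is queued. */
    mqd_t mb = mq_open("/sync_demo", O_CREAT | O_RDONLY | O_NONBLOCK, 0600, &attr);
    if (mq_receive(mb, buf, sizeof buf, NULL) == -1 && errno == EAGAIN)
        printf("non-blocking receive: no message, continuing\n");

    /* A blocking receive (a queue opened without O_NONBLOCK) would instead
     * wait here until some sender performed mq_send on the same queue. */
    mq_close(mb);
    mq_unlink("/sync_demo");
    return 0;
}
```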
Buffering
  • Message queues attached to the link can be implemented in one of three ways.
    • Zero capacity – 0 messages
      • Sender must wait for receiver (rendezvous).
    • Bounded capacity – finite length of n messages
      • Sender must wait if link’s message queue full.
    • Unbounded capacity – infinite number of messages
      • Sender never waits.
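The bounded-capacity case can be seen with the same queues: mq_maxmsg fixes the queue length, and once the queue is full a sender either blocks or, with O_NONBLOCK, fails with EAGAIN. A small sketch (queue name invented, error checking omitted):

```c
/* Bounded buffering: a queue created with mq_maxmsg = 2 holds at most
 * two messages; a non-blocking sender sees EAGAIN once it is full. */
#include <errno.h>
#include <fcntl.h>
#include <mqueue.h>
#include <stdio.h>

int main(void)
{
    struct mq_attr attr = { .mq_maxmsg = 2, .mq_msgsize = 32 };
    mqd_t q = mq_open("/buffer_demo", O_CREAT | O_WRONLY | O_NONBLOCK, 0600, &attr);

    for (int i = 0; i < 3; i++) {
        if (mq_send(q, "msg", 4, 0) == -1 && errno == EAGAIN)
            printf("queue full after %d messages: sender must wait\n", i);
    }
    mq_close(q);
    mq_unlink("/buffer_demo");
    return 0;
}
```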