
Final Review

Bernard Chen

Spring 2007

1.1 What is an Operating System?
  • An operating system is a program that manages the computer hardware.
  • It also provides a basis for application programs and acts as an intermediary between the computer user and computer hardware.
Process Concept
  • Early computer systems allowed only one program to run at a time. In contrast, present-day computer systems allow multiple programs to be loaded into memory and executed concurrently.
  • The process concept is what makes this possible
  • Process: a program in execution
Process State
  • new: The process is being created
  • running: Instructions are being executed
  • waiting: The process is waiting for some event to occur
  • ready: The process is waiting to be assigned to a processor
  • terminated: The process has finished execution
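
As a small illustration, the five states can be captured in a C enum inside a much simplified, hypothetical process control block (real PCBs also carry registers, memory maps, open files, and so on):

enum proc_state { NEW, READY, RUNNING, WAITING, TERMINATED };

struct pcb {
    int pid;                 /* process identifier */
    enum proc_state state;   /* e.g. READY while waiting to be assigned a processor */
};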
Schedulers
  • Long-term scheduler

(or job scheduler) – selects which processes should be brought into the ready queue

  • Short-term scheduler

(or CPU scheduler) – selects which process should be executed next and allocates the CPU

Schedulers
  • Sometimes it can be advantageous to remove a process from memory and thus decrease the degree of multiprogramming
  • This scheme is called swapping
Interprocess Communication (IPC)
  • Two fundamental models
  • Shared Memory
  • Message Passing
Communication Models
  • (Figure: (a) message passing, (b) shared memory)
Shared-Memory Parallelization System Example

m_set_procs(number): prepare the given number of child processes for execution

m_fork(function): the child processes execute “function”

m_kill_procs(): terminate the child processes

Real Example

#define ARRAY_SIZE 1000

int nprocs = 4;                  /* shared with sum(), so declared globally */
int global_array[ARRAY_SIZE];

void sum();

int main(int argc, char *argv[])
{
    m_set_procs(nprocs);         /* prepare to launch this many processes */
    m_fork(sum);                 /* fork out processes */
    m_kill_procs();              /* kill activated processes */
    return 0;
}

void sum()
{
    int i, id;
    id = m_get_myid();
    /* each child folds its chunk into the chunk's first element */
    for (i = id*(ARRAY_SIZE/nprocs); i < (id+1)*(ARRAY_SIZE/nprocs); i++)
        global_array[id*(ARRAY_SIZE/nprocs)] += global_array[i];
}

Shared-Memory Systems
  • The producer and consumer must be synchronized, so that the consumer does not try to consume an item that has not yet been produced.
  • Two types of buffer can be used:
  • Unbounded buffer
  • Bounded buffer
Shared-Memory Systems
  • Unbounded buffer: the consumer may have to wait for new items, but the producer can always produce new items
  • Bounded buffer: the consumer has to wait if the buffer is empty; the producer has to wait if the buffer is full
Message-Passing Systems
  • A message-passing facility provides at least two operations: send(message) and receive(message)
MPI Program example

#include "mpi.h"

#include <math.h>

#include <stdio.h>

#include <stdlib.h>

int main (int argc, char *argv[])

{

int id; /* Process rank */

int p; /* Number of processes */

int i,j;

int array_size=100;

int array[array_size]; /* or *array and then use malloc or vector to increase the size */

int local_array[array_size/p];

int sum=0;

MPI_Status stat;

MPI_Comm_rank (MPI_COMM_WORLD, &id);

MPI_Comm_size (MPI_COMM_WORLD, &p);

MPI Program example

    if (id == 0)
    {
        for (i = 0; i < array_size; i++)
            array[i] = i;                       /* initialize array */

        for (i = 1; i < p; i++)                 /* rank 0 keeps chunk 0 for itself */
            MPI_Send(&array[i*(array_size/p)],  /* Start from */
                     array_size/p,              /* Message size */
                     MPI_INT,                   /* Data type */
                     i,                         /* Send to which process */
                     0,                         /* Message tag */
                     MPI_COMM_WORLD);

        for (i = 0; i < array_size/p; i++)      /* rank 0 copies its own chunk */
            local_array[i] = array[i];
    }
    else
        MPI_Recv(&local_array[0], array_size/p, MPI_INT, 0, 0, MPI_COMM_WORLD, &stat);

MPI Program example

    for (i = 0; i < array_size/p; i++)
        sum += local_array[i];                  /* each rank sums its own chunk */

    /* combine the partial sums at rank 0 (separate buffers: MPI forbids aliasing) */
    MPI_Reduce(&sum, &total, 1, MPI_INT, MPI_SUM, 0, MPI_COMM_WORLD);

    if (id == 0)
        printf("%d ", total);

    MPI_Finalize();
    return 0;
}

Thread Overview
  • A thread is a basic unit of CPU utilization.
  • A traditional (single-threaded) process has only one thread of control
  • A multithreaded process can perform more than one task at a time

Example: a word processor may have one thread displaying graphics, another responding to keystrokes, and a third performing spelling and grammar checking
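
As a rough illustration, here is a minimal POSIX threads sketch; the two task functions are hypothetical stand-ins for the word-processor tasks above:

#include <pthread.h>
#include <stdio.h>

/* hypothetical tasks standing in for "display graphics" and "check spelling" */
void *render(void *arg)      { printf("displaying graphics...\n"); return NULL; }
void *spell_check(void *arg) { printf("checking spelling...\n");   return NULL; }

int main(void)
{
    pthread_t t1, t2;
    pthread_create(&t1, NULL, render, NULL);      /* both tasks run concurrently */
    pthread_create(&t2, NULL, spell_check, NULL);
    pthread_join(t1, NULL);                       /* wait for both to finish */
    pthread_join(t2, NULL);
    return 0;
}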

Multithreading Models
  • Support for threads may be provided either at the user level, for user threads, or by the kernel, for kernel threads
  • User threads are supported above kernel and are managed without kernel support
  • Kernel threads are supported and managed directly by the operating system
Multithreading Models
  • Ultimately, there must exist a relationship between user threads and kernel threads
  • User-level threads are managed by a thread library, and the kernel is unaware of them
  • To run on a CPU, a user-level thread must be mapped to an associated kernel-level thread
Many-to-one Model

(Figure: many user threads mapped onto a single kernel thread)

One-to-one Model

(Figure: each user thread mapped to its own kernel thread)

Many-to-many Model

(Figure: many user threads multiplexed onto a smaller or equal number of kernel threads)

CPU Scheduler
  • Whenever the CPU becomes idle, the OS must select one of the processes in the ready queue to be executed
  • The selection process is carried out by the short-term scheduler
Preemptive scheduling vs. non-preemptive scheduling
  • When scheduling takes place only when a process switches from running to the waiting state or terminates (circumstances 1 and 2 in the lecture's list), we say that the scheduling scheme is non-preemptive; otherwise, it is called preemptive
  • Under non-preemptive scheduling, once the CPU has been allocated to a process, the process keeps the CPU until it releases it, either by terminating or by switching to the waiting state (used by Windows 3.x; Windows 95 introduced preemptive scheduling)
Scheduling Criteria

  • CPU utilization – keep the CPU as busy as possible (from 0% to 100%)
  • Throughput – # of processes that complete their execution per time unit
  • Turnaround time – amount of time to execute a particular process
  • Waiting time – amount of time a process has been waiting in the ready queue
  • Response time – amount of time from when a request was submitted until the first response is produced

Scheduling Algorithms
  • First Come First Serve Scheduling
  • Shortest Job First Scheduling
  • Priority Scheduling
  • Round-Robin Scheduling
  • Multilevel Queue Scheduling
  • Multilevel Feedback-Queue Scheduling
First Come First Serve Scheduling (FCFS)

Process   Burst time
P1        24
P2        3
P3        3

First Come First Serve Scheduling
  • Suppose we change the order in which the jobs arrive to P2, P3, P1 (worked numbers below)
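
With the original order P1, P2, P3 (bursts 24, 3, 3), the waiting times are 0, 24, and 27, for an average of (0 + 24 + 27) / 3 = 17. With the order P2, P3, P1, the waiting times become 0, 3, and 6, for an average of (0 + 3 + 6) / 3 = 3, a large improvement from running the short jobs first.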
Priority Scheduling

  • A priority number (integer) is associated with each process
  • The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)
  • Preemptive
  • Non-preemptive
  • SJF is a special case of priority scheduling in which the priority is the predicted next CPU burst time

Multilevel Queue

Ready queue is partitioned into separate queues:

  • foreground (interactive)
  • background (batch)

Each queue has its own scheduling algorithm:

  • foreground – RR
  • background – FCFS

Multilevel Queue example
  • Foreground (RR, time quantum 20): P1 53, P2 17, P3 42
  • Background (FCFS): P4 30, P5 20

Multilevel Feedback Queue

Three queues:

  • Q0 – RR with time quantum 8 milliseconds
  • Q1 – RR with time quantum 16 milliseconds
  • Q2 – FCFS

Scheduling: a new job enters queue Q0, which is served FCFS. When it gains the CPU, the job receives 8 milliseconds. If it does not finish in 8 milliseconds, the job is moved to queue Q1. At Q1 the job is again served FCFS and receives 16 additional milliseconds. If it still does not complete, it is preempted and moved to queue Q2.

Algorithm Evaluation
  • Deterministic Modeling
  • Simulations
  • Implementation
Deterministic Modeling
  • Deterministic modeling takes a predetermined workload and computes each algorithm's performance on it. Consider the following workload (worked results below):

Process   Burst Time
P1        10
P2        29
P3        3
P4        7
P5        12
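
Working out the Gantt charts for this workload, as in the textbook's example: FCFS with arrival order P1 through P5 gives waiting times 0, 10, 39, 42, 49, an average of 140 / 5 = 28 ms; SJF runs P3, P4, P1, P5, P2 for an average wait of (10 + 32 + 0 + 3 + 20) / 5 = 13 ms; RR with a quantum of 10 ms gives an average wait of 23 ms.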

Deterministic Modeling
  • The deterministic model is simple and fast. It gives exact numbers, allowing us to compare the algorithms. However, it requires exact numbers as input, and its answers apply only to those cases.
Implementation
  • Even a simulation is of limited accuracy.
  • The only completely accurate way to evaluate a scheduling algorithm is to code it up, put it in the operating system, and see how it works.

Chapter 6 Process Synchronization

Bernard Chen

Spring 2007

Bounded Buffer (producer view)

while (true) {
    /* produce an item and put it in nextProduced */

    while (count == BUFFER_SIZE)
        ;   /* do nothing: buffer is full */

    buffer[in] = nextProduced;
    in = (in + 1) % BUFFER_SIZE;
    count++;
}

Bounded Buffer (Consumer view)

while (true) {
    while (count == 0)
        ;   /* do nothing: buffer is empty */

    nextConsumed = buffer[out];
    out = (out + 1) % BUFFER_SIZE;
    count--;

    /* consume the item in nextConsumed */
}

Race Condition
  • We could reach this incorrect state because we allowed both processes to manipulate the variable count concurrently (the interleaving below shows how)
  • Race condition: several processes access and manipulate the same data concurrently, and the outcome of the execution depends on the particular order in which the accesses take place
  • A major portion of this chapter is concerned with process synchronization and coordination
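
Concretely, count++ and count-- compile to non-atomic register sequences, so an unlucky interleaving loses an update; this is the textbook illustration, written out in pseudo-machine steps with count initially 5:

register1 = count;              /* producer begins count++: register1 = 5 */
register1 = register1 + 1;      /* register1 = 6 */
register2 = count;              /* consumer begins count--: register2 = 5 */
register2 = register2 - 1;      /* register2 = 4 */
count = register1;              /* count = 6 */
count = register2;              /* count = 4, but the correct value is 5 */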
6.2 The Critical-Section Problem

A solution to the critical-section problem must satisfy the following three requirements:

  • 1. Mutual Exclusion - If process Pi is executing in its critical section, then no other processes can be executing in their critical sections
  • 2.Progress - If no process is executing in its critical section and there exist some processes that wish to enter their critical section, then the selection of the processes that will enter the critical section next cannot be postponed indefinitely
  • 3.Bounded Waiting - A bound must exist on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted
6.3 Peterson’s Solution
  • It is restricted to two processes that alternate execution between their critical sections and remainder sections

The two processes share two variables (the full algorithm is sketched below):

  • int turn;
  • boolean flag[2];
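
A sketch of the algorithm for process i, where j = 1 - i denotes the other process; this is the standard textbook formulation:

do {
    flag[i] = TRUE;              /* i wants to enter its critical section */
    turn = j;                    /* but yields the turn to j first */
    while (flag[j] && turn == j)
        ;                        /* busy-wait while j is interested and has the turn */

    /* critical section */

    flag[i] = FALSE;             /* i is done */

    /* remainder section */
} while (TRUE);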
Algorithm for Process Pi

do {
    acquire lock
        critical section
    release lock
        remainder section
} while (TRUE);

6.5 Semaphore
  • It is a software-based synchronization tool (built on top of atomic hardware operations)
  • Semaphore S – integer variable
  • Two standard operations modify S: wait() and signal()
6.5 Semaphore

Can only be accessed via two indivisible (atomic) operations:

wait (S) {
    while (S <= 0)
        ;   // no-op (busy wait)
    S--;
}

signal (S) {
    S++;
}

6.5 Semaphore

Provides mutual exclusion:

Semaphore S;   // initialized to 1

do {
    wait(S);
    // Critical Section
    signal(S);
    // Remainder Section
} while (true);

Readers-Writers Problem
  • A data set is shared among a number of concurrent processes
  • Readers – only read the data set; they do not perform any updates
  • Writers – can both read and write
  • First readers-writers problem: requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object (a semaphore sketch follows below)
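
A sketch of the classic semaphore solution to the first readers-writers problem, in the same pseudo-C style as the semaphore slides:

semaphore mutex = 1;    /* protects readcount */
semaphore wrt = 1;      /* writer exclusion */
int readcount = 0;

/* writer process */
do {
    wait(wrt);
    /* ... writing is performed ... */
    signal(wrt);
} while (TRUE);

/* reader process */
do {
    wait(mutex);
    readcount++;
    if (readcount == 1)
        wait(wrt);      /* first reader locks out writers */
    signal(mutex);

    /* ... reading is performed ... */

    wait(mutex);
    readcount--;
    if (readcount == 0)
        signal(wrt);    /* last reader lets writers back in */
    signal(mutex);
} while (TRUE);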
Monitors
  • A high-level abstraction that provides a convenient and effective mechanism for process synchronization
  • Only one process may be active within the monitor at a time
Timestamp-based Protocols
  • Timestamps determine serializability order
  • If TS(Ti) < TS(Tj), system must ensure produced schedule equivalent to serial schedule where Ti appears before Tj
Timestamp-based Protocols

Each data item Q gets two timestamps:

  • W-timestamp(Q) – the largest timestamp of any transaction that executed write(Q) successfully
  • R-timestamp(Q) – the largest timestamp of any transaction that executed read(Q) successfully

These are updated whenever a new read(Q) or write(Q) is executed

Timestamp-based Protocols
Suppose Ti executes read(Q):

  • If TS(Ti) < W-timestamp(Q), Ti needs to read a value of Q that was already overwritten: the read operation is rejected and Ti is rolled back
  • If TS(Ti) ≥ W-timestamp(Q), the read is executed and R-timestamp(Q) is set to max(R-timestamp(Q), TS(Ti))

Timestamp-based Protocols

Suppose Ti executes write(Q):

  • If TS(Ti) < R-timestamp(Q), the value of Q that Ti is producing was needed previously, and the system assumed it would never be produced: the write operation is rejected and Ti is rolled back
  • If TS(Ti) < W-timestamp(Q), Ti is attempting to write an obsolete value of Q: the write operation is rejected and Ti is rolled back
  • Otherwise, the write is executed (both sets of checks are sketched in code below)
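
A minimal C sketch of these checks, assuming timestamps are plain integers; the struct and function names are illustrative, not from any real DBMS:

typedef struct {
    int r_ts;   /* R-timestamp(Q) */
    int w_ts;   /* W-timestamp(Q) */
} data_item;

/* returns 0 if the read may proceed, -1 if Ti must be rolled back */
int ts_read(int ts_ti, data_item *q)
{
    if (ts_ti < q->w_ts)
        return -1;                   /* needed value already overwritten */
    if (ts_ti > q->r_ts)
        q->r_ts = ts_ti;             /* R-ts(Q) = max(R-ts(Q), TS(Ti)) */
    return 0;
}

/* returns 0 if the write may proceed, -1 if Ti must be rolled back */
int ts_write(int ts_ti, data_item *q)
{
    if (ts_ti < q->r_ts || ts_ti < q->w_ts)
        return -1;                   /* value was needed earlier, or write is obsolete */
    q->w_ts = ts_ti;
    return 0;
}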
Timestamp-based Protocols
  • Any rolled back transaction Ti is assigned new timestamp and restarted
  • Algorithm ensures conflict serializability and freedom from deadlock

Chapter 7 Deadlocks

Bernard Chen

Spring 2007

Deadlock Characterization

Deadlock can arise if four conditions hold simultaneously:

  • Mutual Exclusion
  • Hold and Wait
  • No Preemption
  • Circular Wait
Resource-Allocation Graph
  • A set of vertices V and a set of edges E.

V is partitioned into two types:

  • P= {P1, P2, …, Pn}, the set consisting of all the processes in the system.
  • R= {R1, R2, …, Rm}, the set consisting of all resource types in the system.
Resource-Allocation Graph

E is also partitioned into two types:

  • request edge – directed edge Pi → Rj
  • assignment edge – directed edge Rj → Pi
Avoidance algorithms
  • Single instance of a resource type: use a resource-allocation graph
  • Multiple instances of a resource type: use the banker’s algorithm
Banker’s Algorithm
  • Two algorithms need to be discussed:
  • 1. Safety-check algorithm (sketched below)
  • 2. Resource-request algorithm
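
A sketch of the safety check in C, for N processes and M resource types; the fixed sizes and array layout are illustrative assumptions:

#define N 5   /* number of processes (illustrative) */
#define M 3   /* number of resource types (illustrative) */

/* returns 1 if the system is in a safe state, 0 otherwise */
int is_safe(int available[M], int alloc[N][M], int need[N][M])
{
    int work[M], finish[N] = {0};
    int i, j, progressed;

    for (j = 0; j < M; j++)
        work[j] = available[j];               /* Work = Available */

    do {                                      /* find some Pi with Need <= Work */
        progressed = 0;
        for (i = 0; i < N; i++) {
            if (finish[i]) continue;
            for (j = 0; j < M; j++)
                if (need[i][j] > work[j]) break;
            if (j == M) {                     /* Pi can run to completion */
                for (j = 0; j < M; j++)
                    work[j] += alloc[i][j];   /* Pi releases its resources */
                finish[i] = 1;
                progressed = 1;
            }
        }
    } while (progressed);

    for (i = 0; i < N; i++)
        if (!finish[i]) return 0;             /* some process can never finish: unsafe */
    return 1;                                 /* a safe sequence exists */
}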
7.6 Deadlock Detection
  • The overhead of detection includes not only the run-time cost of maintaining the necessary information and executing the detection algorithm, but also the potential cost of recovery
Detection Algorithms
  • Single instance of a resource type: use a wait-for graph
  • Multiple instances of a resource type: use an algorithm similar to the banker’s algorithm

Chapter 8 Main Memory

Bernard Chen

Spring 2007

Dynamic Storage-Allocation Problem
  • How to satisfy a request of size n from a list of free holes
  • First-fit: Allocate the first hole that is big enough (a sketch follows below)
  • Best-fit: Allocate the smallest hole that is big enough; must search the entire list, unless it is ordered by size (produces the smallest leftover hole)
  • Worst-fit: Allocate the largest hole; must also search the entire list (produces the largest leftover hole)
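
A minimal first-fit sketch in C over a singly linked list of free holes; the struct layout and names are illustrative assumptions:

struct hole {
    int start;            /* starting address of the hole */
    int size;             /* size of the hole */
    struct hole *next;
};

/* returns the start address of the allocation, or -1 if no hole fits */
int first_fit(struct hole *free_list, int n)
{
    struct hole *h;
    for (h = free_list; h != NULL; h = h->next) {
        if (h->size >= n) {          /* first hole that is big enough */
            int addr = h->start;
            h->start += n;           /* the leftover piece stays on the list */
            h->size  -= n;
            return addr;
        }
    }
    return -1;                       /* no hole is big enough (external fragmentation) */
}

Best-fit and worst-fit differ only in scanning the whole list for the smallest or largest qualifying hole instead of stopping at the first one.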
Fragmentation
  • All of these strategies for memory allocation suffer from external fragmentation
  • External fragmentation: as processes are loaded into and removed from memory, the free memory space is broken into little pieces
  • External fragmentation exists when there is enough total memory space to satisfy a request, but the available spaces are not contiguous
8.4 Paging
  • Paging is a memory-management scheme that permits the physical address space of a process to be non-contiguous.
  • The basic method for implementation involves breaking physical memory into fixed-sized blocks called FRAMES and break logical memory into blocks of the same size called PAGES
Paging
  • Every address generated by the CPU is divided into two parts: a page number (p) and a page offset (d)
  • The page number is used as an index into a page table (the split is sketched below)
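
A minimal sketch of the split, assuming 32-bit logical addresses and 4 KB pages; both sizes are assumptions for illustration:

#define PAGE_SHIFT 12                        /* 2^12 = 4096-byte pages (assumed) */
#define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

unsigned page_number(unsigned addr) { return addr >> PAGE_SHIFT; }  /* p: page-table index */
unsigned page_offset(unsigned addr) { return addr & PAGE_MASK;   }  /* d: offset in frame  */

/* physical address = frame number from page_table[p], concatenated with d */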
Hardware Support on Paging
  • If we want to access location i, we must first index into the page table; this requires one memory access
  • With this scheme, TWO memory accesses are needed to access a byte
  • The standard solution is to use a special, small, fast cache, called the translation look-aside buffer (TLB) or associative memory
TLB
  • The percentage of times that a particular page number is found in the TLB is called the hit ratio
  • Suppose it takes 20 nanoseconds to search the TLB and 100 nanoseconds to access memory; a TLB miss costs the search plus two memory accesses (one for the page-table entry, one for the byte)
  • If our hit ratio is 80%, the effective memory-access time equals:

0.80*(20+100) + 0.20*(20+100+100) = 140 nanoseconds

  • If our hit ratio is 98%, the effective memory-access time equals:

0.98*(20+100) + 0.02*(20+100+100) = 122 nanoseconds

(details in Ch 9)

Hierarchical paging
  • One way is to use a two-level paging algorithm, in which the page table itself is also paged

Segmentation
  • Although the user can refer to objects in the program by a two-dimensional address (segment number and offset), the actual physical address is still a one-dimensional sequence
  • Thus, we need to map the segment number and offset to a physical address
  • This mapping is implemented by a segment table
  • In order to protect the memory space, each entry in the segment table has a segment base and a segment limit
Example of Segmentation

For example, segment 2 starts at 4300 with size 400; if we reference byte 53 of segment 2, it is mapped to 4300 + 53 = 4353

A reference to segment 3, byte 852?

A reference to segment 0, byte 1222?
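
Assuming the textbook's segment table for this example (segment 3: base 3200, limit 1100; segment 0: base 1400, limit 1000), segment 3, byte 852 maps to 3200 + 852 = 4052, while segment 0, byte 1222 exceeds the 1000-byte limit and traps to the operating system.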
Virtual Memory
  • Only part of the program needs to be in memory for execution
  • Logical address space can therefore be much larger than physical address space
  • Allows for more efficient process creation

Virtual memory can be implemented via:

  • Demand paging
  • Demand segmentation
Page Replacement Algorithms
  • Goal: the lowest possible page-fault rate
  • Evaluate an algorithm by running it on a particular string of memory references (a reference string) and computing the number of page faults on that string
  • In all our examples, the reference string is 1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5
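
As a concrete illustration, a small C program that counts page faults for FIFO replacement on this reference string; the frame count and the 16-frame cap are illustrative assumptions:

#include <stdio.h>

/* counts page faults for FIFO replacement; assumes nframes <= 16 */
int fifo_faults(const int *refs, int n, int nframes)
{
    int frames[16];
    int i, j, faults = 0, next = 0;      /* next = index of FIFO victim */

    for (i = 0; i < nframes; i++) frames[i] = -1;
    for (i = 0; i < n; i++) {
        for (j = 0; j < nframes; j++)
            if (frames[j] == refs[i]) break;
        if (j == nframes) {              /* page not resident: fault */
            frames[next] = refs[i];
            next = (next + 1) % nframes;
            faults++;
        }
    }
    return faults;
}

int main(void)
{
    int refs[] = {1, 2, 3, 4, 1, 2, 5, 1, 2, 3, 4, 5};
    printf("%d\n", fifo_faults(refs, 12, 3));   /* prints 9 */
    return 0;
}

With 3 frames this prints 9; trying 4 frames yields 10, the classic illustration of Belady's anomaly.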

9.6 Thrashing
  • If a process does not have “enough” frames to support the pages in active use, it will quickly page-fault
  • If the number of frames allocated to a process falls below the minimum required by the process, thrashing will happen
  • Thrashing: a process is spending more time paging than executing
  • Thrashing results in severe performance problems