Processes and operating systems

  • Scheduling policies: RMS, EDF.
  • Scheduling modeling assumptions.
  • Interprocess communication.
  • Power management.
Metrics
  • How do we evaluate a scheduling policy:
    • Ability to satisfy all deadlines.
    • CPU utilization---percentage of time devoted to useful work.
    • Scheduling overhead---time required to make scheduling decision.
      • The more sophisticated the scheduling policy, the more CPU time it takes during system operation.
POSIX

#include <sched.h>

int i;

pid_t my_process_id;

struct sched_param my_sched_params;

i = sched_setscheduler(my_process_id, SCHED_FIFO, &my_sched_params);

Rate monotonic scheduling
  • RMS (Liu and Layland): widely-used, analyzable scheduling policy.
  • Analysis is known as Rate Monotonic Analysis (RMA).
RMA model
  • Assumptions
    • All processes run on a single CPU.
    • Zero context switch time.
    • No data dependencies between processes.
    • Process execution time is constant.
    • Deadline is at end of period.
    • Highest-priority ready process runs.
[Figure: process τi executes for computation time Ci within each period Ti.]
Process parameters
  • Ci is the computation time of process τi; Ti is the period of process τi.
Rate-monotonic analysis
  • Response time: time required to finish process.
  • Critical instant: scheduling state that gives worst response time.
  • Critical instant occurs when all higher-priority processes are ready to execute.
Critical instant

[Figure: interfering processes P1, P2, and P3 all become ready at the same instant as P4; that release time is P4's critical instant.]
RMS priorities
  • Optimal (fixed) priority assignment:
    • shortest-period process gets highest priority;
    • priority inversely proportional to period;
    • break ties arbitrarily.
  • No fixed-priority scheme does better.
RMS example

[Figure: RMS schedule on a timeline from 0 to 10: τ1 (period 4) executes C1 at the start of each period (times 0, 4, 8); τ2 executes C2 twice in the remaining slack.]
RMS CPU utilization
  • Utilization for n processes S = {τ1, τ2, …, τn} is
    • U = Σ(i=1..n) Ci / Ti
[Figure: utilization bound versus number of tasks: 1.0 for n = 1, 0.828 for n = 2, 0.78 for n = 3, approaching 0.69 as n grows.]
Rate Monotonic, Liu & Layland, 1973
  • For a configuration of n periodic tasks to be scheduled, a sufficient condition for schedulability is:

All n tasks are schedulable if U ≤ n(2^(1/n) − 1)

As number of tasks approaches infinity, maximum utilization approaches 69%.

Example of an exceptional case

[Figure: timeline from 0 to 8: τ1 (C1/T1 = 1/2) runs at 0, 2, 4, 6, 8; τ2 (C2/T2 = 1/4) runs at 0, 4, 8; τ3 (C3/T3 = 2/8) fills the remaining slots. All deadlines are met.]

U = 1/2 + 1/4 + 2/8 = 1 > 3(2^(1/3) − 1) ≈ 0.78

The task set is schedulable even though it exceeds the Liu & Layland bound: the bound is sufficient but not necessary.

RMS CPU utilization, cont’d.
  • RMS cannot asymptotically guarantee 100% use of CPU, even with zero context switch overhead.
  • Must keep idle cycles available to handle worst-case scenario.
  • However, RMS guarantees all processes will always meet their deadlines.
RMS implementation
  • Efficient implementation:
    • scan processes;
    • choose highest-priority active process.
Earliest-deadline-first scheduling
  • EDF: dynamic priority scheduling scheme.
  • Process closest to its deadline has highest priority.
  • Requires recalculating process priority at every timer interrupt.
EDF example

[Figure: EDF schedule of two tasks, τ1 and τ2.]

EDF analysis
  • EDF can use 100% of the CPU.
    • A given set of tasks is schedulable if U ≤ 1.
  • But EDF may miss deadlines in practice because of scheduling overhead.
EDF implementation
  • On each timer interrupt:
    • compute time to deadline;
    • choose process closest to deadline.
  • Generally considered too expensive to use in practice.
Fixing scheduling problems
  • What if your set of processes is unschedulable?
    • Change deadlines in requirements.
    • Reduce execution times of processes.
    • Get a faster CPU.
Priority inversion
  • Priority inversion: low-priority process keeps high-priority process from running.
  • Improper use of system resources can cause scheduling problems:
    • Low-priority process grabs I/O device.
    • High-priority process needs I/O device, but can’t get it until low-priority process is done.
  • Can cause deadlock.
Priority Inversion: Example

[Figure: task priority versus time, showing task 2 delaying the higher-priority task 1.]

  • Comment:
    • Task 1 preempting task 2 is normal, since task 1 has a higher priority than task 2.
    • Task 2 delaying the higher-priority task is incorrect scheduler behavior: this is priority inversion.
Solving priority inversion
  • Give priorities to system resources.
  • Have process inherit the priority of a resource that it requests.
    • Low-priority process inherits priority of device if higher.
Priority Inheritance Protocol
  • The resource management protocol is:

• if the resource is free: the task gets the resource

• if the resource is not free: the requesting task is blocked, and the task holding the resource inherits the priority of the blocked task (if it is higher)

With the Priority Inheritance Protocol
  • Priority inheritance manages contention for the critical resource: without it, task 0 was delayed by the lower-priority task 2.
  • The schedule changes because task 3 inherits the priority of task 0 when task 0 needs the resource already allocated to task 3. This inheritance lets task 3 release the critical resource as early as possible, so task 0 finishes its execution without being delayed by task 2.
Priority Inheritance Protocol: example
  • Task 2 is delayed at most by the longest critical section of task 3.
  • Bi ≤ min(n, m) · CRmax
Data dependencies
  • Data dependencies allow us to improve utilization.
  • Restrict the combinations of processes that can run simultaneously.

[Figure: task graph with two tasks, τ1 (P1 → P3) and τ2 (P2).]

  • P1 and P2 can’t run simultaneously.
  • What if P3 has higher priority than P1 and P2?
    • P3 can interfere with only one of the two.

Context-switching time
  • Non-zero context switch time can push limits of a tight schedule.
  • Hard to calculate effects---depends on order of context switches.
  • In practice, OS context switch overhead is small.
What about interrupts?
  • Interrupts take time away from processes.
  • Perform minimum work possible in the interrupt handler.

[Figure: timeline P1 → OS → P2 → interrupt → OS → P3: OS activity and interrupt handling steal time between process executions.]

Device processing structure
  • Interrupt service routine (ISR) performs minimal I/O.
    • Get register values, put register values.
  • Interrupt service process/thread performs most of device function.
POSIX scheduling policies
  • SCHED_FIFO: fixed-priority scheduling; can be used to implement RMS.
  • SCHED_RR: round-robin.
    • Within a priority level, processes are time-sliced in round-robin fashion.
    • The length of the quantum can vary with the priority level.
  • SCHED_OTHER: implementation-defined scheduling policy, used to mix non-real-time and real-time processes.
Interprocess communication
  • OS provides interprocess communication mechanisms that vary in:
    • efficiency;
    • communication power.
  • Types
    • Shared memory
    • Message passing
    • Signals (Unix)
Signals
  • A Unix mechanism for simple communication between processes.
  • Analogous to an interrupt---forces execution of a process at a given location.
    • But a signal is raised by another process via a function call.
  • No data---can only pass the type of signal.
POSIX signal types
  • Termination
    • SIGABRT: abort
    • SIGTERM: terminate process
  • Exceptions
    • SIGFPE: floating point exception
    • SIGILL: illegal instruction
    • SIGKILL: unavoidable process termination
  • User-defined
    • SIGUSR1, SIGUSR2: user defined
    • Etc…
POSIX signals
  • Must declare a signal handler for the process using sigaction().
  • Handler is called when signal is received.
  • A signal can be sent with sigqueue().
Non-real-time signal program in POSIX

#include <signal.h>

extern void usr1_handler(int); // declare SIGUSR1 handler

struct sigaction act, oldact;

int retval;

// set up the descriptor data structure

act.sa_flags = 0;

sigemptyset(&act.sa_mask); // initialize the signal mask to empty

act.sa_handler = usr1_handler; // install the SIGUSR1 handler

// tell the OS about this handler

retval = sigaction(SIGUSR1, &act, &oldact); // oldact receives the old action

Real-time Signals (POSIX.4)
  • Range [SIGRTMIN, SIGRTMAX]
  • if (sigqueue(destpid, SIGRTMAX - 1, sval) < 0) {
  •   // error
  • }
  • ** Queuing can be turned on using the SA_SIGINFO bit in the sa_flags field of the sigaction structure.
Signals in UML
  • More general than a Unix signal---may carry arbitrary data:

[Figure: UML class someClass, whose operation sigbehavior() sends (<<send>>) the signal aSig; the aSig object, indicated by the <<signal>> stereotype, carries attribute p : integer.]

POSIX Semaphore
  • POSIX supports counting semaphores with the _POSIX_SEMAPHORES option.
    • A semaphore with N resources will not block until N processes hold the semaphore.
  • Semaphores are given names:
    • /sem1
  • P() is sem_wait(), V() is sem_post().
POSIX Semaphore
  • int i, oflags;
  • sem_t *my_sem;
  • my_sem = sem_open(“/sem1”, oflags); // open the semaphore (O_CREAT creates a new one)
  • // do useful work
  • i = sem_close(my_sem); // close the semaphore (sem_unlink() removes it)
  • int i;
  • i = sem_wait(my_sem); // P()
  • // access critical section
  • i = sem_post(my_sem); // V()
  • // test without blocking
  • i = sem_trywait(my_sem);
POSIX Shared Memory

// only one process makes these two calls

objdesc = shm_open(“/memobj1”, O_RDWR); // cf. O_RDONLY

if (ftruncate(objdesc, 1000) < 0) { // set the size of the shared memory

//error

}

// every process that wants to use the shared memory must call mmap

if (mmap(addr, len, PROT_READ | PROT_WRITE, MAP_SHARED, objdesc, 0) == MAP_FAILED) {

//error

}

if (munmap(startadrs, len) < 0) { //error }

close(objdesc); // close the descriptor (shm_unlink() removes the object)

All processes call mmap(), munmap().

Only one process calls shm_open(), close().

POSIX mmap() function parameters

[Figure: mmap() maps len bytes of the backing-store object (objdesc), starting at the given offset, into memory at startaddr.]

POSIX message-based communication
  • Unix pipe supports messages between processes.
  • Parent process uses pipe() to create a pipe.
    • Pipe is created before child is created so that pipe ID can be passed to child.
POSIX pipe example

/* create the pipe */

if (pipe(pipe_ends) < 0) // returns an array of file descriptors: pipe_ends[0] and pipe_ends[1]

{ perror(“pipe”); break; }

// pipe_ends[0] is the read end, pipe_ends[1] is the write end

/* create the process */

childid = fork();

if (childid == 0) { /* child reads from pipe_ends[0] */

// pass the read-end descriptor to the new incarnation of the child

childargs[0] = pipe_ends[0]; // file descriptor

execv(“mychild”, childargs);

perror(“execv”);

exit(1);

}

else { /* parent writes to pipe_ends[1] */ … }

* Parent writes something using pipe_ends[1] and the child reads it using pipe_ends[0].

Cache Effects (cache mgmt)

Each process uses half the cache. Use an LRU policy.

How many cache misses?

[Figure: schedule P1, P2, P3, P1, P2, P3; cache contents after each run: {P1}, {P1, P2}, {P2, P3}, {P1, P3}, {P2, P1}, {P3, P2}]

With LRU, each run evicts the least-recently-used half, so every run misses.

What if we reserve half the cache for P1?

Evaluating performance
  • May want to test:
    • context switch time assumptions;
    • scheduling policy.
  • Can use an OS simulator to exercise a set of processes and trace system behavior.
Processes and caches
  • Processes can cause additional caching problems.
    • Even if individual processes are well-behaved, processes may interfere with each other.
  • Execution time with bad cache behavior is usually much worse than execution time with good cache behavior.
Power optimization
  • Power management: determining how system resources are scheduled/used to control power consumption.
  • OS can manage for power just as it manages for time.
  • OS reduces power by shutting down units.
    • May have partial shutdown modes.
Power management and performance
  • Power management and performance are often at odds.
  • Entering power-down mode consumes
    • energy,
    • time.
  • Leaving power-down mode consumes
    • energy,
    • time.
Simple power management policies
  • Request-driven: power up once request is received. Adds delay to response.
  • Predictive shutdown: try to predict how long you have before next request.
    • May start up in advance of request in anticipation of a new request.
    • If you predict wrong, you will incur additional delay while starting up.
Probabilistic shutdown
  • Assume service requests are probabilistic.
  • Optimize expected values:
    • power consumption;
    • response time.
  • Simple probabilistic: shut down after time duration Ton, turn back on after waiting for Toff.
Graphics terminal usage pattern (1994)

[Figure: L-shaped usage distribution: the shutdown interval (Toff) varies widely, while the on interval (Ton) is short.]

Advanced Configuration and Power Interface (ACPI)
  • ACPI: open standard for power management services.

[Figure: ACPI software stack: applications and device drivers above the OS kernel with its power management; the ACPI driver/AML interpreter sits between the kernel and the ACPI tables, ACPI registers, and ACPI BIOS, which interface to the hardware platform.]

ACPI global power states
  • G3: mechanical off
  • G2: soft off
  • G1: sleeping state
      • S1: low wake-up latency with no loss of system context
      • S2: low latency with loss of CPU/cache state
      • S3: low latency with loss of all state except memory
      • S4: lowest-power state with all devices off
  • G0: working state
Supplement

For more information, refer to the paper:

C. L. Liu (Project MAC, Massachusetts Institute of Technology) and James W. Layland (Jet Propulsion Laboratory, California Institute of Technology), “Scheduling Algorithms for Multiprogramming in a Hard-Real-Time Environment.”

Prove the Optimality of RM

[Figure: τ1 releases at 0, 2, 4, 6, 8 with period T1; τ2 releases at 0, 5 with period T2.]

  • First, consider two tasks τ1 and τ2, where T1 < T2 (D1 = T1, D2 = T2).
  • If τ2 happens to have higher priority than τ1, then the following inequality must be satisfied:

C1 + C2 ≤ T1 (2.1)

Proof (cont…)
  • Now, suppose that the RM algorithm is used.
  • Then, τ1 will always be scheduled before τ2 if they are competing for the processor.
Prove the Optimality of RM
  • Define two parameters
    • β = ⌊T2/T1⌋
      • The number of whole periods of task τ1 completely contained in one period of task τ2
    • α = (T2/T1) − β
      • The fractional part left over; two cases follow
C2,max in case 1?
  • Case 1: the computation time of task τ1 is short enough for all instances of τ1 to complete before the second request of τ2.

C1 ≤ T2 − βT1 (2.2)

In this case,

C2,max = T2 − (β + 1)C1

Then it holds that C2 + (β + 1)C1 ≤ T2 (2.4)

C2,max in case 2?
  • Case 2: the computation time of τ1 is large enough that the last request of τ1 does not complete before the second request of τ2.

C1 ≥ T2 − βT1 (2.5)

In this case,

C2,max = β(T1 − C1)

Then it holds that βC1 + C2 ≤ βT1 (2.7)

Proof (cont…)
  • In order to prove that the RM is optimal, we need to show that (2.1) implies (2.4) or (2.7)
  • Proof:
    • Assume that (2.1) is true.

(1) We prove that (2.4) is true.

(2) We prove that (2.7) is true.

Prove (1)
  • C1 + C2 ≤ T1
  • => βC1 + βC2 ≤ βT1
  • => βC1 + C2 ≤ βT1, since β ≥ 1
  • => (β + 1)C1 + C2 ≤ βT1 + C1
  • => (β + 1)C1 + C2 ≤ T2, from (2.2′) below

C1 ≤ T2 − βT1 (2.2)

=> βT1 + C1 ≤ T2 (2.2′)

Prove (2)
  • Exercise!!