
Resource management and Synchronization

Akos Ledeczi

EECE 354, Fall 2010

Vanderbilt University



Resource Sharing

  • Disable/Enable interrupts: only where the critical section is very short. Should be used sparingly in RT systems.

  • Lock/Unlock scheduler: still only for quick critical sections.

  • Semaphore: when the affected tasks do not have hard deadlines (subject to priority inversion)

  • Mutex: the preferred method; it has the largest overhead, though, because it does extra work to avoid priority inversion



Disable/Enable Interrupts

void OSFunction (void)
{
    CPU_SR_ALLOC();

    CPU_CRITICAL_ENTER();
    :
    /* critical section */
    :
    CPU_CRITICAL_EXIT();
}

  • The OS itself uses this method (even when implementing the other methods)

  • CPU_SR_ALLOC() allocates storage for the interrupt disable status

  • CPU_CRITICAL_ENTER() saves the status and disables interrupts

  • CPU_CRITICAL_EXIT() restores the status



Lock/Unlock Scheduler

void Function (void)
{
    OS_ERR err;

    OSSchedLock(&err);
    if (err == OS_ERR_NONE) {
        :
        /* critical section */
        :
        OSSchedUnlock(&err);
        /* check err */
    } else {
        /* check err */
    }
}

  • Interrupts are not disabled, but an ISR returns to this task even if higher-priority tasks are ready: the kernel behaves as if it were non-preemptive

  • Lock/Unlock can be nested

  • Unlock runs scheduler (only when nesting is unwound)

  • Must not make blocking calls in critical section

  • Behaves as if current task had the highest priority



Semaphores

  • Dijkstra in 1959

  • “Key” to “locked code.” You need to acquire it to access the code.

  • Semaphore operations are “atomic.”

  • Binary and counting semaphores

  • Can be used for resource sharing and for synchronization (a signaling sketch follows the function list below)

  • Functions:

OSSemCreate()

OSSemPend()

OSSemPost()

OSSemDel()

OSSemSet()

OSSemPendAbort()
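A minimal sketch of the synchronization use mentioned above, built from the functions just listed; DataReadySem, MyISR() and MyTask() are hypothetical names.

OS_SEM DataReadySem;    /* assumed: created with OSSemCreate() and an initial count of 0 */

void MyISR (void)
{
    OS_ERR err;

    OSSemPost(&DataReadySem, OS_OPT_POST_1, &err);    /* signal the waiting task */
}

void MyTask (void *p_arg)
{
    OS_ERR err;
    CPU_TS ts;

    while (1) {
        OSSemPend(&DataReadySem, 0, OS_OPT_PEND_BLOCKING, &ts, &err);  /* wait for the signal */
        if (err == OS_ERR_NONE) {
            /* process the data produced by the ISR */
        }
    }
}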



Binary Semaphores

  • Typical use: guarding access to a printer shared by several tasks

  • The semaphore can be hidden behind an API so callers never touch it directly (sketched below)
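A minimal sketch of hiding the semaphore behind an API; PrinterSem and PrintString() are hypothetical names, and the semaphore is assumed to be created elsewhere with a count of 1.

OS_SEM PrinterSem;                        /* assumed: created with OSSemCreate(..., 1, ...) */

void PrintString (CPU_CHAR *p_str)
{
    OS_ERR err;
    CPU_TS ts;

    OSSemPend(&PrinterSem, 0, OS_OPT_PEND_BLOCKING, &ts, &err);   /* acquire the "key" */
    if (err == OS_ERR_NONE) {
        /* send p_str to the printer: exclusive access guaranteed */
        OSSemPost(&PrinterSem, OS_OPT_POST_1, &err);              /* return the "key"  */
    }
}

Callers simply call PrintString() and never see the semaphore.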



Binary Semaphores cont’d.

OS_SEM MySem;

void main (void)
{
    OS_ERR err;

    OSSemCreate(&MySem, "My Semaphore", 1, &err);
}

void Task1 (void *p_arg)
{
    OS_ERR err;
    CPU_TS ts;

    while (1) {
        OSSemPend(&MySem, 0, OS_OPT_PEND_BLOCKING, &ts, &err);
        switch (err) {
            case OS_ERR_NONE:
                /* critical section */
                OSSemPost(&MySem, OS_OPT_POST_1, &err);
                /* check err */
                break;

            case OS_ERR_PEND_ABORT:
            case OS_ERR_OBJ_DEL:
            default:
                /* other errors */
                break;
        }
    }
}



Counting Semaphores

  • When multiple resources/resource elements are available

  • E.g., memory pool or circular buffer

  • Initialize semaphore to the number of items available

  • Need to manage consumed/available items

  • Pend() waits when the count is 0; otherwise it decrements the counter and returns

  • Post() increments the counter
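A minimal sketch of the buffer-pool use described above; BufSem, the count of 10 and the Consumer task are hypothetical.

OS_SEM BufSem;

void AppInit (void)
{
    OS_ERR err;

    OSSemCreate(&BufSem, "Buffer Pool", 10, &err);     /* 10 buffers initially available */
}

void Consumer (void *p_arg)
{
    OS_ERR err;
    CPU_TS ts;

    while (1) {
        OSSemPend(&BufSem, 0, OS_OPT_PEND_BLOCKING, &ts, &err);   /* claim one buffer (ctr--) */
        if (err == OS_ERR_NONE) {
            /* take a buffer from the pool, use it, then put it back */
            OSSemPost(&BufSem, OS_OPT_POST_1, &err);              /* release the buffer (ctr++) */
        }
    }
}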



Notes

  • Semaphores do not increase interrupt latency

  • However, interrupts are disabled while the OS manipulates the semaphore itself

  • Hence, protecting access to a single variable, for example, is much faster by disabling/enabling interrupts directly (see the sketch below). Be careful, though: a floating-point operation on a CPU with no FPU, for example, can take a very long time!

  • Application can have as many semaphores as necessary
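A minimal sketch of that point; SharedCount and IncrementCount() are hypothetical names, using the CPU_CRITICAL_* macros from the Disable/Enable Interrupts slide.

static CPU_INT32U SharedCount;

void IncrementCount (void)
{
    CPU_SR_ALLOC();

    CPU_CRITICAL_ENTER();      /* only a few instructions of overhead       */
    SharedCount++;             /* very short critical section               */
    CPU_CRITICAL_EXIT();       /* much cheaper than a semaphore pend/post   */
}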



Notes too

  • OSSemPend():

    • If ctr > 0, ctr is decremented and the function returns immediately

    • If ctr == 0

      • OS_OPT_PEND_NON_BLOCKING: returns immediately with the appropriate error code. The task can do something else and check back later (see the sketch after this list).

      • OS_OPT_PEND_BLOCKING: blocks, gets inserted in pend list, highest priority ready task gets the CPU

    • Non-zero timeout:

      • prevents waiting forever

      • inserted in tick list

      • when it expires, the error code indicates that the timeout expired

      • not the best way to break deadlock

  • OSSemPost():

    • If tasks are waiting, the highest-priority waiting task is made ready and the scheduler switches to it

    • If not, the counter is incremented

    • An option allows posting without running the scheduler (in case the task wants to do multiple posts)
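A minimal sketch of the non-blocking option above; the task and the "do something else" work are hypothetical, and MySem is the semaphore created earlier.

void PollingTask (void *p_arg)
{
    OS_ERR err;
    CPU_TS ts;

    while (1) {
        OSSemPend(&MySem, 0, OS_OPT_PEND_NON_BLOCKING, &ts, &err);   /* do not block */
        if (err == OS_ERR_NONE) {
            /* critical section */
            OSSemPost(&MySem, OS_OPT_POST_1, &err);
        } else if (err == OS_ERR_PEND_WOULD_BLOCK) {
            /* semaphore not available: do something else and check back later */
        }
    }
}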



Semaphore Internals

  • No distinction between binary and counting semaphores (see mutex later)

  • Type is needed to distinguish between different kernel objects when using the generic OS_PEND_OBJECT type

  • Pend list is needed as multiple tasks may be waiting on a semaphore

  • Timestamp indicating the last Post() operation

  • Do not access fields directly, use API

typedef struct os_sem OS_SEM;

struct os_sem {
    OS_OBJ_TYPE   Type;       /* Should be set to OS_OBJ_TYPE_SEM */
    CPU_CHAR     *NamePtr;
    OS_PEND_LIST  PendList;
    OS_SEM_CTR    Ctr;
    CPU_TS        TS;
};



Priority Inversion with Semaphores

  • A medium-priority task can get the CPU while a lower-priority task holds a semaphore that a high-priority task is waiting for, so the high-priority task is effectively delayed by the medium-priority one



Mutual Exclusion Semaphore: Mutex

  • Binary semaphore with special handling to avoid priority inversion

  • Changes task priorities when necessary; this is called priority inheritance: if a higher-priority task requests the resource, the priority of the current owner is raised to that of the requester



Mutex

  • Can be nested: the mutex only becomes available again once Post() has been called as many times as Pend()

  • Can be blocking or non-blocking

  • Check return value/error code of Pend() to see why it returned (timeout, abort, delete, etc.)

  • Do not make function calls in critical sections

OSMutexCreate()

OSMutexPend()

OSMutexPost()

OSMutexDel()

OSMutexPendAbort()
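A minimal sketch using these functions; MyMutex, AppInit() and the shared resource are hypothetical.

OS_MUTEX MyMutex;

void AppInit (void)
{
    OS_ERR err;

    OSMutexCreate(&MyMutex, "My Mutex", &err);
}

void Task (void *p_arg)
{
    OS_ERR err;
    CPU_TS ts;

    while (1) {
        OSMutexPend(&MyMutex, 0, OS_OPT_PEND_BLOCKING, &ts, &err);   /* owner may inherit this task's priority */
        if (err == OS_ERR_NONE) {
            /* access the shared resource */
            OSMutexPost(&MyMutex, OS_OPT_POST_NONE, &err);           /* original priority restored if it was raised */
        }
    }
}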



Mutex Internals

  • Type is needed to distinguish between different kernel objects when using the generic OS_PEND_OBJECT type

  • Pend list is needed as multiple tasks may be waiting on the mutex

  • Timestamp indicating the last Post() operation

  • Owner task and its original priority need to be stored

  • Do not access fields directly, use API

  • Post()

    • gets timestamp

    • decrements the nesting counter and if it reaches 0

      • resets priority if necessary

      • sets owner to 0

      • Gets highest priority task from pend list

typedef struct os_mutex OS_MUTEX;

struct os_mutex {
    OS_OBJ_TYPE     Type;               /* Should be set to OS_OBJ_TYPE_MUTEX */
    CPU_CHAR       *NamePtr;
    OS_PEND_LIST    PendList;
    OS_TCB         *OwnerTCBPtr;
    OS_PRIO         OwnerOriginalPrio;
    OS_NESTING_CTR  OwnerNestingCtr;    /* Mutex is available when the counter is 0 */
    CPU_TS          TS;
};



Deadlock

void Task1 (void *p_arg)
{
    while (1) {
        Acquire M1
        Access  R1
        /* INTERRUPT */
        Acquire M2
        Access  R2
        Release M2
        Release M1
    }
}

void Task2 (void *p_arg)
{
    while (1) {
        Acquire M2
        Access  R2
        Acquire M1
        Access  R1
        Release M1
        Release M2
    }
}



Deadlock Avoidance

Tips to avoid:

  • Never acquire more than one mutex at a time

  • Never acquire a mutex directly (hide them in functions)

  • Acquire all resources before proceeding

  • Always acquire resources in the same order

void Task1 (void *p_arg)
{
    while (1) {
        Acquire M1
        Acquire M2
        Access  R1
        Access  R2
        Release M1
        Release M2
    }
}

void Task2 (void *p_arg)
{
    while (1) {
        Acquire M1
        Acquire M2
        Access  R1
        Access  R2
        Release M1
        Release M2
    }
}

