
Chapter 2 Real-Time Systems Concepts



  1. Chapter 2 Real-Time Systems Concepts

  2. Foreground/Background Systems • The application consists of an infinite loop (background) • Interrupt service routines (ISRs) handle asynchronous events (foreground) • The worst-case task-level response time depends on how long the background loop takes to execute • Because the execution time of typical code is not constant, the time for successive passes through a portion of the loop is nondeterministic

  3. Critical Section • A critical section is a piece of code that cannot be interrupted during execution • Cases of critical sections • Modifying a block of memory shared by multiple kernel services • The process table • The ready queue, waiting queue, delay queue, etc. • Modifying global variables used by the kernel • Entering a critical section: disable global interrupts - disable() • Leaving a critical section: enable global interrupts - enable()
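A minimal sketch of the pattern; the flag merely models the CPU's interrupt-enable state, since the real disable/enable operations are CPU instructions (or kernel macros) that cannot be expressed in portable C:

```c
/* Sketch of entering/leaving a critical section. interrupts_enabled
   models the CPU's interrupt mask; real code would use CLI/SEI-style
   instructions or the kernel's disable()/enable() calls. */
static int interrupts_enabled = 1;
static int ready_queue_len = 0;   /* kernel data shared with ISRs */

static void disable(void) { interrupts_enabled = 0; }
static void enable(void)  { interrupts_enabled = 1; }

static void add_to_ready_queue(void) {
    disable();           /* enter critical section */
    ready_queue_len++;   /* must not be interrupted mid-update */
    enable();            /* leave critical section */
}
```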

  4. Resource • E.g., I/O, CPU, memory, printer, … • Shared resource • A resource that is shared among tasks • Each task should gain exclusive access to the shared resource to prevent data corruption  Mutual exclusion • Task and thread (here both terms mean the same thing) • A simple program that thinks it has the CPU all to itself • Each task is an infinite loop • A task can be in one of five states: Dormant, Ready, Running, Waiting, and ISR (Interrupted)

  5. Figure 2.2 Multiple Tasks

  6. Figure 2.3 Task States

  7. Context Switch (or Task Switch) • Each task has its own stack to store the associated information • The purpose of a context switch • To make sure that the task which is forced to give up the CPU can resume operation later without loss of any vital information or data • The procedure of an interrupt:
  MP_addr(n)   : instruction(n)
  MP_addr(n+1) : instruction(n+1)   ; interrupt occurs here
  MP_addr(n+2) : instruction(n+2)
  Push flags
  Push MP_addr(n+1)
  Push CPU registers
  ; ISR code
  Pop CPU registers
  Return from interrupt             ; jump back to MP_addr(n+1)
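The save/restore sequence above can be imitated in user space with POSIX ucontext: swapcontext() saves the caller's registers and stack pointer and restores another task's, each task running on its own stack. A sketch assuming a POSIX system; a real kernel does this in the interrupt path:

```c
#include <ucontext.h>

/* Each "task" gets its own stack; swapcontext() saves the caller's
   registers/SP and restores the target's, like the push/pop sequence. */
static ucontext_t main_ctx, task_ctx;
static char task_stack[16384];
static int task_ran = 0;

static void task(void) {
    task_ran = 1;                       /* runs on task_stack */
    swapcontext(&task_ctx, &main_ctx);  /* give the CPU back */
}

int run_once(void) {
    getcontext(&task_ctx);
    task_ctx.uc_stack.ss_sp   = task_stack;
    task_ctx.uc_stack.ss_size = sizeof task_stack;
    task_ctx.uc_link          = &main_ctx;
    makecontext(&task_ctx, task, 0);
    swapcontext(&main_ctx, &task_ctx);  /* context switch to the task */
    return task_ran;
}
```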

  8. Foreground/background programming • Context (CPU registers and interrupted program address) is saved and restored using one stack • Multi-tasking context switch • Uses each task's own stack • (Diagram: Task 1's stack, to be switched from, holds Task 1's local variables and an optional return address; the current CPU stack pointer points into it. Task 2's stack, to be switched to, holds the CPU registers saved when Task 2 was switched out, Task 2's code address at that time, its local variables, and an optional return address.)

  9. During the context switch • (Diagram: each task's TCB holds a stack pointer. The current program address and current CPU registers are pushed onto Task 1's stack, the current CPU stack pointer is stored into Task 1's TCB, and the stack pointer is then reloaded from Task 2's TCB. Task 2's stack still holds the CPU registers, code address, and local variables saved when it was switched out.)

  10. After the context switch • (Diagram: Task 1's stack, now suspended, holds the CPU registers saved when Task 1 was switched out, Task 1's code address at that time, and its local variables. The current CPU stack pointer points into Task 2's stack, the current task.)

  11. The Operations of a Context Switch • Relies on an interrupt (hardware or software, entering kernel mode) to perform the context switch • Push return address • Push FLAGS register • ISR (context switch routine) • Push all registers • Store SP to the TCB (task control block) • Select the ready task with the highest priority (scheduler) • Restore SP from the TCB of the newly selected task • Pop all registers • iret (interrupt return, which pops FLAGS and the return address) • Switch to the new task

  12. Scheduler • Also called the dispatcher • Determines which task will run next • In a priority-based kernel, control of the CPU is always given to the highest priority task ready to run • Two types of priority-based kernels • Non-preemptive • Preemptive

  13. Non-preemptive • A task voluntarily gives up control of the CPU • Also called cooperative multitasking • Tasks cooperate with each other to share the CPU • Advantages • Can use non-reentrant functions without fear of corruption by another task (less need to guard shared data with semaphores)  no need to disable interrupts for this purpose • Interrupt latency is typically low • Def: interrupt latency = maximum amount of time interrupts are disabled + time to start executing the first instruction in the ISR • Task-level response time is much lower than in a foreground/background system • Worst case is the longest task time

  14. Figure 2.4 Non-preemptive kernel

  15. Preemptive Kernel • Used when high system responsiveness is required • The highest priority task ready to run is always given control of the CPU • When a task makes a higher priority task ready to run, the current task is preempted and the higher priority task is immediately given control of the CPU • If an ISR makes a higher priority task ready, then when the ISR completes, the interrupted task is suspended and the new higher priority task is resumed • The use of non-reentrant functions requires protection with mutual exclusion semaphores

  16. Figure 2.5 Preemptive kernel

  17. Reentrancy • Reentrant function • Can be used by more than one task concurrently without fear of data corruption • Can be interrupted at any time and resumed at a later time without loss of data • Uses local variables (the arguments are placed on the task's stack)
  Listing 2.1 Reentrant function:
  void strcpy(char *dest, char *src) {
      while (*dest++ = *src++) {
          ;
      }
      *dest = NUL;
  }
  Listing 2.2 Non-reentrant function:
  int Temp; /* global variable */
  void swap(int *x, int *y) {
      Temp = *x;
      *x = *y;
      *y = Temp;
  }
  • You can make swap() reentrant by • Declaring Temp local to swap() • Disabling interrupts before the operation and enabling them afterwards • Using a semaphore

  18. Figure 2.6 Non-reentrant function

  19. Task Priority • Static priorities • The priority of each task does not change during the application’s execution • Dynamic priorities • The priority of tasks can be changed during the application’s execution

  20. Figure 2.7 Priority Inversion problem • Task 1 waits for the resource owned by Task 3 • The situation is aggravated when Task 2 preempts Task 3, which further delays the execution of Task 1

  21. Figure 2.8 Kernel that supports priority inheritance • The priority of Task 3 is raised when Task 1 wants to access the resource • When Task 3 releases the resource, its priority is restored to the original level

  22. Assigning Task Priorities • Assigning task priorities is not a trivial undertaking • Most real-time systems have a combination of soft and hard requirements • Rate Monotonic Scheduling (RMS) assigns task priorities based on how often tasks execute • Tasks with the highest rate of execution are given the highest priority • RMS makes a number of assumptions: • All tasks are periodic (they occur at regular intervals). • Tasks do not synchronize with one another, share resources, or exchange data. • The CPU must always execute the highest priority task that is ready to run. In other words, preemptive scheduling must be used. • If the following inequality is satisfied, all task HARD real-time deadlines will be met: Σ(Ei/Ti) ≤ n(2^(1/n) − 1), where Ei is the execution time and Ti the period of task i • As n grows large the bound approaches ln 2 ≈ 0.693, so the CPU utilization of all time-critical tasks should be less than about 70% • The other 30% can be used by non-time-critical tasks

  23. Mutual Exclusion • Multiple tasks accessing the same data (a critical section) must each gain exclusive access to the data to avoid contention and data corruption • Methods of gaining exclusive access • Disabling interrupts • Performing test-and-set operations • Disabling scheduling • Using semaphores

  24. Disabling and enabling interrupts
  Disable interrupts;
  Access the resource (read/write from/to variables);
  Reenable interrupts;
  • uC/OS-II provides two macros to disable/enable interrupts:
  void Function (void)
  {
      OS_ENTER_CRITICAL();
      .
      .    /* You can access shared data in here */
      .
      OS_EXIT_CRITICAL();
  }
  • Do not disable interrupts for too long • Doing so affects the response of your system to interrupts (interrupt latency) • This method is the only way a task can share variables or data structures with an ISR

  25. Test-and-Set (TAS)
  Disable interrupts;
  if ('Access Variable' is 0) {
      Set variable to 1;
      Reenable interrupts;
      Access the resource;
      Disable interrupts;
      Set the 'Access Variable' back to 0;
      Reenable interrupts;
  } else {
      Reenable interrupts;
      /* You don't have access to the resource, try back later; */
  }
  • The TAS must be performed indivisibly (by the processor), or you must disable interrupts when doing the TAS on the variable • Some processors actually implement a TAS operation in hardware (e.g., the 68000 family)
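On processors without hardware TAS you would disable interrupts as shown above; C11's atomic_flag exposes an indivisible test-and-set portably. A sketch, not uC/OS-II code:

```c
#include <stdatomic.h>

/* atomic_flag_test_and_set() is indivisible, so no interrupt-disable
   is needed around the test and the set. */
static atomic_flag in_use = ATOMIC_FLAG_INIT;

static int try_acquire(void) {
    /* returns 1 if we obtained the resource, 0 if it was already taken */
    return !atomic_flag_test_and_set(&in_use);
}

static void release(void) {
    atomic_flag_clear(&in_use);   /* set the 'Access Variable' back to 0 */
}
```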

  26. Disabling and Enabling the Scheduler • If no variables or data structures are shared with an ISR, we can disable and enable scheduling instead • Two or more tasks can then share data without contention • While the scheduler is locked, interrupts remain enabled • If an ISR makes a higher priority task ready, that task will run as soon as OSSchedUnlock() is called
  void Function (void)
  {
      OSSchedLock();
      .
      .    /* You can access shared data in here */
      .
      OSSchedUnlock();
  }

  27. Semaphores • Semaphores are used to • Control access to a shared resource (mutual exclusion) • Signal the occurrence of an event • Allow two tasks to synchronize their activities • Two types of semaphores • Binary semaphores • Counting semaphores • Three operations on a semaphore • INITIALIZE (also called CREATE) • WAIT (also called PEND) • SIGNAL (also called POST) - releases the semaphore • Either the highest priority task waiting for the semaphore is readied (uC/OS-II supports this one) • Or the first task that requested the semaphore is readied (FIFO)

  28. Accessing shared data by obtaining a semaphore
  OS_EVENT *SharedDataSem;
  void Function (void)
  {
      INT8U err;
      OSSemPend(SharedDataSem, 0, &err);
      .
      .    /* You can access shared data in here (interrupts are recognized) */
      .
      OSSemPost(SharedDataSem);
  }

  29. Control of shared resources (mutual exclusion) • E.g., a single display device shared by two tasks:
  task1( ... ) { ... printf("This is task 1."); ... }
  task2( ... ) { ... printf("This is task 2."); ... }
  • Without mutual exclusion, the interleaved output may be: ThiThsi siis ttasaks k12.. • Exclusive usage of certain resources (e.g., shared memory) is required

  30. Solution: use a semaphore initialized to 1 • Each task must know about the existence of the semaphore in order to access the resource • In some situations, encapsulating the semaphore is better

  31. Figure 2.11 Hiding a semaphore from tasks
  INT8U CommSendCmd(char *cmd, char *response, INT16U timeout)
  {
      Acquire port's semaphore;
      Send command to device;
      Wait for response (with timeout);
      if (timed out) {
          Release semaphore;
          return (error code);
      } else {
          Release semaphore;
          return (no error);
      }
  }

  32. Counting semaphore
  BUF *BufReq(void)
  {
      BUF *ptr;
      Acquire a semaphore;
      Disable interrupts;
      ptr = BufFreeList;
      BufFreeList = ptr->BufNext;
      Enable interrupts;
      return (ptr);
  }
  void BufRel(BUF *ptr)
  {
      Disable interrupts;
      ptr->BufNext = BufFreeList;
      BufFreeList = ptr;
      Enable interrupts;
      Release semaphore;
  }

  33. Deadlock • To avoid a deadlock, tasks should • Acquire all resources before proceeding • Acquire the resources in the same order • Release the resources in the reverse order • Use a timeout when acquiring a semaphore • When a timeout occurs, a returned error code prevents the task from thinking it has obtained the resource • Deadlocks generally occur in large multitasking systems, not in embedded systems

  34. Synchronization • A task can be synchronized with an ISR or with another task • Unilateral rendezvous: a task initiates an I/O operation and waits for the semaphore • When the I/O operation is complete, an ISR (or another task) signals the semaphore and the task is resumed • Bilateral rendezvous: two tasks synchronize their activities with each other

  35. Bilateral rendezvous
  Task1()
  {
      for (;;) {
          Perform operation;
          Signal task #2;                /* (1) */
          Wait for signal from task #2;  /* (2) */
          Continue operation;
      }
  }
  Task2()
  {
      for (;;) {
          Perform operation;
          Signal task #1;                /* (3) */
          Wait for signal from task #1;  /* (4) */
          Continue operation;
      }
  }

  36. Event Flags (uC/OS-II supports event flags) • Used when a task needs to synchronize with the occurrence of multiple events

  37. Common events can be used to signal multiple tasks

  38. Intertask Communication • A task or an ISR communicates information to another task • There are two ways of intertask communication • Through global data (disabling/enabling interrupts, or using a semaphore) • A task can only communicate information to an ISR by using global variables • Interrupt disabling is the only way to ensure exclusive access when sharing with an ISR • A task is not aware when a global variable is changed (unless it uses a semaphore or polls periodically) • Sending messages • Message mailbox or message queue (the task can be made aware of the data change)

  39. Message Mailboxes (using a pointer-size variable) • A task desiring a message from an empty mailbox is suspended and placed on the waiting list until a message is received • The kernel allows the task waiting for a message to specify a timeout • When a message is deposited into the mailbox, the kernel must select a task from the waiting list • Priority based • FIFO • uC/OS-II provides the following kernel mailbox services • Initialize the contents of a mailbox (create) • Deposit a message (post) • Wait for a message (pend) • Get a message from a mailbox, if one is present, but do not suspend the caller if the mailbox is empty (accept)

  40. Message Queues • Used to send one or more messages to a task • A task or an ISR can deposit a message into a message queue • One or more tasks can receive messages through a service provided by the kernel • A waiting list is associated with each message queue • The kernel allows the task waiting for a message to specify a timeout • A message queue is basically an array of mailboxes • The first message inserted in the queue is the first message extracted from the queue (FIFO); some kernels also allow Last-In-First-Out (LIFO) ordering
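A message queue reduces to a circular array of message pointers. This sketch omits the waiting list and the blocking that a real kernel adds:

```c
#include <stddef.h>

#define QSIZE 4

/* FIFO queue of message pointers ("an array of mailboxes"). */
typedef struct {
    void *msg[QSIZE];
    int   head, tail, count;
} MsgQ;

static int q_post(MsgQ *q, void *m) {     /* deposit a message */
    if (q->count == QSIZE)
        return -1;                        /* queue full */
    q->msg[q->tail] = m;
    q->tail = (q->tail + 1) % QSIZE;
    q->count++;
    return 0;
}

static void *q_pend(MsgQ *q) {            /* extract the oldest message */
    if (q->count == 0)
        return NULL;                      /* real kernel: block the caller */
    void *m = q->msg[q->head];
    q->head = (q->head + 1) % QSIZE;
    q->count--;
    return m;
}
```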

  41. Interrupts • When an interrupt is recognized, the CPU saves • The return address (of the interrupted task) • The flags • It then jumps to the Interrupt Service Routine (ISR) • Upon completion of the ISR, the program returns to • The background, for a foreground/background system • The interrupted task, for a non-preemptive kernel • The highest priority task ready to run, for a preemptive kernel • Interrupts may nest (interrupt nesting)

  42. Interrupt Latency, Response, and Recovery • The most important specification of a real-time kernel is the maximum amount of time interrupts are disabled • The longer interrupts are disabled, the higher the interrupt latency • Interrupt latency = Maximum amount of time interrupts are disabled + Time to start executing the first instruction in the ISR • Interrupt response (defined as the time between the reception of the interrupt and the start of the user code that handles the interrupt) • Foreground/background and non-preemptive kernel (no kernel ISR entry function is needed): Interrupt latency + Time to save the CPU's context • Preemptive kernel: Interrupt latency + Time to save the CPU's context + Execution time of the kernel ISR entry function (OSIntEnter()) • Interrupt recovery (the time required for the processor to return to the interrupted code) • Foreground/background and non-preemptive kernel: Time to restore the CPU's context + Time to execute the return-from-interrupt instruction • Preemptive kernel: Time to determine if a higher priority task is ready + Time to restore the CPU's context of the highest priority task + Time to execute the return-from-interrupt instruction

  43. Figure 2.20 Interrupt latency, response, and recovery (foreground/background)

  44. Figure 2.21 Interrupt latency, response, and recovery (non-preemptive kernel)

  45. Figure 2.22 Interrupt latency, response, and recovery (preemptive kernel) (context switch)

  46. Nonmaskable Interrupts (NMIs) • An NMI cannot be disabled • Interrupt latency, response, and recovery are minimal • Interrupt latency = Time to execute the longest instruction + Time to start executing the NMI ISR • Interrupt response = Interrupt latency + Time to save the CPU's context • Interrupt recovery = Time to restore the CPU's context + Time to execute the return-from-interrupt instruction • Because NMIs cannot be disabled to access critical sections of code, you cannot use kernel services to signal a task • Example: an NMI is used in an application to respond to an interrupt that occurs every 150 us. The ISR takes 80 to 125 us of processing time, and the kernel may disable interrupts for about 45 us. With a maskable interrupt, the ISR could have been late by 20 us (125 + 45 > 150). • (Figure: disabling nonmaskable interrupts in hardware)

  47. Signaling a task from a nonmaskable interrupt • When you are servicing an NMI, you cannot use kernel services to signal a task, because NMIs cannot be disabled to access critical sections of code • You can pass parameters to and from the NMI • Parameters passed must be global variables, and these variables must be read or written indivisibly; that is, not as separate byte read or write instructions • Example: suppose the NMI service routine needs to signal a task every 40 times it executes • If the NMI occurs every 150 us, a signal would be required every 40 * 150 us = 6 ms • From an NMI ISR you cannot use the kernel to signal the task, but you can have the NMI ISR (every 150 us) trigger a maskable interrupt (every 6 ms), whose ISR is allowed to use kernel services to signal the task
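The counting scheme can be sketched as follows; maskable_isr() stands in for the maskable-interrupt handler that would actually post the semaphore, and the direct call models the software-triggered interrupt:

```c
/* The NMI ISR decrements a counter; every 40th execution it triggers a
   maskable interrupt (modeled as a direct call here), whose handler is
   allowed to use kernel services such as OSSemPost(). */
static int nmi_count = 40;
static int signals_sent = 0;   /* stands in for the semaphore post */

static void maskable_isr(void) {
    signals_sent++;            /* real code: OSSemPost(...) */
}

static void nmi_isr(void) {
    if (--nmi_count == 0) {
        nmi_count = 40;
        maskable_isr();        /* real code: raise a software interrupt */
    }
}
```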

  48. Clock Tick • A special interrupt that occurs periodically • Allows the kernel to delay tasks for an integral number of clock ticks • Provides timeouts when tasks are waiting for events to occur • The faster the tick rate, the higher the overhead imposed on the system

  49. Figure 2.25 Delaying a task for one tick (case 1) • Because higher priority tasks and ISRs execute before the delayed task, the delayed task actually executes at varying intervals (this variance is called jitter)
