
Chapter 2 Real-Time Systems Concepts



  1. Chapter 2 Real-Time Systems Concepts

  2. Foreground/Background Systems (Figure 2.1: a background loop interrupted by ISRs over time) • The application consists of an infinite loop that calls modules to perform the desired operations (background) • Interrupt service routines (ISRs) handle asynchronous events (foreground) • The worst-case task-level response time depends on how long the background loop takes to execute • Because the execution time of typical code is not constant, the time for successive passes through a portion of the loop is nondeterministic (see the sketch below)
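
  A minimal sketch of such a system (the handler name and the flag polled by the background loop are hypothetical, used only for illustration):

      #include <stdbool.h>

      static volatile bool g_data_ready = false;   /* set by the ISR (foreground)            */

      void UART_ISR(void)                          /* hypothetical interrupt handler          */
      {
          g_data_ready = true;                     /* minimal work: just record the event     */
      }

      int main(void)
      {
          for (;;) {                               /* background (super-) loop                */
              if (g_data_ready) {                  /* poll the flag set in the foreground     */
                  g_data_ready = false;
                  /* process the data at task level; the response time depends on how long
                     one pass through this loop takes                                         */
              }
              /* other background modules are called here */
          }
      }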

  3. Critical Section • A critical section is a piece of code that must not be interrupted during execution • Typical critical sections • Modifying a block of memory shared by multiple kernel services • The process (task) table • The ready queue, waiting queue, delay queue, etc. • Modifying global variables used by the kernel • Entering a critical section • Disable global interrupts - disable() • Leaving a critical section • Enable global interrupts - enable()

  4. Resource • e.g., I/O device, CPU, memory, printer, … • Shared resource • A resource that is used by more than one task • Each task must gain exclusive access to the shared resource to prevent data corruption → mutual exclusion • Task and thread (used interchangeably here) • A simple program that thinks it has the CPU all to itself • Each task is an infinite loop (see the sketch below) • A task can be in one of five states: Dormant, Ready, Running, Waiting (for an event), or ISR (interrupted)
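
  A minimal sketch of a task written as an infinite loop, in uC/OS-II style (the task name, stack size, and priority are hypothetical; OSTaskCreate() and OSTimeDly() are standard uC/OS-II calls):

      #include "includes.h"                    /* uC/OS-II master include, as in the book's examples */

      #define  BLINK_TASK_PRIO   10            /* hypothetical priority                        */
      #define  BLINK_STK_SIZE    128           /* hypothetical stack size (in OS_STK units)    */

      static OS_STK BlinkStk[BLINK_STK_SIZE];

      static void BlinkTask(void *pdata)       /* a task is a function that never returns      */
      {
          pdata = pdata;                       /* unused argument                              */
          for (;;) {                           /* each task is an infinite loop                */
              /* toggle an LED, read a sensor, ...                                             */
              OSTimeDly(10);                   /* give up the CPU for 10 clock ticks           */
          }
      }

      /* Created from main() or another task (top-of-stack form assumes a downward-growing stack):
       *     OSTaskCreate(BlinkTask, (void *)0, &BlinkStk[BLINK_STK_SIZE - 1], BLINK_TASK_PRIO);
       */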

  5. Figure 2.2 Multiple Tasks

  6. Figure 2.3 Task States

  7. Context Switch (or Task Switch) • Each task has its own stack to store the associated information • The purpose of a context switch • To make sure that the task that is forced to give up the CPU can resume later without losing any vital information or data • The procedure of an interrupt (interrupted code, hardware actions, and the ISR):

      MP_addr(n)   : instruction(n)        <-- interrupt occurs here
      MP_addr(n+1) : instruction(n+1)
      MP_addr(n+2) : instruction(n+2)

      ; on interrupt (hardware):
      push flags
      push MP_addr(n+1)              ; return address
      ; ISR (software):
      push CPU registers
      ; ... ISR code ...
      pop CPU registers
      return from interrupt          ; pops the return address and flags, jumps back to MP_addr(n+1)

  8. Foreground/background programming • Context (CPU registers and the interrupted program address) is saved and restored using one stack • Multitasking context switch • Uses each task's own stack (diagram: Task 1's stack, the task being switched from, holds its local variables and an optional return address; Task 2's stack, the task being switched to, additionally holds the CPU registers and code address saved when Task 2 was last switched out; the current CPU stack pointer points into Task 1's stack)

  9. During the context switch (diagram: the current program address and CPU registers are pushed onto Task 1's stack; each task's TCB (task control block) stores that task's stack pointer; Task 2's stack still holds the CPU registers and code address saved when Task 2 was last switched out; the current CPU stack pointer still points into Task 1's stack)

  10. After the context switch (diagram: Task 1's stack, now suspended, holds the CPU registers and code address saved when Task 1 was switched out, along with its local variables and optional return address; the current CPU stack pointer now points into Task 2's stack, the current task)

  11. The Operations of a Context Switch • Relies on an interrupt (hardware or software, entering kernel mode) to perform the context switch • Push return address • Push FLAGS register • ISR (context switch routine) • Push all registers • Store SP into the TCB (task control block) of the current task • Select the ready task with the highest priority (scheduler) • Restore SP from the TCB of the newly selected task • Pop all registers • iret (interrupt return, which pops FLAGS and the return address) • Execution resumes in the new task (a sketch of the TCB and this sequence follows)
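
  A simplified sketch of the data involved (the real uC/OS-II OS_TCB contains many more fields than the stack pointer shown here, and the switch itself is written in assembly in a real port; the comments below only outline the steps listed above):

      typedef unsigned int OS_STK_WORD;        /* width of one stack entry (port specific)     */

      typedef struct tcb {
          OS_STK_WORD *StkPtr;                 /* saved stack pointer of the task              */
          /* ... priority, state, delay counter, links, ... (omitted)                          */
      } TCB;

      TCB *CurrentTCB;                         /* task being switched away from                */
      TCB *HighRdyTCB;                         /* highest priority ready task (from scheduler) */

      /* Outline of the switch, performed inside the context-switch ISR:
       *   1. hardware pushed the return address and FLAGS when the interrupt occurred
       *   2. push all CPU registers onto the current task's stack
       *   3. CurrentTCB->StkPtr = SP;           save the stack pointer in the current TCB
       *   4. CurrentTCB = HighRdyTCB;           the scheduler selected the new task
       *   5. SP = CurrentTCB->StkPtr;           load the new task's saved stack pointer
       *   6. pop all CPU registers from the new task's stack
       *   7. iret                               pops FLAGS and the return address; the new task runs
       */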

  12. Scheduler • Also called the dispatcher • Determines which task will run next • In a priority-based kernel, control of the CPU is always given to the highest priority task ready to run • Two types of priority-based kernels • Non-preemptive • Preemptive

  13. Non-preemptive Kernel • A task voluntarily gives up control of the CPU • Also called cooperative multitasking • Tasks cooperate with each other to share the CPU • Advantages • Non-reentrant functions can be used without fear of corruption by another task (less need to guard shared data with semaphores) • Interrupt latency is typically low (interrupts rarely need to be disabled) • Task-level response time is much lower than in a foreground/background system • Worst case: the execution time of the longest task

  14. Figure 2.4 Non-preemptive kernel

  15. Preemptive Kernel • Used when system responsiveness is important • The highest priority task ready to run is always given control of the CPU • When a task makes a higher priority task ready to run, the current task is preempted and the higher priority task is immediately given control of the CPU • If an ISR makes a higher priority task ready, then when the ISR completes, the interrupted task is suspended and the higher priority task is resumed • Non-reentrant functions should be used only in conjunction with mutual exclusion semaphores

  16. Figure 2.5 Preemptive kernel

  17. Reentrancy • A reentrant function • Can be used by more than one task concurrently without fear of data corruption • Can be interrupted at any time and resumed later without loss of data • Uses local variables (CPU registers or the stack) instead of shared globals

      void strcpy(char *dest, char *src)
      {
          while (*dest++ = *src++) {
              ;
          }
          *dest = NUL;
      }
      Listing 2.1 Reentrant function

      int Temp;

      void swap(int *x, int *y)
      {
          Temp = *x;
          *x   = *y;
          *y   = Temp;
      }
      Listing 2.2 Non-reentrant function (Temp is a shared global)
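
  For comparison, a sketch of how swap() could be made reentrant by moving Temp onto the stack (an illustration, not one of the book's listings):

      void swap(int *x, int *y)
      {
          int temp;                    /* local variable: each task (and each interruption) gets its own copy */

          temp = *x;
          *x   = *y;
          *y   = temp;
      }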

  18. Figure 2.6 Non-reentrant function

  19. Task Priority • Static priorities • The priority of each task does not change during the application’s execution • Dynamic priorities • The priority of tasks can be changed during the application’s execution

  20. Figure 2.7 Priority Inversion problem

  21. Figure 2.8 Kernel that supports priority inheritance

  22. Assigning Task Priorities • Rate Monotonic Scheduling (RMS) • Tasks with the highest rate of execution (shortest period) are given the highest priority • RMS makes a number of assumptions: • All tasks are periodic (they occur at regular intervals) • Tasks do not synchronize with one another, share resources, or exchange data • The CPU must always execute the highest priority task that is ready to run; in other words, preemptive scheduling must be used • If the following inequality is satisfied, all task HARD real-time deadlines will be met: sum of (Ei / Ti) <= n(2^(1/n) - 1), where Ei is the worst-case execution time of task i and Ti is its period • As n grows, the bound approaches ln(2) ≈ 0.693, so the CPU utilization of all time-critical tasks should be kept below about 70%; the other 30% can be used by non-time-critical tasks (see the worked example below)
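
  As a worked example with hypothetical numbers: three tasks with execution times of 1, 2, and 3 ms and periods of 10, 20, and 40 ms give a total utilization of 1/10 + 2/20 + 3/40 = 0.275, which is below the n = 3 bound of 3(2^(1/3) - 1) ≈ 0.780, so under the RMS assumptions all three deadlines are guaranteed to be met.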

  23. Mutual Exclusion • When multiple tasks access the same data or resource (a critical section), each task must be ensured exclusive access to avoid contention and data corruption • Methods of obtaining exclusive access • Disabling interrupts • Performing test-and-set operations • Disabling scheduling • Using semaphores

  24. Disabling and enabling interrupts

      Disable interrupts;
      Access the resource (read/write from/to variables);
      Reenable interrupts;

  • uC/OS-II provides two macros to disable/enable interrupts:

      void Function (void)
      {
          OS_ENTER_CRITICAL();
          .
          .    /* You can access shared data in here */
          .
          OS_EXIT_CRITICAL();
      }

  • Do not disable interrupts for too long • Doing so affects the response of your system to interrupts (interrupt latency)

  25. Test-and-Set (TAS)

      Disable interrupts;
      if ('Access Variable' is 0) {
          Set variable to 1;
          Reenable interrupts;
          Access the resource;
          Disable interrupts;
          Set the 'Access Variable' back to 0;
          Reenable interrupts;
      } else {
          Reenable interrupts;
          /* You don't have access to the resource, try back later; */
      }

  • Some processors actually implement a TAS operation in hardware (e.g., the 68000 family), as sketched below
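
  A sketch of the same idea using a hardware-backed test-and-set via C11 atomics (this is not part of uC/OS-II; on a 68000 the compiler or port would use the TAS instruction instead, and the function and variable names here are hypothetical):

      #include <stdatomic.h>
      #include <stdbool.h>

      static atomic_flag resource_lock = ATOMIC_FLAG_INIT;   /* the 'Access Variable'          */

      bool TryUseResource(void)
      {
          if (atomic_flag_test_and_set(&resource_lock)) {    /* was already 1: someone owns it */
              return false;                                  /* no access, try back later      */
          }
          /* ... access the resource exclusively ... */
          atomic_flag_clear(&resource_lock);                 /* set the variable back to 0     */
          return true;
      }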

  26. Disabling and Enabling the Scheduler • If no variables or data structures are shared with an ISR, we can disable and enable scheduling instead of interrupts • Two or more tasks can then share data without contention • While the scheduler is locked, interrupts remain enabled • If an ISR makes a higher priority task ready, that task will run as soon as OSSchedUnlock() is called

      void Function (void)
      {
          OSSchedLock();
          .
          .    /* You can access shared data in here */
          .
          OSSchedUnlock();
      }

  27. Semaphores • Semaphores are used to • Control access to a shared resource (mutual exclusion) • Signal the occurrence of an event • Allow two tasks to synchronize their activities • Two types of semaphores • Binary semaphores • Counting semaphores • Three operations on a semaphore • INITIALIZE (also called CREATE) • WAIT (also called PEND) • SIGNAL (also called POST) -- releases the semaphore to either: • The highest priority task waiting for the semaphore (this is what uC/OS-II supports), or • The first task that requested the semaphore (FIFO)

  28. Accessing shared data by obtaining a semaphore

      OS_EVENT *SharedDataSem;

      void Function (void)
      {
          INT8U err;

          OSSemPend(SharedDataSem, 0, &err);
          .
          .    /* You can access shared data in here (interrupts are recognized) */
          .
          OSSemPost(SharedDataSem);
      }

  29. Control of shared resources (mutual exclusion) • e.g., a single display device shared by two tasks:

      task1( ... )
      {
          ...
          printf("This is task 1.");
          ...
      }

      task2( ... )
      {
          ...
          printf("This is task 2.");
          ...
      }

  • Without exclusive access, the output may be interleaved: ThiThsi siis ttasaks k12.. • Exclusive usage of certain resources is required (e.g., shared memory, a display)

  30. Solution: use a semaphore and initialize it to 1 • Each task must know about the existence of the semaphore in order to access the resource • In some situations it is better to encapsulate (hide) the semaphore inside the functions that access the resource (a sketch of the first approach follows; the next slide shows the encapsulated form)
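
  A minimal sketch of the first approach for the display example (DispSem and the task bodies are hypothetical; OSSemCreate(), OSSemPend(), OSSemPost(), and OSTimeDly() are standard uC/OS-II calls):

      #include <stdio.h>
      #include "includes.h"                    /* uC/OS-II declarations                            */

      OS_EVENT *DispSem;                       /* created once at startup: DispSem = OSSemCreate(1); */

      void Task1(void *pdata)
      {
          INT8U err;

          pdata = pdata;
          for (;;) {
              OSSemPend(DispSem, 0, &err);     /* wait (forever) for exclusive use of the display */
              printf("This is task 1.");
              OSSemPost(DispSem);              /* give the display back                           */
              OSTimeDly(1);
          }
      }

      void Task2(void *pdata)
      {
          INT8U err;

          pdata = pdata;
          for (;;) {
              OSSemPend(DispSem, 0, &err);
              printf("This is task 2.");
              OSSemPost(DispSem);
              OSTimeDly(1);
          }
      }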

  31. Figure 2.11 Hiding a semaphore from tasks

      INT8U CommSendCmd(char *cmd, char *response, INT16U timeout)
      {
          Acquire port's semaphore;
          Send command to device;
          Wait for response (with timeout);
          if (timed out) {
              Release semaphore;
              return (error code);
          } else {
              Release semaphore;
              return (no error);
          }
      }

  32. Counting semaphore (managing a pool of buffers)

      BUF *BufReq(void)
      {
          BUF *ptr;

          Acquire a semaphore;
          Disable interrupts;
          ptr         = BufFreeList;
          BufFreeList = ptr->BufNext;
          Enable interrupts;
          return (ptr);
      }

      void BufRel(BUF *ptr)
      {
          Disable interrupts;
          ptr->BufNext = BufFreeList;
          BufFreeList  = ptr;
          Enable interrupts;
          Release semaphore;
      }

  33. Deadlock • To avoid a deadlock, tasks should • Acquire all resources before proceeding • Acquire the resources in the same order • Release the resources in the reverse order • Use a timeout when acquiring a semaphore • When a timeout occurs, the returned error code prevents the task from thinking it has obtained the resource • Deadlocks generally occur in large multitasking systems, not in embedded systems (see the sketch after this list)
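
  A sketch of these rules with two uC/OS-II semaphores (the semaphore names, the 100-tick timeout, and the error check are illustrative; exact error constants such as OS_NO_ERR vary between uC/OS-II versions):

      #include "includes.h"                    /* uC/OS-II declarations                                  */

      OS_EVENT *ResASem;                       /* both created with OSSemCreate(1)                       */
      OS_EVENT *ResBSem;

      void UseBothResources(void)
      {
          INT8U err;

          OSSemPend(ResASem, 100, &err);       /* always acquire A first (same order), 100-tick timeout  */
          if (err != OS_NO_ERR) {              /* timed out (or other error): we do NOT own A            */
              return;
          }
          OSSemPend(ResBSem, 100, &err);       /* then B                                                 */
          if (err != OS_NO_ERR) {
              OSSemPost(ResASem);              /* give A back rather than holding it forever             */
              return;
          }
          /* ... use resource A and resource B ... */
          OSSemPost(ResBSem);                  /* release in reverse order                               */
          OSSemPost(ResASem);
      }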

  34. Synchronization • A task can be synchronized with an ISR or with another task • Unilateral rendezvous: a task initiates an I/O operation and waits on a semaphore; when the I/O operation is complete, an ISR (or another task) signals the semaphore and the task resumes • Bilateral rendezvous: two tasks synchronize their activities with each other (see the next slide)

  35. Bilateral rendezvous

      Task1()
      {
          for (;;) {
              Perform operation;
              Signal task #2;                 (1)
              Wait for signal from task #2;   (2)
              Continue operation;
          }
      }

      Task2()
      {
          for (;;) {
              Perform operation;
              Signal task #1;                 (3)
              Wait for signal from task #1;   (4)
              Continue operation;
          }
      }
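
  A sketch of the same rendezvous using two uC/OS-II semaphores, each created with an initial count of 0 (the semaphore and task names are hypothetical; OSSemPost() and OSSemPend() are standard calls):

      #include "includes.h"                    /* uC/OS-II declarations                           */

      OS_EVENT *Task1DoneSem;                  /* both created with OSSemCreate(0)                */
      OS_EVENT *Task2DoneSem;

      void Task1(void *pdata)
      {
          INT8U err;

          pdata = pdata;
          for (;;) {
              /* perform operation */
              OSSemPost(Task1DoneSem);             /* (1) signal task #2                          */
              OSSemPend(Task2DoneSem, 0, &err);    /* (2) wait for signal from task #2            */
              /* continue operation */
          }
      }

      void Task2(void *pdata)
      {
          INT8U err;

          pdata = pdata;
          for (;;) {
              /* perform operation */
              OSSemPost(Task2DoneSem);             /* (3) signal task #1                          */
              OSSemPend(Task1DoneSem, 0, &err);    /* (4) wait for signal from task #1            */
              /* continue operation */
          }
      }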

  36. Event Flags (uC/OS-II does not support them) • Used when a task needs to synchronize with the occurrence of multiple events

  37. Common events can be used to signal multiple tasks

  38. Intertask Communication • A task or an ISR often needs to communicate information to another task • There are two ways of intertask communication • Through global data (protected by disabling/enabling interrupts or by a semaphore) • A task can only communicate information to an ISR through global variables • A task is not aware that a global variable has been changed (unless a semaphore is used or the task polls the variable periodically) • By sending messages • Message mailbox or message queue

  39. Message Mailboxes • A task desiring a message from an empty mailbox is suspended and placed on the mailbox's waiting list until a message is received • The kernel allows the task waiting for a message to specify a timeout • When a message is deposited into the mailbox, it is given either to the highest priority task waiting for it (priority based) or to the first task that started waiting (FIFO)
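
  A minimal sketch of mailbox use with the standard uC/OS-II calls OSMboxCreate(), OSMboxPost(), and OSMboxPend() (the mailbox name, message type, timeout, and ReadSensor() driver call are hypothetical):

      #include "includes.h"                    /* uC/OS-II declarations                               */

      INT16U ReadSensor(void);                 /* hypothetical driver function                        */

      OS_EVENT *TempMbox;                      /* created once: TempMbox = OSMboxCreate((void *)0);   */

      void ProducerTask(void *pdata)
      {
          static INT16U temperature;           /* static: must remain valid after the post            */

          pdata = pdata;
          for (;;) {
              temperature = ReadSensor();
              OSMboxPost(TempMbox, (void *)&temperature);   /* deposit a pointer-sized message        */
              OSTimeDly(10);
          }
      }

      void ConsumerTask(void *pdata)
      {
          INT8U   err;
          INT16U *pmsg;

          pdata = pdata;
          for (;;) {
              pmsg = (INT16U *)OSMboxPend(TempMbox, 100, &err);  /* wait up to 100 ticks              */
              if (pmsg != (INT16U *)0) {
                  /* use *pmsg */
              }
          }
      }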

  40. Message Queues • Used to send one or more messages to a task • Basically an array of mailboxes • The first message inserted into the queue is normally the first message extracted from it (FIFO), although some kernels also allow last-in-first-out (LIFO) ordering

  41. Interrupts • When an interrupt is recognized, the CPU saves • The return address (of the interrupted code) • The flags • and jumps to the Interrupt Service Routine (ISR) • Upon completion of the ISR, the program returns to • The background, in a foreground/background system • The interrupted task, for a non-preemptive kernel • The highest priority task ready to run, for a preemptive kernel • (Figure: interrupt nesting)

  42. Interrupt Latency, Response, and Recovery • The most important specification of a real-time kernel is the maximum amount of time interrupts are disabled
  • Interrupt latency = maximum amount of time interrupts are disabled + time to start executing the first instruction in the ISR
  • Interrupt response (the time between the reception of the interrupt and the start of the user code that handles it)
      • Foreground/background: interrupt latency + time to save the CPU's context
      • Non-preemptive kernel: interrupt latency + time to save the CPU's context
      • Preemptive kernel: interrupt latency + time to save the CPU's context + execution time of the kernel ISR entry function
  • Interrupt recovery (the time required for the processor to return to the interrupted code)
      • Foreground/background: time to restore the CPU's context + time to execute the return-from-interrupt instruction
      • Non-preemptive kernel: time to restore the CPU's context + time to execute the return-from-interrupt instruction
      • Preemptive kernel: time to determine whether a higher priority task is ready + time to restore the CPU's context of the highest priority task + time to execute the return-from-interrupt instruction

  43. Figure 2.20 foreground/background

  44. Figure 2.21 non-preemptive kernel

  45. Figure 2.22 preemptive kernel

  46. Nonmaskable Interrupts (NMIs) • An NMI cannot be disabled, so its latency, response, and recovery are minimal
  • Interrupt latency = time to execute the longest instruction + time to start executing the NMI ISR
  • Interrupt response = interrupt latency + time to save the CPU's context
  • Interrupt recovery = time to restore the CPU's context + time to execute the return-from-interrupt instruction
  • (Figures: disabling nonmaskable interrupts; signaling a task from a nonmaskable interrupt, with an NMI every 150 us and the task signaled every 150 us * 40 = 6 ms)

  47. Clock Tick • A special interrupt that occurs periodically • Allows the kernel to delay a task for an integral number of clock ticks • Provides timeouts when tasks are waiting for events to occur • The faster the tick rate, the higher the overhead imposed on the system (a small example follows)
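
  A small sketch using the standard uC/OS-II call OSTimeDly() (the tick rate and task body are hypothetical); as Figures 2.25 and 2.26 illustrate, a one-tick delay actually lasts somewhere between zero and one full tick period, depending on where within the current tick the call is made:

      #include "includes.h"                    /* uC/OS-II declarations                           */

      /* With OS_TICKS_PER_SEC configured as 100 (a hypothetical value), one tick = 10 ms.        */

      void PollTask(void *pdata)
      {
          pdata = pdata;
          for (;;) {
              /* sample an input, update an output, ... */
              OSTimeDly(1);                    /* suspend until the next clock tick: the actual   */
                                               /* delay is anywhere between 0 and 10 ms, not      */
                                               /* exactly one full tick                           */
          }
      }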

  48. Figure 2.25 Delaying a task for one tick (case 1)

  49. Figure 2.26 Delaying a task for one tick (case 2)
