
Embedded Operating System



Presentation Transcript


  1. Embedded Operating System Unit-I ( 4 Hrs.)

  2. Course Objective • To learn the concepts of embedded system processors and operating systems • To develop the ability to use embedded operating system utilities in Embedded Linux

  3. Content of Unit-I • Reference Book: Embedded Systems: An Integrated Approach by Lyla B. Das (Chapters 7 & 8) • Operating System Concepts (Chapter 7) • Real-Time Tasks (Chapter 8) • Real-Time Systems (Chapter 8) • Types of Real-Time Tasks (Chapter 8) • Real-Time Operating Systems (Chapter 8)

  4. Operating System Concepts (Chapter 7) • Embedded Operating System • An "embedded system" is any computer system or computing device that performs a dedicated function or is designed for use with a specific embedded software application. • Embedded systems may use a ROM-based operating system or they may use a disk-based system, like a PC. But an embedded system is not usable as a commercially viable substitute for general purpose computers or devices.

  5. Operating System Concepts (Chapter 7) • Characteristics of an Embedded Operating System • Real-time operation • correctness of a computation depends, in part, on the time at which the result is delivered • Reactive operation • needs to consider worst-case execution conditions in order to respond to external events that do not occur at predictable intervals • Configurability • supports flexible configuration so that only the functionality needed for a specific application and hardware suite is provided • e.g., allows the builder to select only the necessary OS modules to load • I/O device flexibility • handles devices by using special tasks instead of integrating their drivers into the OS kernel

  6. Operating System Concepts (Chapter 7) • Characteristics of an Embedded Operating System • Streamlined protection mechanisms • requires only limited protection, because tested software can be assumed to be reliable • e.g., I/O instructions need not be privileged instructions that trap to the OS, so tasks can directly perform their own I/O • avoiding an OS service call also avoids the overhead of saving and restoring the task context • Direct use of interrupts • permits a user process to use interrupts directly • no need to go through OS interrupt service routines • gives efficient control over a variety of devices

  7. Operating System Concepts (Chapter 7) • Layers of an Operating System: • A program that controls the execution of application programs • An interface between applications and hardware

  8. Operating System Concepts (Chapter 7) • History of Operating Systems: • Two major versions of UNIX developed: System V, from AT&T, and BSD (Berkeley Software Distribution), from the University of California at Berkeley (1977) • To allow programs to run on any UNIX system, IEEE developed a standard for UNIX, called POSIX • In 1987, Andrew S. Tanenbaum released a small clone of UNIX, called MINIX, for educational purposes; functionally, MINIX is very similar to UNIX, including POSIX support • The Linux system was developed on MINIX and originally supported various MINIX features such as the MINIX file system • The IBM PC/AT came out in 1984 with the Intel 80286 CPU.

  9. Operating System Concepts (Chapter 7) • History of Operating Systems: • Bell Labs found they needed an operating system for their computer center, which at the time was running various batch jobs; the BESYS operating system was created at Bell Labs to meet these needs (1957) • Unix was developed in 1969; the first edition of Unix was released on 11/03/1971 • Two major versions of UNIX developed: System V, from AT&T, and BSD (Berkeley Software Distribution), from the University of California at Berkeley (1977) • To allow programs to run on any UNIX system, IEEE developed a standard for UNIX, called POSIX • In 1987, Andrew S. Tanenbaum released a small clone of UNIX, called MINIX, for educational purposes; functionally, MINIX is very similar to UNIX, including POSIX support • The Linux system was developed on MINIX and originally supported various MINIX features such as the MINIX file system • The IBM PC/AT came out in 1984 with the Intel 80286 CPU • In 1994, Red Hat Linux was introduced • IRIX 6.5, the fifth generation of SGI Unix, was released on July 6, 1998.

  10. Operating System Concepts (Chapter 7) • Components of an Operating System: • Process Management • Memory Management • File Management • I/O Management • Network Management

  11. Memory Management Tasks • All programs are executed from RAM; the OS keeps track of RAM for allocation and deallocation • Keeps track of secondary storage and allocates space as required • The most important task is the implementation of virtual memory

  12. Operating System Concepts (Chapter 7) • The Kernel • Monolithic Kernel • A monolithic kernel is one single program that contains all of the code necessary to perform every kernel-related task • Since less software is involved, it is faster • Examples: Unix, Linux, Solaris • A monolithic kernel still supports modularity: modules can be statically linked at compile time or dynamically loaded; device drivers are installed as loadable kernel modules (LKMs) • Note that these modules ultimately become part of the kernel
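
As a rough illustration of the LKM mechanism mentioned above (not taken from the reference text), a minimal Linux loadable kernel module can look like the sketch below; the function names and messages are hypothetical.

  #include <linux/init.h>
  #include <linux/module.h>

  /* Called when the module is inserted into the running kernel (e.g. via insmod). */
  static int __init hello_init(void)
  {
      pr_info("hello: module loaded\n");
      return 0;                          /* 0 = success */
  }

  /* Called when the module is removed (e.g. via rmmod). */
  static void __exit hello_exit(void)
  {
      pr_info("hello: module unloaded\n");
  }

  module_init(hello_init);
  module_exit(hello_exit);
  MODULE_LICENSE("GPL");

Once loaded, the module's code runs in kernel space as part of the monolithic kernel, which is exactly the point made in the last bullet.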

  13. Operating System Concepts (Chapter 7) • The Kernel • Microkernel • Only the most fundamental tasks are performed in the kernel, such as accessing some (not necessarily all) of the hardware, managing memory, and coordinating message passing between processes • Some parts of the kernel are separated out and designated as servers; the most important servers run in kernel space, the others in user space • IPC is needed for communication between servers (making operations slower, but the modularity prevents a total crash) • Examples: Minix, BeOS, GNU Hurd

  14. Operating System Concepts (Chapter 7) • Tasks/Processes • Process States • New • Ready • Running • Blocked • Exit • Process Control Block • Multitasking • Task Scheduling • CPU and IO Bound Tasks • Scheduling Algorithms

  15. Process States • New: The process has been created but has not yet been admitted to the ready queue • Ready: Submitted for execution; all resources are assigned except the CPU • Running: Currently executing on the CPU • Blocked: Cannot execute until a specific event occurs • Exit: Normal or abnormal termination of the process

  16. Process Control Block A table containing all information about the process: • The unique process id • The process state • Scheduling information for the process • Registers used and their contents • Pointers to the memory it uses • Program counter value • Priority of the task
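
As a language-neutral sketch (not the actual layout used by any particular OS), the fields listed above might be grouped into a C structure like the one below; all names are hypothetical.

  #include <stdint.h>

  enum proc_state { P_NEW, P_READY, P_RUNNING, P_BLOCKED, P_EXIT };

  /* Simplified, hypothetical process control block holding the
   * bookkeeping listed on the slide; a real OS stores much more. */
  struct pcb {
      int             pid;               /* unique process id              */
      enum proc_state state;             /* current process state          */
      int             priority;          /* priority of the task           */
      uint32_t        regs[16];          /* saved register contents        */
      uint32_t        pc;                /* saved program counter value    */
      void           *mem;               /* pointer to the memory it uses  */
      struct pcb     *next;              /* link used by scheduling queues */
  };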

  17. The Process Control Block in Linux • The PCB is represented in Linux by a C structure called task_struct • All of the PCB information is stored in this structure • The kernel maintains a doubly linked list of such structures • It also maintains a pointer, current, to the process currently executing in the system • E.g., current->state = new_state
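
As a hedged sketch of how that doubly linked list of task_struct entries can be walked from a module (the required header varies across kernel versions, so treat this as illustrative rather than copy-paste ready):

  #include <linux/module.h>
  #include <linux/sched/signal.h>   /* for_each_process(); <linux/sched.h> on older kernels */

  static int __init list_tasks_init(void)
  {
      struct task_struct *task;

      rcu_read_lock();                      /* protect the task list while walking it */
      for_each_process(task)                /* iterates the doubly linked list of PCBs */
          pr_info("pid=%d comm=%s\n", task->pid, task->comm);
      rcu_read_unlock();
      return 0;
  }

  static void __exit list_tasks_exit(void) { }

  module_init(list_tasks_init);
  module_exit(list_tasks_exit);
  MODULE_LICENSE("GPL");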

  18. CPU Scheduling Criteria • CPU utilization: the CPU should be kept as busy as possible (40 to 90 percent) • Throughput: amount of work completed, i.e., the number of processes finished per unit time • Turnaround time: how long a particular process takes to execute (the interval between time of submission and time of completion) • Waiting time: total time the process spends in the ready queue • Response time: time from submission until the first response is produced

  19. Optimization criteria • It is desirable to • Maximize CPU utilization • Maximize throughput • Minimize turnaround time • Minimize start time • Minimize waiting time • Minimize response time • In most cases, we strive to optimize the average measure of each metric • In other cases, it is more important to optimize the minimum or maximum values rather than the average

  20. CPU Scheduling Algorithms • FCFS (First-Come, First-Served), also called co-operative • SJF (Shortest Job First) • SRTF (Shortest Remaining Time First) • Priority Scheduling • Round Robin Scheduling

  21. First-Come, First-Served (FCFS) Scheduling
  • Process / Burst Time: P1 = 24, P2 = 3, P3 = 3
  • Suppose that the processes arrive in the order P1, P2, P3. The Gantt chart for the schedule is: P1 (0–24), P2 (24–27), P3 (27–30)
  • Waiting time for P1 = 0; P2 = 24; P3 = 27
  • Average waiting time: (0 + 24 + 27)/3 = 17
  • Average turn-around time: (24 + 27 + 30)/3 = 27

  22. FCFS (continued)
  • Suppose that the processes arrive in the order P2, P3, P1. The Gantt chart for the schedule is: P2 (0–3), P3 (3–6), P1 (6–30)
  • Waiting time for P1 = 6; P2 = 0; P3 = 3
  • Average waiting time: (6 + 0 + 3)/3 = 3, much better than the previous case
  • Convoy effect: short processes get stuck waiting behind a long process
  • Average turn-around time: (3 + 6 + 30)/3 = 13
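
A minimal C sketch of the FCFS arithmetic used in the two examples above (burst times hard-coded from slide 21; the program is illustrative, not part of the reference text):

  #include <stdio.h>

  /* FCFS waiting/turn-around times for processes that all arrive at t = 0,
   * using the burst times from the slide (P1 = 24, P2 = 3, P3 = 3). */
  int main(void)
  {
      int burst[] = {24, 3, 3};
      int n = sizeof burst / sizeof burst[0];
      int clock = 0, wait_sum = 0, tat_sum = 0;

      for (int i = 0; i < n; i++) {
          int waiting    = clock;             /* time spent in the ready queue      */
          int turnaround = clock + burst[i];  /* completion time, since arrival = 0 */
          printf("P%d: waiting=%2d turn-around=%2d\n", i + 1, waiting, turnaround);
          wait_sum += waiting;
          tat_sum  += turnaround;
          clock    += burst[i];
      }
      printf("average waiting = %.2f, average turn-around = %.2f\n",
             (double)wait_sum / n, (double)tat_sum / n);
      return 0;
  }

Reordering the burst array to {3, 3, 24} reproduces the figures of slide 22 and makes the convoy effect visible.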

  23. SJF (Non-Preemptive, Simultaneous Arrival)
  • Process (Arrival Time, Burst Time): P1 (0.0, 6), P2 (0.0, 4), P3 (0.0, 1), P4 (0.0, 5)
  • SJF (non-preemptive, simultaneous arrival). Gantt chart: P3 (0–1), P2 (1–5), P4 (5–10), P1 (10–16)
  • Average waiting time = (0 + 1 + 5 + 10)/4 = 4
  • Average turn-around time = (1 + 5 + 10 + 16)/4 = 8

  24. SJF (Non-Preemptive, Varied Arrival)
  • Process (Arrival Time, Burst Time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4)
  • SJF (non-preemptive, varied arrival times). Gantt chart: P1 (0–7), P3 (7–8), P2 (8–12), P4 (12–16)
  • Average waiting time = ((0 – 0) + (8 – 2) + (7 – 4) + (12 – 5))/4 = (0 + 6 + 3 + 7)/4 = 4
  • Average turn-around time = ((7 – 0) + (12 – 2) + (8 – 4) + (16 – 5))/4 = (7 + 10 + 4 + 11)/4 = 8
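
A short C sketch of non-preemptive SJF with the arrival and burst times from this slide (a hypothetical helper program, not from the reference book):

  #include <stdio.h>

  /* At each scheduling point, pick the shortest job among the processes
   * that have arrived and are not yet finished (non-preemptive SJF). */
  int main(void)
  {
      int arrival[] = {0, 2, 4, 5};
      int burst[]   = {7, 4, 1, 4};
      int n = 4, done[4] = {0};
      int clock = 0, wait_sum = 0, tat_sum = 0;

      for (int finished = 0; finished < n; finished++) {
          int pick = -1;
          for (int i = 0; i < n; i++)
              if (!done[i] && arrival[i] <= clock &&
                  (pick < 0 || burst[i] < burst[pick]))
                  pick = i;
          /* With this data the CPU never idles, so a process is always found. */
          int waiting    = clock - arrival[pick];
          int turnaround = waiting + burst[pick];
          printf("P%d: waiting=%d turn-around=%d\n", pick + 1, waiting, turnaround);
          wait_sum += waiting;
          tat_sum  += turnaround;
          clock    += burst[pick];
          done[pick] = 1;
      }
      printf("average waiting = %.2f, average turn-around = %.2f\n",
             (double)wait_sum / n, (double)tat_sum / n);
      return 0;
  }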

  25. SJF (Preemptive, Varied Arrival) = SRTF
  • Process (Arrival Time, Burst Time): P1 (0.0, 7), P2 (2.0, 4), P3 (4.0, 1), P4 (5.0, 4)
  • SJF (preemptive, varied arrival times). Gantt chart: P1 (0–2), P2 (2–4), P3 (4–5), P2 (5–7), P4 (7–11), P1 (11–16)
  • Average waiting time = ([(0 – 0) + (11 – 2)] + [(2 – 2) + (5 – 4)] + (4 – 4) + (7 – 5))/4 = (9 + 1 + 0 + 2)/4 = 3
  • Average turn-around time = (16 + 7 + 5 + 11)/4 = 9.75

  26. Priority Scheduling • A priority number (integer) is associated with each process • The CPU is allocated to the process with the highest priority (smallest integer = highest priority) • Preemptive or non-preemptive • SJF is priority scheduling where the priority is the predicted next CPU burst time • Problem: starvation – low-priority processes may never execute • Solution: aging – as time progresses, increase the priority of the waiting process

  27. Round Robin (time quantum = 20)
  • Process / Burst Time: P1 = 53, P2 = 17, P3 = 68, P4 = 24
  • Gantt chart: P1 (0–20), P2 (20–37), P3 (37–57), P4 (57–77), P1 (77–97), P3 (97–117), P4 (117–121), P1 (121–134), P3 (134–154), P3 (154–162)
  • Typically higher average turnaround than SJF, but better response time
  • Average waiting time = ([(0 – 0) + (77 – 20) + (121 – 97)] + (20 – 0) + [(37 – 0) + (97 – 57) + (134 – 117)] + [(57 – 0) + (117 – 77)])/4 = ((0 + 57 + 24) + 20 + (37 + 40 + 17) + (57 + 40))/4 = (81 + 20 + 94 + 97)/4 = 292/4 = 73
  • Average turn-around time = (134 + 37 + 162 + 121)/4 = 113.5
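
A compact C sketch of the round-robin bookkeeping for this example (quantum and burst times hard-coded; illustrative only):

  #include <stdio.h>

  /* Round robin with a 20-unit quantum over bursts P1=53, P2=17, P3=68, P4=24,
   * all arriving at t = 0; reproduces the Gantt chart above. */
  int main(void)
  {
      int burst[]       = {53, 17, 68, 24};
      int remaining[]   = {53, 17, 68, 24};
      int completion[4] = {0};
      int n = 4, quantum = 20, clock = 0, left = n;

      while (left > 0) {
          for (int i = 0; i < n; i++) {
              if (remaining[i] == 0)
                  continue;
              int slice = remaining[i] < quantum ? remaining[i] : quantum;
              clock        += slice;            /* run this process for one slice   */
              remaining[i] -= slice;
              if (remaining[i] == 0) {          /* finished: record completion time */
                  completion[i] = clock;
                  left--;
              }
          }
      }
      /* turn-around = completion time (arrival = 0); waiting = turn-around - burst */
      for (int i = 0; i < n; i++)
          printf("P%d: waiting=%d turn-around=%d\n",
                 i + 1, completion[i] - burst[i], completion[i]);
      return 0;
  }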

  28. Operating System Concepts (Chapter 7) • Threads • Interrupt Handling • Inter-Process Communication • Shared Memory • Message Passing • Message Pipes • Mailboxes • Transfer Protocol • Blocking Sender, Blocking Receiver • Non-blocking sender, Blocking Receiver • Producer Consumer Paradigm • Remote Procedure Call

  29. Processing Environment • A program is an executable file stored on disk • When submitted for execution in memory, it becomes a 'process' • So, a program under execution is called a process, and it has an associated execution context stored in well-defined data structures

  30. Process Environment: Memory Image of a Process • The address space of a process is the set of memory locations which the process can read or write • A process's memory image has the following segments: • Text: program code • Data: globally declared variables • Heap: dynamically allocated memory • Stack: function call arguments, local variables, return addresses

  31. Process in Memory • [Figure: layout of a single process and of multiple processes in memory]

  32. Memory Layout of a Program
  int tax = 0;                      /* global variable: data segment */
  int add(int a, int b)             /* function code: text segment */
  {
      int total;                    /* local variable: stack */
      total = a + b + tax;
      return total;
  }
  int main()
  {
      int x, y, sum;                /* local variables: stack */
      x = 5;
      y = 6;
      sum = add(x, y);              /* arguments and return address are pushed on the stack */
  }

  33. Threads • What exactly is a thread? • Why does it come into the picture? • What do we achieve with threads on single-core and multi-core architectures? • Real-life examples of threads? • Role of threads in desktop, server, and embedded systems

  34. Multithreading

  35. Multithreading • Separate (specific to each thread): • Stack • Registers • Program counter • Common (specific to the process): • Address space (the address space is allocated to the process, not to each thread) • Global data • Open files

  36. Multithreading • Most modern applications use a multithreaded process model • Thread scheduling is required, just like process scheduling • The overhead involved in thread switching is less than in process switching • Only the registers (PC, SP, and general-purpose registers) need to be saved
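
A small POSIX-threads sketch of the points above: both threads share the process's globals, while each loop counter lives on its own thread stack (file and symbol names are made up; build with the -pthread flag).

  #include <pthread.h>
  #include <stdio.h>

  static long counter = 0;                           /* shared: data segment of the process */
  static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

  static void *worker(void *arg)
  {
      for (int i = 0; i < 100000; i++) {             /* i lives on this thread's own stack */
          pthread_mutex_lock(&lock);
          counter++;                                 /* protected update of shared data */
          pthread_mutex_unlock(&lock);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t t1, t2;
      pthread_create(&t1, NULL, worker, NULL);
      pthread_create(&t2, NULL, worker, NULL);
      pthread_join(t1, NULL);
      pthread_join(t2, NULL);
      printf("counter = %ld\n", counter);            /* 200000 with the mutex in place */
      return 0;
  }

Switching between t1 and t2 only requires saving PC, SP, and the general-purpose registers, since both threads already share the same address space.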

  37. Interrupts • Implemented either with hardware support (provided by the processor) or purely in software • Either way, an ISR (interrupt service routine) performs the task initiated by the interrupt • Example?

  38. Interrupts • Interrupts are associated with a delay called interrupt latency • Sequence of actions that follows an interrupt: • Saving the current context • Determining the identity of the interrupt • Switching to the new context • Starting the execution of the interrupt handler
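
In Linux, most of the sequence above (saving context, identifying the interrupt, dispatching) is done by the kernel itself; a driver only registers a handler. The following is a rough, hypothetical sketch of that registration: the IRQ number and device name are invented.

  #include <linux/interrupt.h>
  #include <linux/module.h>

  #define MY_IRQ 42                       /* hypothetical interrupt line */
  static int dev_token;                   /* identifies this device instance */

  /* Runs after the kernel has saved the interrupted context and identified the IRQ. */
  static irqreturn_t my_handler(int irq, void *dev_id)
  {
      /* acknowledge the device, read or clear its status registers, etc. */
      return IRQ_HANDLED;
  }

  static int __init my_init(void)
  {
      /* Register the handler; IRQF_SHARED lets several drivers share the line. */
      return request_irq(MY_IRQ, my_handler, IRQF_SHARED, "my_device", &dev_token);
  }

  static void __exit my_exit(void)
  {
      free_irq(MY_IRQ, &dev_token);
  }

  module_init(my_init);
  module_exit(my_exit);
  MODULE_LICENSE("GPL");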

  39. Interrupts and Process Switching • Process switching is accomplished by soft interrupts • Total task-switching latency = interrupt latency + dispatch latency • Dispatch latency: the time spent deciding which task to schedule next

  40. Inter-Process Communication • An independent process cannot affect or be affected by the execution of another process • A cooperating process can affect or be affected by the execution of another process • Advantages of process cooperation: • Information sharing • Computation speed-up • Modularity • Convenience

  41. IPC • Cooperating processes need interprocess communication (IPC) • Two models of IPC • Shared memory • Message passing

  42. IPC • Message passing: suited to small amounts of data and easier to implement • Shared memory: allows maximum speed and convenience of communication, but must be handled carefully with efficient synchronization policies

  43. Two Variations of Message Passing • Message pipes • A kernel data structure using a FIFO • A buffer written at one end and read from the other • Unidirectional communication • Anonymous and named pipes • Mailboxes • An indirect way of sending messages • Just like a mailbox at a post office • The mailbox is maintained by the kernel
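
A short POSIX sketch of an anonymous pipe (the first of the two variations above): the kernel buffers the data, the child writes into one end and the parent reads from the other. The message text is arbitrary.

  #include <stdio.h>
  #include <sys/wait.h>
  #include <unistd.h>

  int main(void)
  {
      int fd[2];                          /* fd[0] = read end, fd[1] = write end */
      char buf[32];

      if (pipe(fd) == -1)                 /* kernel creates the FIFO buffer */
          return 1;

      if (fork() == 0) {                  /* child: writes into the pipe */
          close(fd[0]);
          write(fd[1], "hello", 6);
          close(fd[1]);
          return 0;
      }
      close(fd[1]);                       /* parent: reads from the pipe */
      read(fd[0], buf, sizeof buf);
      printf("received: %s\n", buf);
      close(fd[0]);
      wait(NULL);
      return 0;
  }

A named pipe works the same way, but it is created with mkfifo() and opened by path, so unrelated processes can use it.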

  44. Producer-Consumer Paradigm • A paradigm for cooperating processes: a producer process produces information that is consumed by a consumer process • Unbounded buffer: places no practical limit on the size of the buffer • Bounded buffer: assumes that there is a fixed buffer size

  45. RPC • Communication between processes on different computers (RPC assumes underlying network protocols such as TCP or UDP) • Primary components: • Client • Server • Endpoint

  46. Deadlocks • What is deadlock? • What are the conditions necessary to consider a deadlock? • What are the methods available to handle it?

  47. Conditions for Deadlock • Mutual exclusion: a resource cannot be shared (it is occupied in a mutually exclusive way) • Hold and wait: a process is holding a resource but requires more resources to continue its execution • No preemption: a resource cannot be forcibly taken away by the OS • Circular wait: a circular chain of waiting tasks, each waiting for a resource held by the next
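
A minimal POSIX-threads sketch of the four conditions occurring together: two threads take the same two mutexes in opposite order (the lock names and the sleep are only there to make the bad interleaving likely).

  #include <pthread.h>
  #include <stdio.h>
  #include <unistd.h>

  static pthread_mutex_t a = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t b = PTHREAD_MUTEX_INITIALIZER;

  static void *t1(void *arg)
  {
      pthread_mutex_lock(&a);             /* mutual exclusion: a is now held        */
      sleep(1);                           /* hold and wait: keep a, then ask for b  */
      pthread_mutex_lock(&b);             /* circular wait if t2 already holds b    */
      pthread_mutex_unlock(&b);
      pthread_mutex_unlock(&a);
      return NULL;
  }

  static void *t2(void *arg)
  {
      pthread_mutex_lock(&b);
      sleep(1);
      pthread_mutex_lock(&a);             /* blocks forever once t1 holds a */
      pthread_mutex_unlock(&a);
      pthread_mutex_unlock(&b);
      return NULL;
  }

  int main(void)
  {
      pthread_t x, y;
      pthread_create(&x, NULL, t1, NULL);
      pthread_create(&y, NULL, t2, NULL);
      pthread_join(x, NULL);              /* likely never returns: no preemption of locks */
      pthread_join(y, NULL);
      puts("no deadlock on this run");
      return 0;
  }

Acquiring the locks in the same global order in both threads breaks the circular-wait condition, which is one of the prevention strategies on the next slide.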

  48. What can be done about deadlock • Ignore deadlock • Avoidance • Prevention • Detect and recover

  49. Operating System Concepts (Chapter 7) • Task Synchronization • Concurrency • Race Condition • Critical Section • Reader Writer Problem • Deadlock • Necessary conditions • Prevention • Avoidance

  50. Operating System Concepts (Chapter 7) • Semaphores • Binary Semaphore • Counting Semaphore
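
To tie the bounded-buffer paradigm of slide 44 to the two semaphore types above, here is a hedged POSIX sketch: a counting semaphore tracks free slots, another tracks filled slots, and a mutex plays the role of a binary semaphore around the buffer indices (the buffer size and item count are arbitrary).

  #include <pthread.h>
  #include <semaphore.h>
  #include <stdio.h>

  #define N 4                                     /* bounded buffer size */

  static int buffer[N];
  static int in = 0, out = 0;
  static sem_t empty_slots, full_slots;           /* counting semaphores */
  static pthread_mutex_t mtx = PTHREAD_MUTEX_INITIALIZER;

  static void *producer(void *arg)
  {
      for (int item = 1; item <= 10; item++) {
          sem_wait(&empty_slots);                 /* wait for a free slot  */
          pthread_mutex_lock(&mtx);
          buffer[in] = item;
          in = (in + 1) % N;
          pthread_mutex_unlock(&mtx);
          sem_post(&full_slots);                  /* signal: one more item */
      }
      return NULL;
  }

  static void *consumer(void *arg)
  {
      for (int n = 0; n < 10; n++) {
          sem_wait(&full_slots);                  /* wait for an item      */
          pthread_mutex_lock(&mtx);
          int item = buffer[out];
          out = (out + 1) % N;
          pthread_mutex_unlock(&mtx);
          sem_post(&empty_slots);                 /* signal: one free slot */
          printf("consumed %d\n", item);
      }
      return NULL;
  }

  int main(void)
  {
      pthread_t p, c;
      sem_init(&empty_slots, 0, N);               /* N free slots initially */
      sem_init(&full_slots, 0, 0);                /* no items yet           */
      pthread_create(&p, NULL, producer, NULL);
      pthread_create(&c, NULL, consumer, NULL);
      pthread_join(p, NULL);
      pthread_join(c, NULL);
      return 0;
  }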
