
SYLLABUS


Presentation Transcript


  2. ADVANCED OPERATING SYSTEMS MCA 404

  3. SYLLABUS

  4. SYLLABUS Section A • Multi-Processor and Distributed Operating System: • Introduction, • Architecture, • Organization, • Resource sharing, • Load Balancing, • Availability and Fault Tolerance, • Design and Development Challenges, • Inter-process Communication, • Distributed Applications: • Logical Clock, • Mutual Exclusion, • Distributed File System.

  5. SYLLABUS Section B • Real Time and Embedded Operating Systems: • Introduction, • Hardware Elements, • Structure • Interrupt Driven, • Nanokernel, • Microkernel and • Monolithic kernel based models. • Scheduling – • Periodic, • Aperiodic and • Sporadic Tasks, • Introduction to Energy Aware CPU Scheduling.

  6. SYLLABUS Section C • Cluster and Grid Computing: • Introduction to Cluster Computing and MOSIX OS, • Introduction to the Grid, • Grid Architecture, • Computing Platforms: • Operating Systems and Network Interfaces, • Grid Monitoring and Scheduling, • Performance Analysis, • Case Studies.

  7. SYLLABUS Section D • Cloud Computing: • Introduction to Cloud, • Cloud Building Blocks, • Cloud as IaaS, PaaS and SaaS, • Hardware and software virtualization, • Virtualization of OS • Hypervisor KVM, • SAN and • NAS back-end concepts. • Mobile Computing: • Introduction, • Design Principles, • Structure, Platform and Features of Mobile Operating Systems (Android, IOS, Windows Mobile OS).

  8. SYLLABUS References: • Sibsankar Haldar, Alex A. Aravind, “Operating Systems”, Pearson Education Inc. • Tanenbaum and Van Steen, “Distributed Systems: Principles and Paradigms”, Pearson, 2007. • M. L. Liu, “Distributed Computing: Principles and Applications”, Addison-Wesley (Pearson). • Maozhen Li, Mark Baker, “The Grid – Core Technologies”, John Wiley & Sons, 2005.

  9. Happy New Year 2014

  10. Happy New Year 2014: How to be Happy? There are nine philosophies (Darshan) to be happy: • Brahm Darshan: Philosophy of understanding the God or Brahm. • Dev Darshan: Philosophy of understanding the lords (Devtas). • Gayatri Darshan: Understand the meaning of the Gayatri Mantra. • Ganga Darshan: Understand the meaning of the Ganga. • Vichar Darshan: Understand the power of a thought. • Karmyog: Philosophy of Action/Effort. • Sam Darshan: Understand the philosophy of balance in life. • Dukh Darshan: Understand the value of stress and strain. • Sukh Darshan: Understand the key behind Happiness.

  11. SECTION A • Multi-Processor and Distributed Operating System: • Introduction, Architecture, Organisation

  12. MULTIPROCESSOR SYSTEMS: INTRODUCTION, ARCHITECTURE AND ORGANISATION

  13. MULTIPROCESSOR SYSTEMS: INTRODUCTION, ARCHITECTURE AND ORGANISATION • A multiprocessor system is one that has more than one processor installed in the same computer.

  14. MULTI-PROCESSOR SYSTEM: TWO PROCESSORS • There are two CPU chips on the same motherboard. • Each CPU may be multicore, e.g. dual-core, quad-core, etc. • Each CPU has its own memory slots.

  15. MULTI-PROCESSOR SYSTEM: FOUR PROCESSORS • There are four CPU chips on the same motherboard. • Each CPU may be multicore (dual-core, quad-core, etc.). • Each CPU has its own memory slots.

  16. MULTI-PROCESSOR SYSTEM • A multiprocessor is a tightly coupled computer system having two or more processing units (multiple processors), each sharing main memory and peripherals, in order to process programs simultaneously. • Sometimes the term Multiprocessor is confused with the term Multiprocessing. • While Multiprocessing is a type of processing in which two or more processors work together to execute more than one program simultaneously, the term Multiprocessor refers to the hardware architecture that allows multiprocessing.
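
From software, the simplest visible consequence of a multiprocessor is the number of logical CPUs the operating system reports as available for multiprocessing. A minimal sketch using Python's standard library (the helper name is ours, not from the slides):

```python
import os

def available_processors() -> int:
    """Number of logical CPUs the OS exposes for scheduling.

    On a multiprocessor (or multicore) machine this is greater than 1.
    os.cpu_count() may return None on unusual platforms, so fall back to 1.
    """
    return os.cpu_count() or 1
```

Note that this counts logical CPUs, so a single multicore chip and several single-core chips can report the same number.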

  17. MULTI-PROCESSOR SYSTEM: INTRODUCTION, ARCHITECTURE AND ORGANISATION • A CPU, or Central Processing Unit, is what is typically referred to as a processor. A processor contains many discrete parts within it, such as one or more memory caches for instructions and data, instruction decoders, and various types of execution units for performing arithmetic or logical operations. • A multiprocessor system contains more than one such CPU, allowing them to work in parallel. This is called SMP, or Symmetric Multiprocessing. • A multicore CPU has multiple execution cores on one CPU. Now, this can mean different things depending on the exact architecture, but it basically means that a certain subset of the CPU's components is duplicated, so that multiple "cores" can work in parallel on separate operations. This is called CMP, or Chip-level Multiprocessing.

  18. MULTI-CORE PROCESSOR • Multi-core processing refers to the use of multiple microprocessors, called "cores," that are built onto a single silicon die. • A multi-core processor acts as a single unit. • As such, it is more efficient and establishes a standardized platform for which mass-produced software can easily be developed.

  19. MULTI-CORE PROCESSOR • The design of a multi-core processor allows for each core to communicate with the others, so that processing tasks may be divided and delegated appropriately. • However, the actual delegation is dictated by software. • When a task is completed, the processed information from all cores is returned to the motherboard via a single shared conduit. • This process can often significantly improve performance over a single-core processor of comparable speed. • The degree of performance improvement will depend upon the efficiency of the software code being run.

  20. MULTI-CORE PROCESSOR • Multi-core is usually the term used to describe two or more CPUs working together on the same chip.

  21. MULTI-CORE PROCESSOR • Multi-core is usually the term used to describe two or more CPUs working together on the same chip. • A multi-core processor is a single computing component with two or more independent actual central processing units (called "cores"), which are the units that read and execute program instructions. • The instructions are ordinary CPU instructions such as add, move data, and branch, but the multiple cores can run multiple instructions at the same time, increasing overall speed for programs amenable to parallel computing. • Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP), or onto multiple dies in a single chip package.
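
The point that multiple cores can run multiple instruction streams at the same time can be sketched by farming independent computations out to one worker process per core. This is only an illustration: the "fork" start method is a POSIX assumption, and the worker count is arbitrary, not tied to real core counts.

```python
import multiprocessing as mp

def square(x: int) -> int:
    # An ordinary instruction stream that any core can execute independently.
    return x * x

def parallel_squares(xs, workers: int = 2):
    # One worker process per (assumed) core; results come back in input order.
    ctx = mp.get_context("fork")  # POSIX-only start method (an assumption)
    with ctx.Pool(processes=workers) as pool:
        return pool.map(square, xs)
```

Because the inputs are independent, this program is "amenable to parallel computing" in the slide's sense: more cores allow more of the `square` calls to run at the same real time.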

  22. MULTI-CORE CPU • Substrate: an underlying substance or layer.

  23. MULTI-CORE PROCESSOR • For example, a multicore processor may have a separate L1 and L2 cache and execution unit for each core, while it has a shared L3 cache for the entire processor. • That means that while the processor has one big pool of slower cache, it has separate fast memory and arithmetic/logic units for each of several cores. • This would allow each core to perform operations at the same time as the others.

  24. MULTI-CORE PROCESSOR • There is an even further division, called SMT, Simultaneous Multithreading. • This is where an even smaller subset of a processor's or core's components is duplicated. • For example, an SMT core might have duplicate thread scheduling resources, so that the core looks like two separate "processors" to the operating system, even though it only has one set of execution units. • One common implementation of this is Intel's Hyperthreading.

  25. MULTI-CORE PROCESSOR Cache Hierarchy • Modern system architectures have 2 or 3 levels in the cache hierarchy before going to main memory. • Typically the outermost or Last Level Cache (LLC) will be shared by all cores on the same physical chip, while the innermost levels are per core. • We are most interested in the data caches (D-cache), although there will also be caches for instructions (I-cache).

  26. MULTI-CORE Cache Hierarchy… • As an example, on the Intel Westmere EP processors of 2010 we see: • a 64KB L1 D-cache per core • a 256KB L2 D-cache per core • a single 12MB L3 D-cache per socket (some products went as high as 30MB L3)
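
The per-socket numbers above can be added up in a short worked example. The cache sizes are from the slide; the core count is a parameter, since the slide does not state one (six-core Westmere EP parts existed, but that is our assumption):

```python
KB, MB = 1024, 1024 * 1024

# Sizes from the Westmere EP example above (bytes).
L1_PER_CORE = 64 * KB    # private, per core
L2_PER_CORE = 256 * KB   # private, per core
L3_SHARED = 12 * MB      # one LLC shared by all cores on the socket

def total_cache_per_socket(cores: int) -> int:
    # Private L1 + L2 for each core, plus the single shared L3.
    return cores * (L1_PER_CORE + L2_PER_CORE) + L3_SHARED
```

For six cores this gives 6 × 320 KB of private cache plus 12 MB shared, i.e. about 13.9 MB of D-cache per socket.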

  27. MULTI-CORE PROCESSOR Cache Hierarchy… • Cache Hits • When data is successfully found in the cache it is called a cache hit. • Data found in the L1 cache takes a few cycles to access. • The L2 cache may take 10 cycles. • The L3 cache takes 50+ cycles. • Main memory can take hundreds of cycles.
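
These per-level latencies combine into an average memory access time (AMAT): each level costs its hit time, plus the miss-rate-weighted average cost of the next level. A sketch using cycle counts in the ranges above; the specific hit rates and latencies are illustrative assumptions, not from the slides:

```python
def amat(levels, memory_cycles: float) -> float:
    """Average memory access time (AMAT), in cycles.

    levels: (hit_time, hit_rate) pairs from the innermost cache (L1) outward.
    A miss at one level pays that level's hit time plus the average cost
    of the next level; main memory is assumed to always succeed.
    """
    cost = memory_cycles
    for hit_time, hit_rate in reversed(levels):
        cost = hit_time + (1.0 - hit_rate) * cost
    return cost

# Assumed numbers: L1 = (4 cycles, 90% hits), L2 = (10, 80%),
# L3 = (50, 95%), main memory = 200 cycles.
# AMAT = 4 + 0.1 * (10 + 0.2 * (50 + 0.05 * 200)) = 6.2 cycles.
```

The calculation shows why caches pay off: even with main memory hundreds of cycles away, a high L1 hit rate keeps the average access close to the L1 latency.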

  28. MULTI-CORE PROCESSOR Cache Hierarchy… • Cache Lines. • The CPU manages the allocation of space in the cache. • When an address is read that is not already in the cache it loads a larger chunk of memory than was requested. • The expectation is that nearby addresses will soon be used. • These chunks of memory are called cache lines. • Cache lines are commonly 32, 64 or 128 bytes in size. • A cache can only hold a limited number of lines, determined by the cache size. • A 64KB cache with 64 byte lines has 1024 cache lines.
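
The line arithmetic on this slide is easy to check directly: the line count is cache size divided by line size, and the chunk loaded on a miss is the aligned block containing the requested address. A small sketch (helper names are ours):

```python
def num_cache_lines(cache_bytes: int, line_bytes: int) -> int:
    # A cache holds (cache size / line size) lines.
    return cache_bytes // line_bytes

def line_base_address(addr: int, line_bytes: int = 64) -> int:
    # On a miss the whole aligned chunk containing `addr` is loaded;
    # clearing the low-order offset gives the line's base address.
    return addr - (addr % line_bytes)
```

With 64-byte lines, reading address 130 actually fetches the 64-byte line starting at address 128, which is why accesses to nearby addresses then hit.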

  29. MULTI-CORE PROCESSOR Cache Hierarchy… Replacement Policy • When all the cache lines are being used a line must be evicted to make room for new data. • The process used to select a cache line is called the replacement policy. • The most common replacement policy is least recently used (LRU). • This policy assumes that the more recently used a cache line is, the more likely it is to be needed again soon. • Another replacement policy is random replacement: a random cache line is evicted.
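
The LRU policy described above can be modelled in a few lines: an ordered map where a hit moves the line to the "most recently used" end, and eviction removes the other end. This is a software sketch of the policy, not how cache hardware is built:

```python
from collections import OrderedDict

class LRUCache:
    """Tiny model of a cache with least-recently-used replacement.

    `capacity` plays the role of the number of cache lines. access()
    returns True on a hit and False on a miss (which loads the line,
    evicting the least recently used one if the cache is full).
    """
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.lines = OrderedDict()

    def access(self, tag) -> bool:
        if tag in self.lines:
            self.lines.move_to_end(tag)      # hit: now most recently used
            return True
        if len(self.lines) >= self.capacity:
            self.lines.popitem(last=False)   # evict the LRU line
        self.lines[tag] = True               # miss: load the line
        return False
```

Swapping `popitem(last=False)` for the removal of a randomly chosen key would give the random replacement policy the slide mentions.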

  30. MULTI-CORE PROCESSOR Cache Hierarchy… Cache Misses • When a program accesses an uncached memory address it is called a cache miss. • Processing stalls while it attempts to fetch the data from the next level cache. • In the worst case, the miss continues all the way to main memory.

  31. MULTI-PROCESSOR SYSTEM

  32. MULTIPROCESSOR SYSTEMS: INTRODUCTION, ARCHITECTURE AND ORGANISATION • (Figure. MMU: Memory Management Unit)

  33. MULTIPROCESSOR SYSTEMS: (SH & AAA: 1.11.1) INTRODUCTION, ARCHITECTURE AND ORGANISATION • A multiprocessor system is one that has more than one processor installed in the same computer. • They execute independent streams of instructions simultaneously. • They share • system buses, • the system clock, • and the main memory, • and may share peripheral devices too. • Such systems are also referred to as tightly coupled multiprocessor systems, as opposed to networks of computers (called distributed systems). • A uniprocessor system can execute only one process at any point of real time, though there might be many processes ready to be executed.

  34. MULTIPROCESSOR SYSTEMS: INTRODUCTION, ARCHITECTURE AND ORGANISATION • By contrast, a multiprocessor system can execute many different processes simultaneously, at the same real time. • However, the number of processors in the system restricts the degree of simultaneous process execution. • There are two primary models of multiprocessor operating systems: symmetric and asymmetric. • In a symmetric multiprocessor system, each processor executes the same copy of the resident operating system, takes its own decisions, and cooperates with other processors for smooth functioning of the entire system. • In an asymmetric multiprocessor system, each processor is assigned a specific task, and there is a designated master processor that controls the activities of the other, subordinate processors. The master processor assigns work to the subordinate processors.
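
The asymmetric (master/subordinate) arrangement can be sketched as a master handing work items to subordinate workers over a queue. Threads stand in for processors here, and the work itself (doubling a number) is arbitrary; this is an illustration of the delegation pattern, not an OS implementation:

```python
import queue
import threading

def run_asymmetric(tasks, num_workers: int = 3):
    """Master assigns work; subordinates only execute what they are given."""
    work = queue.Queue()
    results = queue.Queue()

    def subordinate():
        while True:
            item = work.get()
            if item is None:        # master's signal to shut down
                break
            results.put(item * 2)   # the assigned unit of work (arbitrary)

    workers = [threading.Thread(target=subordinate) for _ in range(num_workers)]
    for w in workers:
        w.start()
    for t in tasks:                 # the master hands out all the work...
        work.put(t)
    for _ in workers:               # ...then one stop signal per subordinate
        work.put(None)
    for w in workers:
        w.join()
    return sorted(results.queue)    # completion order is nondeterministic
```

The subordinates never decide what to do on their own, which is exactly what distinguishes this model from the symmetric one.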

  35. MULTIPROCESSOR SYSTEMS: (SH & AAA: 1.11.1) INTRODUCTION, ARCHITECTURE AND ORGANISATION • In multiprocessor systems, many processors can execute operating system programs simultaneously. • Consequently, kernel path synchronization is a major challenge in designing multiprocessor operating systems. • We need a highly concurrent kernel to achieve real gains in system performance. • Synchronization has a much stronger impact on performance in multiprocessor systems than in uniprocessor systems. • Many known uniprocessor synchronization techniques are ineffective in multiprocessor systems. • Multiprocessor systems need very sophisticated, specialized synchronization schemes. • Another challenge in symmetric multiprocessor systems is to balance the workload among processors rationally.
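
Why synchronization matters can be shown with the classic shared-counter example: several threads update one variable, and a lock serializes the critical section so no update is lost. This is a user-level analogue of the kernel path locking the slide describes, not kernel code:

```python
import threading

counter = 0
lock = threading.Lock()

def worker(increments: int) -> None:
    global counter
    for _ in range(increments):
        with lock:          # critical section: one thread at a time
            counter += 1

def run(threads: int = 4, increments: int = 50_000) -> int:
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(increments,))
          for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter      # with the lock, always threads * increments
```

The performance point follows directly: the coarser the lock, the more the threads (or processors) serialize behind it, which is why highly concurrent kernels favour fine-grained locking.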

  36. MULTIPROCESSOR SYSTEMS: (SH & AAA: 1.11.1) INTRODUCTION, ARCHITECTURE AND ORGANISATION • Multiprocessor operating systems are expected to be fault tolerant; that is, failures of a few processors should not halt the entire system, a concept called graceful degradation of the system. • In multiprocessor systems, many processes may execute the kernel simultaneously. • In uniprocessor systems, concurrency is only achieved in the form of execution interleavings; only one process can make progress in the kernel mode, while others are blocked in the kernel waiting for processor allocation or some events to occur.

  37. MULTITHREAD SYSTEMS (SH & AAA: 1.11.6) • A thread is an independent strand that executes a program concurrently with other threads within the context of the same process. • A thread is a single sequential flow of control within a program execution. • Each thread has a beginning, a sequence of instruction executions, and an end. • At any given point of time, there is one single point of execution in each thread. • A thread is not a process by itself. • It cannot run on its own; it always runs within a process.

  38. MULTITHREAD SYSTEMS (SH & AAA: 1.11.6)… • Thus, a multithreaded process may have multiple execution flows, different ones belonging to different threads. • These threads share the same private address space of the process, and they share all the resources acquired by the process. • They run in the same process execution context, and therefore, one thread may influence other threads in the process. • Different systems implement the thread concept differently. • Some systems have user-level library routines to manage threads in a process. • An application process can be multithreaded, but the operating system sees only the process and not the contained threads.

  39. MULTITHREAD SYSTEMS (SH & AAA: 1.11.6)… • When any thread makes a system call and is blocked, the entire process is blocked too, and no other threads in the process can make any progress until the former thread returns from the system call. • No change in the operating system is required for thread handling. • We often say the operating system is single threaded but applications are multithreaded. • In some other systems, every thread has a kind of process entity called lightweight process (LWP) in the operating system. • The LWPs in a process are truly independent strands. • If one LWP is blocked in the operating system, other sibling LWPs in the process can make progress in their executions. • These systems are truly multithreaded as the threads are visible to the operating system. These systems need to provide support for LWP creation, maintenance, scheduling, and synchronization.

  40. PROCESSES AND THREADS Process Synchronisation • Process synchronization is required when one process must wait for another to complete some operation before proceeding. • For example, • one process (called a writer) may be writing data to a certain main memory area, • while another process (a reader) may be reading data from that area and sending it to the printer. • The reader and writer must be synchronized so that the writer does not overwrite existing data with new data until the reader has processed it. • Similarly, the reader should not start to read until data has actually been written to the area.
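
The writer/reader constraint on this slide (the writer must not overwrite unread data, the reader must not read before data is written) is a one-slot producer/consumer problem. A sketch with a condition variable, using threads rather than separate processes for simplicity:

```python
import threading

class SingleSlot:
    """One-item buffer: the writer must not overwrite unread data,
    and the reader must not read before data has been written."""
    def __init__(self):
        self.cond = threading.Condition()
        self.data = None
        self.full = False

    def write(self, item) -> None:
        with self.cond:
            while self.full:          # wait until the reader has consumed it
                self.cond.wait()
            self.data, self.full = item, True
            self.cond.notify_all()

    def read(self):
        with self.cond:
            while not self.full:      # wait until data is actually written
                self.cond.wait()
            item, self.full = self.data, False
            self.cond.notify_all()
            return item

def transfer(items):
    """Writer (main thread) and reader (worker thread) share one slot."""
    slot, out = SingleSlot(), []

    def reader():
        for _ in items:
            out.append(slot.read())

    t = threading.Thread(target=reader)
    t.start()
    for item in items:
        slot.write(item)
    t.join()
    return out
```

Every item arrives exactly once and in order, because neither side can get ahead of the other by more than one slot.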

  41. PROCESSES AND THREADS Process Synchronisation… • Various synchronization techniques have been developed. • In one method, the operating system provides special commands that allow one process to signal to the second when it begins and completes its operations, so that the second knows when it may start. • In another approach, shared data, along with the code to read or write them, are encapsulated in a protected program module. The operating system then enforces rules of mutual exclusion, which allow only one reader or writer at a time to access the module. • Process synchronization may also be supported by an interprocess communication facility, a feature of the operating system that allows processes to send messages to one another.
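
The first technique above (one process signals the second when it completes, so the second knows when it may start) can be sketched with an event flag. Threads stand in for processes, and the task bodies are placeholders:

```python
import threading

done = threading.Event()
log = []

def first_task() -> None:
    log.append("first: working")
    done.set()      # signal: "I have completed my operation"

def second_task() -> None:
    done.wait()     # block until the first task signals completion
    log.append("second: starting after first completed")

def run_signalled():
    t2 = threading.Thread(target=second_task)
    t1 = threading.Thread(target=first_task)
    t2.start()      # the second task starts first, but must wait
    t1.start()
    t1.join()
    t2.join()
    return log
```

Even though the waiting thread is started first, the signal guarantees it cannot proceed until the other has finished, which is exactly the ordering the slide requires.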

  42. PROCESSES AND THREADS Process Synchronisation… • Designing software as a group of cooperating processes has been made simpler by the concept of “threads.” • A single process may contain several executable programs (threads) that work together as a coherent whole. • Example: • One thread might, for example, handle error signals, • another might send a message about the error to the user, • while a third thread is executing the actual task of the process. • Modern operating systems provide management services (e.g., scheduling, synchronization) for such multithreaded processes.

  43. PROCESSES AND THREADS Threads • The majority of processes seen on operating systems today are single threaded, meaning there is a single path of execution within the process. • Should a process have to perform many subtasks during its operation, then a single-threaded process would sequence these tasks in a serial manner, with each subtask being required to wait for the completion of the previous subtask before commencement. • Such an arrangement can lead to great inefficiency in the use of the processor and in the apparent responsiveness of the computer. • An example can illustrate the advantages of having multiple threads of execution as shown in the figure.
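
The serial-versus-threaded contrast can be demonstrated with subtasks that block (here simulated with a short sleep, standing in for I/O): run serially they take the sum of their times, while a thread pool overlaps the waiting. The timings and pool size are illustrative assumptions:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def subtask(i: int) -> int:
    time.sleep(0.2)     # stands in for a blocking operation (e.g. I/O)
    return i

def run_serial(n: int = 4):
    # Each subtask waits for the previous one to finish.
    start = time.perf_counter()
    results = [subtask(i) for i in range(n)]
    return results, time.perf_counter() - start

def run_threaded(n: int = 4):
    # All subtasks block concurrently, so the waits overlap.
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=n) as ex:
        results = list(ex.map(subtask, range(n)))
    return results, time.perf_counter() - start
```

The results are identical either way; only the elapsed time differs, which is the inefficiency (and responsiveness cost) the slide attributes to single-threaded designs.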
