

  1. Lecture 1: History of Operating Systems; Introduction to Concurrent and Distributed Systems

  2. What Is an Operating System? An Operating System is the software that makes the computer's hardware usable. The OS manages the hardware and software resources of the computer. An OS is initiated at computer startup and continues to run while the computer is operational. Its purpose is to provide access to and control of application programs, file transfers, and the other tasks required by the computer user. Layered view (top to bottom): End-User Applications, Utilities, Operating System, Computer Hardware.

  3. Why do we study Operating Systems?
  • Because they are more interesting than the broken ones*
  • To be able to manage our computer resources more effectively
  • To take advantage of functionality provided by the OS in our programming projects
  Why do we study Distributed Systems?
  • To build more secure software applications
  • To better prepare us for the increasing complexity of networked and distributed software design and implementation
  * ba-dum-bump

  4. Functions of an Operating System
  • Separates applications from the hardware they access (a software layer)
  • Manages software and hardware to produce desired results
  • Operating systems are primarily resource managers; the resources they manage include:
    • Hardware: processors, memory, input/output devices, communication devices
    • Software applications

  5. Mainframe Operating Systems History
  1940s - No O.S. (The 0th Generation): all instructions hand-coded in machine language; one program resident in the computer at a time; the programmer worked at the console.
  1950s - Batch O.S. (The 1st Generation): batch processing - single jobs running to completion in sequence; the O.S. regains control after each job is completed; programs included Job Control Language (JCL) to support O.S. functions.
  1960s - Multiprogramming O.S. (The 2nd Generation): multiprogramming - multiple jobs running concurrently on a mainframe; device independence - peripherals interchangeable; timesharing - interactive programming; real-time systems - immediate response and hardware in the loop (HWIL).

  6. 1970s - Multimode O.S. (The 3rd Generation): multimode systems - combining batch, timesharing, real-time and multiprocessing; software layer - layers of abstraction separating the programmer and user from the hardware; virtual machine - the emulation of one machine on another; software became unbundled from hardware, creating a new industry.
  1980s - Personal Computer O.S. (The 4th Generation): personal computers - minicomputers, workstations, desktops, laptops, handhelds; local area networks - regaining processing power by connecting smaller systems.
  1990s - Distributed O.S. (The 5th Generation): global interconnectivity - Internet, World Wide Web; thin client - functionality left on the server system to manage large databases.

  7. Personal Computer Development Timeline (1974-1980)
  • Ed Roberts invents the ALTAIR 8800; it appears in a Popular Electronics article.
  • Paul Allen and Bill Gates create a BASIC interpreter for the ALTAIR.
  • Allen and Gates take a paper-tape copy of their BASIC interpreter to Roberts.
  • Allen and Gates move to Albuquerque, NM and open an office across the street from Roberts' company. This is the beginning of Microsoft.
  • Steven Jobs and Steve Wozniak debut the Apple at the West Coast Computer Faire and sell 50 units.
  • Jobs and Wozniak sell the first Apple II.
  • Gary Kildall writes CP/M (Intergalactic Digital Research) and sells 600,000 copies.
  • Steven Jobs tours the Xerox Palo Alto Research Center and sees Ethernet, laser printers, and a graphical user interface (GUI).
  • IBM contacts Bill Gates looking for a BASIC interpreter and an OS for their upcoming PC.

  8. Personal Computer Development Timeline (1980-1984)
  • Gates did not have an OS, so he recommended that IBM talk to Kildall. Kildall was not available for the meeting and left his wife to talk with them. She would not sign IBM's non-disclosure agreement, so they gave up on getting CP/M.
  • Apple sells 35,000 Apple IIs.
  • Microsoft moves its office to Bellevue, Washington.
  • Gates contacts Tim Patterson, the creator of QDOS (an OS similar to CP/M). Patterson would not make a deal since he had an exclusive-rights agreement with a company called Seattle Computer Products.
  • Microsoft makes a deal with IBM to provide BASIC and QDOS. Gates purchases the rights to QDOS from Seattle Computer Products for $50K and renames it PC-DOS 1.0. Tim Patterson goes to work for Microsoft.
  • Apple develops the Lisa (forerunner to the Macintosh).
  • Microsoft recommends a GUI for DOS that would have the look-and-feel of the Mac OS, but IBM rejects it.
  • IBM asks Microsoft to help develop OS/2 to compete with clone operating systems.
  • Apple introduces the Macintosh during the Super Bowl.

  9. Personal Computer Development Timeline (1984-1999)
  • Microsoft breaks ties with IBM and begins work on Windows 1.0.
  • Approximately 1000 servers are connected to the "Internet".
  • Microsoft releases Windows 1.0.
  • OS/2 is introduced and is largely ignored by the public.
  • Microsoft announces Windows 2.0.
  • Apple takes Microsoft to court and eventually loses.
  • Microsoft introduces Windows 3.0 and sells over 30 million copies.
  • Microsoft ships Windows 3.1.
  • Apple's PowerBook is the first computer to break the $1 billion threshold.
  • Microsoft releases Windows 3.11.
  • Microsoft introduces Windows 95 and spends $300 million promoting it.
  • Microsoft introduces Windows NT 5.0.

  10. Monolithic Kernel (Linux) vs Microkernel (MINIX): "Linux is obsolete" - a must-read debate between Andrew S. Tanenbaum and Linus Torvalds.

  11. Hardware Innovations that have Improved OS Performance
  • Storage interleaving - making multiple banks of secondary storage independently accessible
  • Relocation register - holds the address of a program in memory so it may be moved during use
  • Memory buffer - a block of memory used to hold data during I/O operations
  • I/O and DMA channels - special hardware to handle I/O and memory access instead of the CPU
  • Cycle stealing - giving DMA channels priority access to memory over the CPU
  • Interrupts vs polling - methods for peripherals to gain the attention of the CPU
  • Storage protection - limiting the memory range of programs in a multiprogramming system
  • Clocks and timers - interval and real-time clocks used to support job scheduling/sharing
  • Base-plus-displacement addressing - makes processes relocatable and permits shorter addresses
  • Virtual storage - allows the creation of arbitrarily large, seamless programs
  • Multiprocessing - multiple processors sharing primary memory controlled by one OS
  • Pipelining & superscalar CPUs - techniques to parallelize inherently sequential processing
  • Memory hierarchy - using different types of memory (cache, RAM, ...) to improve performance

  12. Pipeline & Superscalar CPUs
  Pipelining - A CPU can have separate fetch, decode, and execute units, so that while it is executing instruction n, it can also be decoding instruction n+1 and fetching instruction n+2.
  Superscalar - In a superscalar CPU multiple execution units are present. Two or more instructions are fetched at once, decoded, and dumped into a holding buffer until they can be executed.
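
  A toy simulation can make the pipeline overlap concrete. The C sketch below is illustrative only (the instruction names and the three-stage split are assumptions, not a model of any real CPU); it prints which instruction occupies each stage on every clock cycle.

      /* Toy 3-stage pipeline: while instruction n executes, instruction n+1
         is being decoded and instruction n+2 fetched. */
      #include <stdio.h>

      int main(void) {
          const char *instr[] = {"LOAD", "ADD", "STORE", "JMP"};
          const char *stage[] = {"fetch", "decode", "execute"};
          const int n = 4, stages = 3;

          /* On each cycle, stage s works on instruction (cycle - s), if any. */
          for (int cycle = 0; cycle < n + stages - 1; cycle++) {
              printf("cycle %d:", cycle + 1);
              for (int s = 0; s < stages; s++) {
                  int i = cycle - s;
                  if (i >= 0 && i < n)
                      printf("  %s %s", stage[s], instr[i]);
              }
              printf("\n");
          }
          return 0;
      }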

  13. Memory Hierarchy

  14. System Calls To obtain services from the OS, a user program must make a system call, which traps into the kernel and invokes the operating system. When the work has been completed, control is returned to the user program at the instruction following the system call. System calls are kernel functions that serve as an interface for user-mode applications to invoke kernel services such as drivers, file systems, network stacks, etc. System calls are also referred to as "kernel entry points" since applications can enter kernel mode only through a valid system call interface. http://blog.techveda.org/adding-system-calls-linux-kernel-3-5-x/
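
  As a minimal sketch (assuming Linux and the POSIX C library, which the slide does not specify), the program below requests the same kernel service twice: once through the write() library wrapper and once through the raw syscall() entry point. In both cases control traps into the kernel and returns to the next instruction.

      /* Minimal Linux sketch: two ways to invoke the write system call. */
      #include <unistd.h>
      #include <sys/syscall.h>
      #include <string.h>

      int main(void) {
          const char *msg = "hello from user mode\n";

          /* Library wrapper: traps into the kernel, then control returns
             to the instruction after the call. */
          write(STDOUT_FILENO, msg, strlen(msg));

          /* The same kernel entry point invoked directly by number. */
          syscall(SYS_write, STDOUT_FILENO, msg, strlen(msg));
          return 0;
      }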

  15. Single-Platform Operating Systems The early OSes were designed to execute on a single computer. These systems had no mechanisms for accessing another computer through a network. Somewhat later in the history of OSes, tools were created for machine-to-machine data transfer through peer-to-peer connections; however, these were limited to basic file transfer for copying and backup.

  16. Types of Operating Systems
  • Real-Time OS - Features and settings are not accessible by the user. The primary goal of an RTOS is to ensure that a specific set of operations occurs within a precise time period.
  • Embedded OS - A single-user, single-tasking OS used on many small hand-held devices such as personal digital assistants, cell phones and media players.
  • PC OS - The most popular and well-known type of OS. They are single-user, multi-tasking and are used on desktop and laptop computers.
  • Mainframe OS - Also called a multi-user operating system; makes the resources of the computer simultaneously available to many different users. These OSes must balance the processing load and resources to provide fair and effective access to data and processes.
  • Networked & Distributed OS - For many multi-user and client-server applications the mainframe computer has been replaced with many distributed computers managed by a single OS.

  17. Network Operating Systems A network operating system (NOS) is an operating system that contains components and programs that allow a computer on a network to serve requests from other computers for data and to provide access to other resources such as printers and file systems. Examples:
  • JUNOS, used in routers and switches from Juniper Networks
  • Cisco IOS (formerly "Cisco Internetwork Operating System"), a NOS focused on the internetworking capabilities of network devices; used on Cisco Systems routers and some network switches
  • BSD, also used in many network servers
  • Linux
  • Microsoft Windows Server
  • Novell NetWare

  18. Distributed Operating Systems "A distributed operating system is one that looks to its users like an ordinary centralized operating system but runs on multiple, independent central processing units (CPUs). The key concept here is transparency. In other words, the use of multiple processors should be invisible (transparent) to the user. Another way of expressing the same idea is to say that the user views the system as a virtual uniprocessor, not as a collection of distinct machines." - Tanenbaum
  • Plan 9 from Bell Labs - a distributed operating system designed from the ground up as a distributed system; the architecture of Plan 9 is inherently grid-enabled.
  • Amoeba - an open-source, microkernel-based distributed operating system developed by Andrew S. Tanenbaum and others at the Vrije Universiteit. The aim of the Amoeba project is to build a timesharing system that makes an entire network of computers appear to the user as a single machine. (stalled)
  • BOINC - open-source software for volunteer computing and grid computing.

  19. Networked vs Distributed Operating Systems A typical configuration for a network operating system would be a collection of personal computers along with a common printer server and file server for archival storage, all tied together by a local network. Generally speaking, such a system will have most of the following characteristics that distinguish it from a distributed system:
  • Each computer has its own private operating system, instead of running part of a global, system-wide operating system.
  • Each user normally works on his or her own machine; using a different machine invariably requires some kind of "remote login," instead of having the operating system dynamically allocate processes to CPUs.
  • Users are typically aware of where each of their files is kept and must move files between machines with explicit "file transfer" commands, instead of having file placement managed by the operating system.
  • The system has little or no fault tolerance; if 1 percent of the personal computers crash, 1 percent of the users are out of business, instead of everyone simply being able to continue normal work, albeit with 1 percent worse performance.

  20. Parallel vs. Concurrent Processing A concurrent program is a set of sequential programs that can be executed in parallel. A process is one of the sequential threads of execution making up a concurrent program. (In the textbook the term program is reserved for concurrent programs.) Parallel processes are two or more processes executing at the same time. Concurrency is an abstraction that refers to multiple processes, each executing a sequence of operations whose relative timing is arbitrary.
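
  To make "relative timing is arbitrary" concrete, here is a minimal sketch using POSIX threads in C (the thread names and loop counts are illustrative assumptions): the two threads' output lines may interleave differently on every run.

      /* Two concurrent processes (POSIX threads) whose relative timing is
         arbitrary: the interleaving of A and B lines can change between runs. */
      #include <pthread.h>
      #include <stdio.h>

      static void *worker(void *arg) {
          const char *name = arg;
          for (int i = 0; i < 3; i++)
              printf("%s step %d\n", name, i);
          return NULL;
      }

      int main(void) {                      /* compile with: cc -pthread ... */
          pthread_t a, b;
          pthread_create(&a, NULL, worker, "A");
          pthread_create(&b, NULL, worker, "B");
          pthread_join(a, NULL);
          pthread_join(b, NULL);
          return 0;
      }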

  21. Multitasking Multitasking is a simple generalization from the concept of overlapping I/O with a computation to overlapping the computation of one program with that of another. A scheduler program is run by the operating system to determine which process should be allowed to run for the next interval of time. The scheduler can take priority into account and usually implements time-slicing, where computations are periodically interrupted to allow a fair sharing of the computational resources, in particular the CPU. When multitasking is performed within a single program it is referred to as multithreading.

  22. Threads The term process is used in the theory of concurrency, while the term thread is commonly used in programming languages. A technical distinction is often made between the two terms: a process runs in its own address space managed by the OS, while a thread runs within the address space of a single process. The term thread was popularized by pthreads (POSIX threads), a concurrency specification implemented in UNIX/Linux systems. The difference between processes and threads is not relevant to the study of concurrent systems, so we will use the term process except when referring to a specific programming language. C# and Java use threads, while Ada refers to tasks.
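
  The address-space distinction can be seen in a short sketch (assuming a POSIX/Linux environment, which the slide only implies): a forked child gets its own copy of memory, so its update to a global variable is invisible to the parent, while a thread shares the parent's address space, so its update is visible.

      /* Process vs thread: separate vs shared address space. */
      #include <pthread.h>
      #include <stdio.h>
      #include <sys/wait.h>
      #include <unistd.h>

      static int counter = 0;

      static void *thread_body(void *arg) {
          counter += 1;              /* same address space: parent sees this */
          return NULL;
      }

      int main(void) {
          if (fork() == 0) {         /* child process: separate address space */
              counter += 100;        /* modifies the child's private copy only */
              _exit(0);
          }
          wait(NULL);
          printf("after fork:   counter = %d\n", counter);   /* still 0 */

          pthread_t t;
          pthread_create(&t, NULL, thread_body, NULL);
          pthread_join(t, NULL);
          printf("after thread: counter = %d\n", counter);   /* now 1 */
          return 0;
      }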

  23. Multiprocessors & Distributed Programming The modern personal computer contains more than one processor: the graphics processor is a specialized computer in its own right, as are the processors for I/O and communications. In this sense desktop computers are multiprocessor systems. Multiprocessing can also be performed using multiple computers. A program that runs on multiple networked computers is called a distributed program. The entire Internet can be considered to be one distributed system working to disseminate information in the form of email and Web pages.

  24. The Challenge of Concurrent Programming The challenge in concurrent programming comes from the need to synchronize the execution of different processes and to enable them to communicate. It turns out to be very challenging to implement safe and efficient synchronization and communication. When a concurrent program crashes, produces incorrect answers, or exhibits unexpected behavior, it can be very difficult to reproduce, diagnose and correct the problem.
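
  A classic illustration of why synchronization is hard, sketched here with POSIX threads in C (an example chosen for this transcript, not taken from the slide): two threads increment a shared counter. With the mutex the result is always 2,000,000; remove the lock/unlock pair and the total is usually smaller and varies from run to run, exactly the hard-to-reproduce behavior described above.

      /* Shared counter updated by two threads; the mutex prevents the race. */
      #include <pthread.h>
      #include <stdio.h>

      static long counter = 0;
      static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

      static void *increment(void *arg) {
          for (int i = 0; i < 1000000; i++) {
              pthread_mutex_lock(&lock);   /* remove lock/unlock to see the race */
              counter++;
              pthread_mutex_unlock(&lock);
          }
          return NULL;
      }

      int main(void) {
          pthread_t t1, t2;
          pthread_create(&t1, NULL, increment, NULL);
          pthread_create(&t2, NULL, increment, NULL);
          pthread_join(t1, NULL);
          pthread_join(t2, NULL);
          printf("counter = %ld (expected 2000000)\n", counter);
          return 0;
      }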
