
COP 4600 Operating Systems Spring 2011



Presentation Transcript


  1. COP 4600 Operating Systems Spring 2011 Dan C. Marinescu Office: HEC 304 Office hours: Tu-Th 5:00-6:00 PM

  2. Lecture 15 – Thursday, March 17, 2011
    • Last time: Midterm solutions
    • Today:
      • Virtualization for the three abstractions: threads, virtual memory, the bounded buffer
      • The kernel of an operating system
      • Threads: state, the thread manager, thread state, kernel and application threads
    • Next time: Processor switching

  3. Virtualization – relating physical with virtual objects
    • Virtualization → simulating the interface to a physical object by:
      • Multiplexing → creating multiple virtual objects from one instance of a physical object.
      • Aggregation → creating one virtual object from multiple physical objects.
      • Emulation → constructing a virtual object from a different type of physical object. Emulation in software is slow.

  4. Virtualization of the three abstractions
    • We analyze virtualization for the three abstractions:
      • Interpreter → threads
      • Communication link/channel → bounded buffer
      • Storage → virtual memory

  5. (1) Virtualization of the interpreter – threads
    • Process/thread → a virtual processor.
    • Multiplexing (processor sharing) is possible because:
      • there is a significant discrepancy between processor bandwidth and the bandwidth of memory and of I/O devices;
      • threads spend a significant fraction of their lifetime waiting for external events.
    • Also called: time-sharing, processor multiplexing, multiprogramming, multitasking.
    • Processes versus threads (see the sketch below):
      • Both represent a module in execution.
      • A process may consist of multiple threads.
      • A thread is a light-weight process; it has less creation overhead.
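
A minimal sketch (not from the slides) of one process running two POSIX threads that share a single address space; the variable and function names are illustrative:

```c
#include <pthread.h>
#include <stdio.h>

static int shared = 0;                 /* lives in the process's single address space */
static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    pthread_mutex_lock(&m);
    shared += 1;                       /* both threads see the same variable */
    pthread_mutex_unlock(&m);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);   /* cheap compared to creating a process */
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("shared = %d\n", shared);           /* prints 2 */
    return 0;
}
```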

  6. [Figure slide – no transcript text]

  7. Tight coupling between interpreter and storage
    • We need a memory enforcement mechanism to prevent a thread running the code of one module from overwriting the data of another module.
    • Address space → the range of memory addresses a thread is allowed to access.
    • There is a close relationship between a thread and an address space.

  8. (2) Virtualization of storage – virtual memory
    • Address space – the storage a thread is allowed to access.
    • Virtual address space – an address space of a standard size, regardless of the amount of physical memory.
    • The physical memory may be too small to hold an application; without virtual memory, each application would need to manage its own memory.
    • The size of the address space is a function of the number of bits in an address: 32-bit addresses give an address space of 2^32 bytes (4 GB); 64-bit addresses give 2^64 bytes (see the sketch below).
    • Virtual memory – a scheme that allows each thread to access only its own virtual address space (its collection of virtual addresses).
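
As a quick illustration of the address-width arithmetic, a small C program (hypothetical, not part of the lecture) that computes the 32-bit address-space size:

```c
#include <stdio.h>
#include <stdint.h>

int main(void) {
    uint64_t space32 = 1ULL << 32;     /* 2^32 bytes = 4 GB */
    printf("32-bit address space: %llu bytes\n", (unsigned long long)space32);
    /* 2^64 does not fit in a 64-bit counter, so report it symbolically. */
    printf("64-bit address space: 2^64 bytes (about 1.8e19)\n");
    printf("pointer size on this machine: %zu bits\n", sizeof(void *) * 8);
    return 0;
}
```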

  9. Memory map of a process [figure]

  10. [Figure slide – no transcript text]

  11. (3) Virtualization of the communication link – bounded buffers
    • Bounded buffers implement the communication channel abstraction.
    • Bounded → the buffer has a finite size. We assume that all messages are of the same size and that each fits into one buffer cell; a bounded buffer accommodates at most N messages.
    • Threads use the SEND and RECEIVE primitives (see the sketch below).
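
A possible C sketch of an N-slot bounded buffer, assuming fixed-size messages (plain ints) and a pthreads-based implementation; bb_send/bb_receive are stand-ins for the SEND and RECEIVE primitives on the slide, not the textbook's code:

```c
#include <pthread.h>

#define N 8                                  /* capacity: at most N messages */

struct bounded_buffer {
    int  slot[N];
    int  in, out, count;                     /* next free slot, next message, fill level */
    pthread_mutex_t lock;
    pthread_cond_t  not_full, not_empty;
};

void bb_init(struct bounded_buffer *b) {
    b->in = b->out = b->count = 0;
    pthread_mutex_init(&b->lock, NULL);
    pthread_cond_init(&b->not_full, NULL);
    pthread_cond_init(&b->not_empty, NULL);
}

void bb_send(struct bounded_buffer *b, int msg) {
    pthread_mutex_lock(&b->lock);
    while (b->count == N)                    /* buffer full: wait */
        pthread_cond_wait(&b->not_full, &b->lock);
    b->slot[b->in] = msg;
    b->in = (b->in + 1) % N;
    b->count++;
    pthread_cond_signal(&b->not_empty);
    pthread_mutex_unlock(&b->lock);
}

int bb_receive(struct bounded_buffer *b) {
    pthread_mutex_lock(&b->lock);
    while (b->count == 0)                    /* buffer empty: wait */
        pthread_cond_wait(&b->not_empty, &b->lock);
    int msg = b->slot[b->out];
    b->out = (b->out + 1) % N;
    b->count--;
    pthread_cond_signal(&b->not_full);
    pthread_mutex_unlock(&b->lock);
    return msg;
}
```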

  12. [Figure slide – no transcript text]

  13. Basic concepts
    • Principle of least astonishment – study and understand simple phenomena or facts before moving on to complex ones. For example:
      • Concurrency – an application requires multiple threads that run at the same time. This is tricky; understand sequential processing first.
      • Examine a simple operating-system interface to the three abstractions.
    • At the same time we need concepts that can be extended; recall the extension of the UNIX file system to NFS.
    • Create a framework that allows us to deal with more complex phenomena. Example: how to deal with concurrency:
      • Serialization
      • Critical sections → code that must be executed by a single thread to completion.
      • Locks → mechanisms to ensure serialization.
    • Interrupts – external events that require suspension of the current thread, identification of the cause of the interrupt, and a reaction to it.
    • State – the critical concept that allows us to interrupt the execution of a thread and restart it at a later point in time.
    • Event – a change of the state of an interpreter (thread, processor).

  14. Side effects of concurrency
    • Race condition → an error that occurs when a device or system attempts to perform two or more operations at the same time, but, because of the nature of the device or system, the operations must be done in the proper sequence to be done correctly.
    • Race conditions depend on the exact timing of events and thus are not reproducible.
    • A slight variation of the timing can either remove a race condition or create one.
    • Such errors are very hard to debug (see the sketch below).
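
A small C example (not from the lecture) that exhibits such a race: two threads increment an unprotected shared counter, and the final value varies from run to run:

```c
#include <pthread.h>
#include <stdio.h>

static long counter = 0;                 /* shared, intentionally unprotected */

static void *bump(void *arg) {
    (void)arg;
    for (int i = 0; i < 1000000; i++)
        counter++;                       /* load/add/store: races with the other thread */
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, bump, NULL);
    pthread_create(&t2, NULL, bump, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("expected 2000000, got %ld\n", counter);  /* usually less, and different each run */
    return 0;
}
```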

  15. Lock
    • Lock → a mechanism to guarantee that a program works correctly when multiple threads execute concurrently:
      • a multi-step operation protected by a lock behaves like a single operation;
      • can be used to implement before-or-after atomicity;
      • a shared variable acting as a flag (traffic light) to coordinate access to another shared variable;
      • works only if all threads follow the rule → check the lock before accessing the shared variable (see the sketch below).
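
A minimal sketch of the flag idea: a spin lock built on an atomic test-and-set. The names acquire/release are hypothetical; the rule from the slide still applies, since the lock protects nothing unless every thread calls acquire() first:

```c
#include <stdatomic.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;   /* clear = unlocked */
static int shared_balance = 0;                /* the variable being protected */

void acquire(void) {
    while (atomic_flag_test_and_set(&lock))   /* loops until the flag was clear */
        ;                                     /* busy wait */
}

void release(void) {
    atomic_flag_clear(&lock);
}

void deposit(int amount) {
    acquire();                 /* the multi-step update now behaves like a single operation */
    shared_balance += amount;
    release();
}
```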

  16. The kernel implements the three abstractions
    • The kernel:
      • is responsible for the management of all system resources, including the processor;
      • performs operations on behalf of users in response to system calls; in other words, it extends the instruction set of the processor with a number of operations (see the sketch below).
    • Two modes of operation of a computing system:
      • kernel/privileged mode
      • user mode
    • Multiple functions:
      • Scheduling and thread management
      • Event handling
      • Memory management
      • Communication management
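
As an illustration (not from the slides), a user-mode C program that asks the kernel to act on its behalf through the write() system call:

```c
#include <unistd.h>

int main(void) {
    const char msg[] = "hello from user mode\n";
    /* write() traps into kernel mode; the kernel performs the I/O on
       file descriptor 1 (standard output) and returns to user mode. */
    ssize_t n = write(1, msg, sizeof msg - 1);
    (void)n;
    return 0;
}
```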

  17. Thread and VM management – a virtual computer
    • The kernel supports thread and virtual-memory management.
    • Thread management:
      • creation and destruction of threads;
      • allocation of the processor to a ready-to-run thread;
      • handling of interrupts;
      • scheduling – deciding which of the ready-to-run threads should be allocated the processor.
    • Virtual-memory management → maps the virtual address space of a thread to physical memory.
    • Each module runs in its own address space; if one module runs multiple threads, they all share one address space.
    • Thread + virtual memory → a virtual computer for each module.

  18. Threads and the thread manager
    • Thread → a virtual processor that multiplexes a physical processor:
      • a module in execution;
      • a module may have several threads.
    • Sequence of operations:
      • load the module's text;
      • create a thread and launch the execution of the module in that thread.
    • Scheduler → the system component that chooses the thread to run next.
    • Thread manager → implements the thread abstraction.
    • Interrupts are processed by the interrupt handler, which interacts with the thread manager.
    • Exceptions → interrupts caused by the running thread, processed by exception handlers.
    • Interrupt handlers run in the context of the OS, while exception handlers run in the context of the interrupted thread.

  19. The state of a thread; kernel versus application threads
    • Thread state (see the sketch below):
      • Thread id → unique identifier of the thread
      • Program counter (PC) → the reference to the next computational step
      • Stack pointer (SP)
      • PMAR → page-map address register, identifying the thread's address space
      • Other registers
    • Application threads – threads running on behalf of users.
    • Kernel threads – threads running on behalf of the kernel.
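
A hypothetical C layout of one thread-table entry holding the state listed above; the field names are illustrative, not from the slides or any real kernel:

```c
#include <stdint.h>

enum thread_status { RUNNING, RUNNABLE, WAITING };

struct thread_state {
    int        thread_id;        /* unique identifier                        */
    uintptr_t  pc;               /* program counter: next instruction        */
    uintptr_t  sp;               /* stack pointer                            */
    uintptr_t  pmar;             /* page-map address register: which address
                                    space the thread runs in                 */
    uintptr_t  registers[16];    /* other general-purpose registers          */
    enum thread_status status;   /* used by the thread manager/scheduler     */
    int        is_kernel_thread; /* kernel thread vs. application thread     */
};
```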

  20. Virtual versus real; the state of a processor
    • Virtual objects need physical support.
    • In addition to threads, we must be concerned with the processor or core running a thread.
    • A system may have multiple processors, and each processor may have multiple cores.
    • The state of a processor or core:
      • Processor id/core id → unique identifier of the processor/core
      • Program counter (PC) → the reference to the next computational step
      • Stack pointer (SP)
      • PMAR → page-map address register
      • Other registers

  21. [Figure slide – no transcript text]

  22. [Figure slide – no transcript text]

  23. Virtual machines
    • Allow:
      • multiple operating systems to run on the same processor;
      • a processor with a certain instruction set to emulate one with a different instruction set, e.g., the Java Virtual Machine runs under Windows, Linux, Mac OS, etc.

  24. [Figure slide – no transcript text]

  25. Basic primitives for processor virtualization [figure]

  26. Switching the processor from one thread to another
    • Thread creation: thread_id ← ALLOCATE_THREAD(starting_address_of_procedure, address_space_id)
    • YIELD → a function implemented by the kernel that allows a thread to wait for an event:
      • save the state of the current thread;
      • schedule another thread;
      • start running the new thread – dispatch the processor to the new thread.
    • YIELD:
      • cannot be implemented in a high-level language; it must be implemented in machine language;
      • can be called from the environment of the thread, e.g., C, C++, Java;
      • allows several threads running on the same processor to wait for a lock; it replaces the busy wait we used before (see the sketch below).
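
A sketch of how yielding replaces busy waiting at the application level, using the POSIX sched_yield() call as a stand-in for the kernel's YIELD; acquire_polite() is a hypothetical name:

```c
#include <stdatomic.h>
#include <sched.h>

static atomic_flag lock = ATOMIC_FLAG_INIT;

void acquire_polite(void) {
    while (atomic_flag_test_and_set(&lock))
        sched_yield();          /* give the processor to another ready thread
                                   instead of spinning until the lock is free */
}

void release(void) {
    atomic_flag_clear(&lock);
}
```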

  27. Thread states and state transitions [figure]

  28. The state of a thread and its associated virtual address space [figure]
