
Computer Science 162 Discussion Section Week 2





Presentation Transcript


  1. Computer Science 162 Discussion Section Week 2

  2. Agenda • Recap “What is an OS?” and Why? • Process vs. Thread • “THE” System

  3. Note: Many slides are modifications of slides from Matei Zaharia, who referenced slides from Steve Gribble, Ed Lazowska, Hank Levy, and John Zahorjan

  4. Why do we want an OS? • Isolation • Fault: “if my program crashes, yours shouldn’t” • Performance: “if my program starts to do some massive computation, it shouldn’t starve yours from running” • Mediation (multiplexing/sharing + protection) • Manage the sharing of hardware resources (CPU, NIC, RAM, disk, keyboard, sound card, etc.) • Abstractions and Primitives • Set of constructs and well-defined interfaces to simplify application development: “all the code you didn’t write” in order to implement your application • Because hardware changes faster than applications! • Because some concepts are useful across applications

  5. Why bother with an OS? • User benefits • Efficiency (cost and performance) • share one computer across many users • concurrent execution of multiple programs • Safety • OS protects programs from each other • OS fairly multiplexes resources across programs • Application benefits • Simplicity • sockets instead of Ethernet cards • Portability • device independence: 3Com card or Intel card?

  6. Concurrency and Parallelism • Concurrency means multiple threads of computation can make progress, but possibly by sharing the same processor • Like doing homework while chatting on IM • Parallelism means leveraging multiple processors to compute a result faster • Like dividing a pile of work among people

  7. Why Concurrency? • Consider a web server: while it’s waiting for a response from one client, it could read a request from another client • Consider a browser: while it’s waiting for a response from a web server, it wants to react to mouse or keyboard input • Concurrency increases/enables responsiveness

  8. Why Parallelism? • Because we actually have multiple CPUs! • Because matrix multiply goes so much faster! NOTE: Parallelism requires multiple processors, while concurrency also helps on a uniprocessor

  9. Lifecycle of a Thread (or Process) • As a thread executes, it changes state: • new: The thread is being created • ready: The thread is waiting to run • running: Instructions are being executed • waiting: Thread waiting for some event to occur • terminated: The thread has finished execution • “Active” threads are represented by their TCBs • TCBs organized into queues based on their state

  10. How does the OS do it? • Kernel: The highly privileged code that carries out the lowest-level OS functions • Use multiple processes; the OS schedules them (i.e. multiplexes resources between them) • Each process has its own address space • Each process maintains a list of open files, open network connections … • Use multiple threads within a process; either the OS or the user schedules them • Threads share the process’s address space • Threads are cheaper than processes and can more easily share state! But they have no isolation.

  11. Recall, an OS needs to mediate access to resources: how do we share the CPU? • Strategy 1: force everyone to cooperate • a thread willingly gives up the CPU by calling yield() which calls into the scheduler, which context-switches to another thread • what if a thread never calls yield()? • Strategy 2: use preemption • at timer interrupt, scheduler gains control and context switches as appropriate

  12. Review: Two Thread Yield Example • Consider the following code blocks: proc A() { B(); } proc B() { while(TRUE) { yield(); } } • Suppose we have 2 threads: Threads S and T • Each thread’s stack grows through the same frames: A, B(while), yield, run_new_thread, switch • The switch at the bottom of one thread’s stack transfers control to the other thread, and vice versa

  13. “THE” System • Dijkstra • Algorithm (shortest path) • OS (“THE”) • Software Engineering (“GOTO Considered Harmful”) • Programming Language and Formal Verification

  14. “THE” Multiprogramming System • Why Multiprogramming? • Reduction in turnaround time for short programs • Economic use of peripheral devices • Automatic control of the backing store, combined with economic use of the CPU • Applications need a general-purpose processor, but not all of its power

  15. Storage • Core -> RAM, Drum -> Disk (today’s analogues) • Separation of virtual and physical location • On a page swap, the contents of a page can be written to a different location on the drum • No need for consecutive physical locations

  16. Processor • A Collection of Sequential Processes Working Together • Process State • Mutual Synchronization

  17. Hierarchy • Level 0 – Present a virtual processor • Level 1 – Present virtual segments • Level 2 – Present a virtual console • Level 3 – Present a buffered IO interface to devices • Level 4 – User Programs

  18. Benefit of Layering • Limited Interface • Fewer Bugs • Easier to Test • Easier to Communicate

  19. Semaphore • Found in the Appendix, but so important! • Shared between sequential processes for synchronization • P -> decrease -> possibly block (if no units are available) • V -> increase -> possibly wake a blocked process • P, V are indivisible (atomic)

  20. Backup Slides

  21. Kernel-level threads • [Diagram: a process address space above the OS kernel; the kernel manages the threads and CPUs and provides thread create, destroy, signal, wait, etc.]

  22. Are kernel threads too expensive? • Historically yes (thread operations require system calls), but aren’t too bad in practice today, if you use them correctly. • Alternatives?

  23. User-level threads • [Diagram: a user-level thread library inside the address space provides thread create, destroy, signal, wait, etc.; the OS kernel sits below, with its own thread on the CPU]

  24. User-level threads: what the kernel sees • [Diagram: just the address space with a single kernel-visible thread on the CPU; the user-level threads inside are invisible to the kernel]

  25. User-level threads: the full story • [Diagram: a user-level thread library (thread create, destroy, signal, wait, etc.) multiplexes user threads on top of kernel threads, which the OS kernel (kernel thread create, destroy, signal, wait, etc.) schedules on the CPUs]

  26. Are user-level threads the answer? • No, Google “scheduler activations” for a great discussion of why user-level threads aren’t enough!
