

  1. Today • Parallel and Distributed Systems, Virtualization • Systems Take-Away • Final • No break: end early • Chapters 8 and 13

  2. Space Sharing • A set of 32 CPUs split into four partitions, with two CPUs left available. • One OS per node. • A parallel/distributed job has interprocessor communication. • A job gets exclusive use of its K processors: no other jobs run on them, it runs to completion, and it usually runs in batch mode.
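
To make the space-sharing model concrete, here is a minimal sketch (my own illustration, not from the slides; the allocate/release names are invented): a job either gets exclusive use of K of the 32 CPUs or waits in the batch queue.

    /* Minimal space-sharing sketch (illustrative, not from the slides):
     * a job either gets K dedicated CPUs out of 32 or must wait. */
    #include <stdio.h>

    #define NCPUS 32
    static int owner[NCPUS];          /* 0 = free, otherwise job id */

    /* Try to give job 'id' exclusive use of k CPUs; return 1 on success. */
    int allocate(int id, int k) {
        int nfree = 0;
        for (int i = 0; i < NCPUS; i++) if (owner[i] == 0) nfree++;
        if (nfree < k) return 0;               /* job waits in batch queue */
        for (int i = 0; i < NCPUS && k > 0; i++)
            if (owner[i] == 0) { owner[i] = id; k--; }
        return 1;
    }

    /* Release all CPUs when the job runs to completion. */
    void release(int id) {
        for (int i = 0; i < NCPUS; i++) if (owner[i] == id) owner[i] = 0;
    }

    int main(void) {
        printf("job 1 gets 8 CPUs:  %d\n", allocate(1, 8));
        printf("job 2 gets 30 CPUs: %d\n", allocate(2, 30)); /* fails: 24 free */
        release(1);
        printf("job 2 retries:      %d\n", allocate(2, 30)); /* now succeeds */
        return 0;
    }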

  3. Time sharing • Processes of different jobs are co-mingled • Pros/cons for parallel jobs? • More efficient utilization, particularly if processes block … • Communicating processes/threads may not be running at the same time

  4. Gang Scheduling • Figure 8-14. Communication between two threads belonging to process A that are running out of phase. Need a more complex OS to ensure A0 and A1 are running together!

  5. Gang Scheduling The three parts of gang scheduling: • Groups of related processes/threads are scheduled as a unit, a gang. • All members of a gang run simultaneously, on different timeshared CPUs. • All gang members start and end their time slices together.
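
A toy version of the slot table (my illustration in the spirit of Fig. 8-15, not the book's code): gangs are placed so that every member of a gang occupies the same time slot, and a gang is never split across slots.

    /* Gang-scheduling sketch (illustrative): all threads of a gang land
     * in the same time slot, so they start and end quanta together. */
    #include <stdio.h>

    #define NCPUS 4
    #define NSLOTS 3

    int main(void) {
        const char *name[] = { "A", "B", "C", "D" };
        int size[] = { 4, 3, 2, 2 };                 /* gang sizes */
        const char *slot[NSLOTS][NCPUS] = { { 0 } }; /* slot x CPU table */

        int s = 0;
        for (int g = 0; g < 4 && s < NSLOTS; g++) {
            /* find a slot with enough free CPUs for the whole gang */
            while (s < NSLOTS) {
                int nfree = 0;
                for (int c = 0; c < NCPUS; c++) if (!slot[s][c]) nfree++;
                if (nfree >= size[g]) break;
                s++;                      /* a gang is never split */
            }
            if (s == NSLOTS) break;
            int placed = 0;
            for (int c = 0; c < NCPUS && placed < size[g]; c++)
                if (!slot[s][c]) { slot[s][c] = name[g]; placed++; }
        }
        for (int t = 0; t < NSLOTS; t++) {
            printf("slot %d:", t);
            for (int c = 0; c < NCPUS; c++)
                printf(" %s", slot[t][c] ? slot[t][c] : "-");
            printf("\n");
        }
        return 0;
    }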

  6. Gang Scheduling • Figure 8-15. Gang scheduling.

  7. Backfilling
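
The slide's figure is not reproduced here, but the idea behind (EASY) backfilling is that a small job from later in the queue may run in currently idle CPUs, provided it finishes before the reserved start time of the job at the head of the queue. A minimal sketch of that admission check (illustrative; the Job fields and the can_backfill name are invented):

    /* EASY-backfilling check (illustrative sketch): a small job may jump
     * ahead only if it fits in the idle CPUs now and will finish before
     * the reservation made for the job at the head of the queue. */
    #include <stdio.h>

    typedef struct { int cpus; int runtime; } Job;  /* user-estimated runtime */

    /* now        = current time
     * idle       = CPUs idle right now
     * head_start = reserved start time of the job at the head of the queue */
    int can_backfill(Job j, int now, int idle, int head_start) {
        return j.cpus <= idle && now + j.runtime <= head_start;
    }

    int main(void) {
        Job small = { 2, 5 }, big = { 4, 50 };
        /* 3 CPUs idle now; the head job's reservation starts at t = 10 */
        printf("small job backfills: %d\n", can_backfill(small, 0, 3, 10)); /* 1 */
        printf("big job backfills:   %d\n", can_backfill(big, 0, 3, 10));   /* 0 */
        return 0;
    }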

  8. Virtualization • Consolidate K servers onto L (L &lt; K) • Why? • Save $$ and space • Run legacy apps on new h/w • Contained environment/OS – easily migrated • Reliability? • Failures are mostly due to buggy code

  9. Type 1 Hypervisors • When the operating system in a virtual machine executes a kernel-only instruction, it traps to the hypervisor if virtualization technology (VT) is present. If not, what would happen? • Downsides? • Requires hardware support, and a Type 1 hypervisor is effectively a fresh OS
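
A hedged sketch of the trap-and-emulate idea (purely illustrative; the Privop and VMState types are invented): the guest kernel's privileged instruction traps, and the hypervisor emulates it against per-VM state instead of letting it touch the real hardware.

    /* Toy trap-and-emulate dispatch (illustrative only): the hypervisor's
     * trap handler emulates privileged instructions against per-VM state. */
    #include <stdio.h>

    typedef enum { CLI, STI, OUTB } Privop;      /* toy privileged ops */
    typedef struct { int if_flag; } VMState;     /* virtual interrupt flag */

    /* Called when a privileged instruction in the guest kernel traps. */
    void hypervisor_trap(VMState *vm, Privop op, int arg) {
        switch (op) {
        case CLI:  vm->if_flag = 0; break;       /* emulate "disable ints" */
        case STI:  vm->if_flag = 1; break;       /* emulate "enable ints"  */
        case OUTB: printf("virtual I/O port write: %d\n", arg); break;
        }
    }

    int main(void) {
        VMState vm = { 1 };
        hypervisor_trap(&vm, CLI, 0);            /* guest ran CLI -> trap */
        printf("guest's virtual IF is now %d\n", vm.if_flag);
        hypervisor_trap(&vm, OUTB, 42);
        return 0;
    }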

  10. T1 vs. T2 • VMware binary translation – sensitive instructions are replaced with calls that emulate them; e.g. an I/O instruction becomes a call into the hypervisor, which can do a read syscall
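
A toy illustration of the rewriting step (not VMware's actual translator, which works on real x86 basic blocks): scan a block of instructions and replace each sensitive one with a call into the hypervisor's emulation code.

    /* Toy binary-translation pass (illustrative): replace each sensitive
     * instruction in a "basic block" with a call into the hypervisor. */
    #include <stdio.h>

    typedef enum { ADD, MOV, IN, OUT, HYPERCALL } Insn;

    int is_sensitive(Insn i) { return i == IN || i == OUT; }

    int main(void) {
        const char *name[] = { "ADD", "MOV", "IN", "OUT", "HYPERCALL" };
        Insn block[] = { MOV, IN, ADD, OUT, MOV };
        int n = sizeof block / sizeof block[0];

        for (int i = 0; i < n; i++)
            if (is_sensitive(block[i]))
                block[i] = HYPERCALL;  /* e.g. OUT -> call emulation code */

        for (int i = 0; i < n; i++)
            printf("%s ", name[block[i]]);
        printf("\n");
        return 0;
    }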

  11. Paravirtualization • T1/T2 hypervisors pay emulation overhead. Xen (and VMware as well) – no sensitive instructions in the guest OS, just explicit calls to the hypervisor
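
A minimal sketch of the paravirtual style (illustrative; the hypercall number and names are invented, and real hypervisors such as Xen pass arguments in registers through a special entry point): the guest kernel is modified to call the hypervisor explicitly, so nothing needs to trap or be translated.

    /* Paravirtualization sketch (illustrative): the guest OS calls the
     * hypervisor directly instead of executing sensitive instructions. */
    #include <stdio.h>

    /* Hypothetical hypercall interface, stubbed out with a printf. */
    long hypercall(int num, long arg) {
        printf("hypercall %d(%ld) handled by hypervisor\n", num, arg);
        return 0;
    }

    #define HC_SET_TIMER 1

    /* In the guest kernel, "program the timer" becomes a plain call. */
    void guest_set_timer(long ticks) { hypercall(HC_SET_TIMER, ticks); }

    int main(void) {
        guest_set_timer(1000);
        return 0;
    }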

  12. Distributed Systems • Classic scheduling • Sender- or receiver-initiated • Client-server systems • saturate the bandwidth going into the server … • non-scalable
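
A small sketch of sender-initiated load sharing (illustrative; the threshold and names are invented): an overloaded node probes a random peer and ships work there if the peer is lightly loaded. This is the same algorithm slide 31 revisits: cheap under low load, lots of failed probes under high load.

    /* Sender-initiated load sharing sketch (illustrative): an overloaded
     * node probes a random peer and hands work off if the peer is light. */
    #include <stdio.h>
    #include <stdlib.h>

    #define NNODES 8
    #define THRESHOLD 3

    int queue_len[NNODES];

    void submit(int node) {
        if (queue_len[node] >= THRESHOLD) {      /* sender is overloaded  */
            int peer = rand() % NNODES;          /* pick a random target  */
            if (peer != node && queue_len[peer] < THRESHOLD) {
                queue_len[peer]++;               /* probe succeeded       */
                printf("node %d shipped a job to node %d\n", node, peer);
                return;
            }                                    /* probe failed: run it
                                                    locally anyway        */
        }
        queue_len[node]++;
    }

    int main(void) {
        for (int i = 0; i < 10; i++) submit(0);  /* hammer one node */
        for (int n = 0; n < NNODES; n++)
            printf("node %d queue: %d\n", n, queue_len[n]);
        return 0;
    }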

  13. Peer-to-Peer (P2P) • Systems Perspective: Scalable

  14. Grid • The “Grid” is a vision • ensemble of geographically-dispersed resources • seamless, transparent access • Analogy to Power Grids • cheap, ubiquitous, consistent • computational grids deliver computing & data - not power • Core Grid Features and Challenges • single-sign on • dynamic and shared • highly heterogeneous • multiple administrative domains • sheer scale • Systems Perspective: Wide-area OS

  15. Cloud • Buzz words: virtualized, pay-as-you-go, scale up • Systems perspective: centralized management, co-location

  16. CSci 5103 Operating Systems • Operating System Design • Tanenbaum MOS Chap. 13

  17. Goals of an OS • Define abstractions • Provide primitive operations • Ensure isolation • Manage the hardware • Multiple roles of the OS …

  18. Why is it hard to design an OS? • Extremely large programs: Unix ~1M lines of code, Windows 2K ~29M • Concurrency • Hostile users • Users want to share, yet isolation must be provided • Long-lived: hardware will change, the OS must adapt • Designers have little idea how the system will be used • Portability is a must • Backward compatibility

  19. Interface Design • Simplicity (KISS) • Perfection is reached not when there is no longer anything to add, but when there is no longer anything to take away • Completeness • Everything should be as simple as possible, but no simpler • Key idea “minimum of mechanism” or elegance • Efficiency • e.g. system calls should be efficient and cost should be evident • which is faster: fork or thr_create?
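
One way to make that last question concrete is to measure it. A sketch (assuming POSIX; exact numbers vary by system, but pthread_create is typically far cheaper than fork because no address space is copied):

    /* Making cost evident (illustrative): time fork() vs. pthread_create().
     * Compile with: cc bench.c -lpthread */
    #include <pthread.h>
    #include <stdio.h>
    #include <sys/wait.h>
    #include <time.h>
    #include <unistd.h>

    #define N 1000

    void *noop(void *arg) { return arg; }

    static double elapsed(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) + (b.tv_nsec - a.tv_nsec) / 1e9;
    }

    int main(void) {
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pid_t pid = fork();
            if (pid == 0) _exit(0);          /* child exits immediately */
            waitpid(pid, NULL, 0);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("fork:           %.3f s for %d\n", elapsed(t0, t1), N);

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < N; i++) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("pthread_create: %.3f s for %d\n", elapsed(t0, t1), N);
        return 0;
    }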

  20. System Call Interface • Adding more code adds more bugs • Don’t hide power … hide unnecessary complexity • Expose powerful hardware features • E.g. if the hardware provides a way to move large bitmaps around the screen (video RAM), you might want to provide a syscall for it

  21. System Call Interface • Exploit unifying paradigms • In Unix many objects have file-like behavior • (source/sink for data) • read_file, read_tty, read_socket, read_device, … • instead a single read call can be used
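
For example, the same read() works unchanged on a regular file and a pipe (and equally on sockets and terminals), because all of them are file descriptors acting as a source of bytes. The file path below is just an example:

    /* The unifying paradigm in action (illustrative): one read() call
     * serves both a regular file and a pipe. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[64];
        ssize_t n;

        int fd = open("/etc/hostname", O_RDONLY);   /* a regular file */
        if (fd >= 0) {
            n = read(fd, buf, sizeof buf - 1);
            if (n > 0) { buf[n] = '\0'; printf("file: %s", buf); }
            close(fd);
        }

        int p[2];
        pipe(p);                                    /* a pipe: same read() */
        write(p[1], "hi\n", 3);
        n = read(p[0], buf, sizeof buf - 1);
        if (n > 0) { buf[n] = '\0'; printf("pipe: %s", buf); }
        close(p[0]); close(p[1]);
        return 0;
    }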

  22. Implementation • Monolithic: Unix – no structure or modularity • Layering: • Client/Server/u-kernel • Many OS functions are represented as servers in user-space • Extensible? • Performance?

  23. Mechanism vs. Policy • Mechanism defines how something is implemented • Policy governs how mechanism is used • Should be kept separate • E.g. • mechanism: priority scheduling for threads • policy: how priorities are set • ACLs • Orthogonality • Combine separate concepts/mechanisms • Process is a container for resources • Thread is a schedulable entity • Evolve each capability separately and combine as needed
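
A minimal sketch of the separation (illustrative; the function names are invented): pick_next is the mechanism and never changes, while the priority-setting policy can be swapped freely.

    /* Mechanism vs. policy (illustrative sketch): the mechanism is "run
     * the highest-priority ready thread"; the policy is how priorities
     * get set, and it can change without touching the mechanism. */
    #include <stdio.h>

    typedef struct { const char *name; int prio; } Thread;

    /* Mechanism: pick the runnable thread with the highest priority. */
    Thread *pick_next(Thread *t, int n) {
        Thread *best = &t[0];
        for (int i = 1; i < n; i++)
            if (t[i].prio > best->prio) best = &t[i];
        return best;
    }

    /* Policy: one possible rule -- boost interactive threads. */
    void policy_boost_interactive(Thread *t, int interactive) {
        if (interactive) t->prio += 10;
    }

    int main(void) {
        Thread t[] = { { "batch", 5 }, { "editor", 5 } };
        policy_boost_interactive(&t[1], 1);
        printf("next to run: %s\n", pick_next(t, 2)->name);  /* editor */
        return 0;
    }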

  24. Naming • Human-readable names hide underlying complexity • Binding time: when is a name bound to an address? • early vs. late binding • Example?
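
One everyday example of binding time (illustrative; libm.so.6 is the Linux name of the math library, so this sketch assumes a Linux system): cos bound at link time versus the same name resolved only at run time.

    /* Early vs. late binding (illustrative): cos() resolved by the linker
     * vs. the same name looked up at run time with dlopen()/dlsym().
     * Compile with: cc bind.c -lm -ldl */
    #include <dlfcn.h>
    #include <math.h>
    #include <stdio.h>

    int main(void) {
        /* Early binding: "cos" was resolved before the program ran. */
        printf("early: cos(0) = %f\n", cos(0.0));

        /* Late binding: the name is resolved only now, at run time. */
        void *h = dlopen("libm.so.6", RTLD_NOW);
        if (h) {
            double (*f)(double) = (double (*)(double))dlsym(h, "cos");
            if (f) printf("late:  cos(0) = %f\n", f(0.0));
            dlclose(h);
        }
        return 0;
    }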

  25. Static vs. Dynamic Structures • Searching a static table for a pid is easy. Suppose it were a linked list of mini-tables? • Flexibility vs. complexity • Dynamic = flexibility, but with pitfalls
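
A sketch of the trade-off (illustrative): the static table is one flat scan, while the chained mini-tables grow on demand but require pointer-walking code that is easier to get wrong.

    /* Static vs. dynamic process tables (illustrative). */
    #include <stdio.h>

    #define CHUNK 4

    typedef struct MiniTable {
        int pid[CHUNK];
        struct MiniTable *next;          /* dynamic: grows as needed */
    } MiniTable;

    int lookup(MiniTable *t, int pid) {
        for (; t != NULL; t = t->next)   /* the extra complexity... */
            for (int i = 0; i < CHUNK; i++)
                if (t->pid[i] == pid) return 1;
        return 0;
    }

    int main(void) {
        static int table[8] = { 3, 7, 12 };  /* static: one flat scan */
        for (int i = 0; i < 8; i++)
            if (table[i] == 12) printf("static: found pid 12\n");

        MiniTable b = { { 12 }, NULL };
        MiniTable a = { { 3, 7 }, &b };
        printf("dynamic: found pid 12 = %d\n", lookup(&a, 12));
        return 0;
    }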

  26. Useful Implementation Techniques • Indirection • “no problem in Computer Science that cannot be solved with another level of indirection” • Examples of indirection? • Tradeoff? • Reusability • keep OS binary small • e.g. bitmap routines • Brute Force • assembly code for critical routines • linear searches are ok on small tables: sort/hash code can have bugs
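
A classic OS instance of indirection is the system-call table (the sketch below is illustrative, with invented handler names): callers name a number, a table maps it to a handler, and the mapping can change without touching any caller. The extra lookup is the trade-off.

    /* Indirection sketch (illustrative): a syscall table is one level of
     * indirection between a number and the code that handles it. */
    #include <stdio.h>

    long sys_getpid(long a) { (void)a; return 1234; }
    long sys_write_stub(long a) { printf("write(%ld)\n", a); return 0; }

    /* The indirection: number -> function pointer. */
    long (*syscall_table[])(long) = { sys_getpid, sys_write_stub };

    long do_syscall(int num, long arg) {
        return syscall_table[num](arg);  /* one extra lookup = the cost */
    }

    int main(void) {
        printf("getpid -> %ld\n", do_syscall(0, 0));
        do_syscall(1, 42);
        return 0;
    }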

  27. Performance • Why does MS-DOS boot in seconds on old x86 hardware while Windows 2K takes minutes on hardware that is hundreds of times faster? • features, features, features – e.g. plug-and-play (on every reboot, inspect the status of all hardware) • Adding a feature? • ask what the price is in code size, speed, complexity, and reliability • what would happen if we DIDN’T add this feature? • Optimize useful things, not rarely used features • good enough is good enough • optimize the common case: example?

  28. Performance (cont’d) • Space-Time Trade-Offs • Memory and CPU tradeoff • e.g. store small records (4 bytes) – I want to look them up, sort them, delete them, etc. • array – all operations are linear time (except sort) • binary tree – all operations are log time (at double the storage) • ACL vs. ACM • Other tricks • Use macros to save function-call overhead • #define max(a, b) ((a) &lt; (b) ? (b) : (a)) • Keep critical regions as small as possible
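
The macro saves a function call but evaluates its arguments twice, which is exactly the kind of hidden price in reliability the previous slide warns about. A small demonstration (illustrative):

    /* The macro trick and its pitfall (illustrative): max(x++, y)
     * evaluates x++ twice when x is the larger value. */
    #include <stdio.h>

    #define max(a, b) ((a) < (b) ? (b) : (a))

    int max_fn(int a, int b) { return a < b ? b : a; }  /* safe, but a call */

    int main(void) {
        int x = 5, y = 3;
        printf("max = %d\n", max(x++, y));       /* prints 6: x++ ran twice */
        printf("x is now %d (expected 6)\n", x); /* prints 7 */
        return 0;
    }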

  29. Performance (cont’d) • Caching: e.g. the i-node cache • To look up /usr/ast/mbox requires: • 1. read the i-node for the root dir, 2. read the root dir, 3. read the i-node for /usr, 4. read the /usr dir, 5. read the i-node for /usr/ast, 6. read the /usr/ast dir • Other examples of caching?

  30. Performance (cont’d) • Other examples of caching? TLB, …
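
A toy i-node cache (illustrative; the fake disk lookup and i-node numbers are invented) showing why caching pays: the first lookup of /usr/ast/mbox touches the "disk" once per component, the repeat lookup not at all.

    /* I-node cache sketch (illustrative): cache (dir, name) -> i-node so
     * a repeat path lookup skips the disk reads. */
    #include <stdio.h>
    #include <string.h>

    #define CACHE_SIZE 16

    typedef struct { int dir_ino; char name[16]; int ino; } Entry;
    static Entry cache[CACHE_SIZE];
    static int ncached, disk_reads;

    int slow_disk_lookup(int dir_ino, const char *name) {
        disk_reads++;                    /* stands in for the disk reads */
        return dir_ino * 10 + (int)strlen(name);   /* fake i-node number */
    }

    int lookup(int dir_ino, const char *name) {
        for (int i = 0; i < ncached; i++)
            if (cache[i].dir_ino == dir_ino && !strcmp(cache[i].name, name))
                return cache[i].ino;     /* hit: no disk access */
        int ino = slow_disk_lookup(dir_ino, name);
        if (ncached < CACHE_SIZE) {
            cache[ncached].dir_ino = dir_ino;
            snprintf(cache[ncached].name, sizeof cache[ncached].name,
                     "%s", name);
            cache[ncached].ino = ino;
            ncached++;
        }
        return ino;
    }

    int path_lookup(const char *p1, const char *p2, const char *p3) {
        int ino = 2;                     /* root directory's i-node */
        ino = lookup(ino, p1);
        ino = lookup(ino, p2);
        return lookup(ino, p3);
    }

    int main(void) {
        path_lookup("usr", "ast", "mbox");
        printf("first lookup:  %d disk ops\n", disk_reads);   /* 3 */
        path_lookup("usr", "ast", "mbox");
        printf("second lookup: %d disk ops\n", disk_reads);   /* still 3 */
        return 0;
    }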

  31. Systems Mantras • Be clever at high utilization! • simple techniques work well at “low resource demands” • sender-initiated with a random destination works well under low load; • under high load it can lead to many hops • Bulk operations work better than a large number of smaller ones • Indirection, indirection, …
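
A quick demonstration of the bulk-operations mantra (assuming POSIX; writing to /dev/null keeps actual I/O out of the picture, so the difference is almost pure per-syscall overhead): the same 64 KB written as 16384 small writes versus one bulk write.

    /* "Bulk beats many small" (illustrative): same bytes, far fewer
     * system calls. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <time.h>
    #include <unistd.h>

    static long usec(struct timespec a, struct timespec b) {
        return (b.tv_sec - a.tv_sec) * 1000000 +
               (b.tv_nsec - a.tv_nsec) / 1000;
    }

    int main(void) {
        char buf[65536] = { 0 };
        int fd = open("/dev/null", O_WRONLY);
        struct timespec t0, t1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (int i = 0; i < 16384; i++) write(fd, buf, 4);
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("16384 small writes: %ld us\n", usec(t0, t1));

        clock_gettime(CLOCK_MONOTONIC, &t0);
        write(fd, buf, sizeof buf);              /* one bulk write */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        printf("1 bulk write:       %ld us\n", usec(t0, t1));

        close(fd);
        return 0;
    }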

  32. Final • When: Weds July 28 in the classroom, 12:20-2:20 • Final is based on material since last exam – closed everything • memory management (working set, thrashing, memory hogs, …) • I/O • file systems: design and implementation including distributed file systems • provenance • protection • distributed multiprocessor systems • Lecture notes and papers • 2 hours allocated – exam will be same length as in-class exams • As before it will be a mixture of short answer and longer questions

  33. Hints • I will ask a working set question • I will ask an i-node type question • I will ask a question regarding DFS

  34. That’s All Folks • Good luck on the final! • Weds July 28 in the classroom, 12:20-2:20
