Processes III


Presentation Transcript


  1. Processes III
CSE5306 Lecture Quiz 7 due at 5 PM Tuesday, 9 September 2014

  2. Common Approaches to Managing Server Clusters
• Clients see server clusters as one machine; their managers don’t.
• Traditional administration: log in to each node to monitor its operations, install software and swap components.
• IBM’s Cluster System Management tool administers up to 50 servers this way.
• Beyond that, maintaining many thousands of servers is ad hoc.
• Failures are the rule, not the exception.
• Self-management will mature someday; for now PlanetLab has the best partial solution.

  3. PlanetLab Architecture
[Figure: two donated machines; on each, a VMM hosts a node-manager Vserver and application Vservers, and a slice spans Vservers on Node 1 and Node 2.]
• PlanetLab is a single-tier server cluster run as a multi-university collaboration.
• Each organization has donated one or more of PlanetLab’s ~200 nodes (the hardware in the figure).
• A virtual machine monitor (VMM) enforces a security/reliability shield between the hardware and every independent Vserver above it.
• Each (virtual) Vserver runtime environment supports a family of similar-vintage legacy processes, which share files with each other, but not outside their Vserver.

  4. PlanetLab (continued)
• Users test PlanetLab’s distribution transparency in experiments on virtual server clusters called “slices,” which gather Vservers of different nodes (i.e., running on separate hardware); the sketch below models this.
• Management problems PlanetLab discovered:
  • Each organization should be able to control who uses its node(s).
  • Each of the organizations’ various monitoring tools assumes very specific hardware and software configurations.
  • Programs in different slices that share a common node must not interfere with each other.
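
To make the node/Vserver/slice relationships concrete, here is a minimal Python sketch; all class and field names are illustrative, not PlanetLab’s actual interfaces.

    from dataclasses import dataclass, field

    @dataclass
    class Vserver:
        name: str        # e.g., one experiment's runtime environment
        node: "Node"     # the donated machine whose VMM hosts this Vserver

    @dataclass
    class Node:
        hostname: str
        vservers: list = field(default_factory=list)  # isolated from each other by the VMM

        def create_vserver(self, name: str) -> Vserver:
            vs = Vserver(name, self)
            self.vservers.append(vs)
            return vs

    @dataclass
    class Slice:
        """A virtual server cluster: Vservers gathered from different nodes."""
        vservers: list = field(default_factory=list)

        def hosts(self) -> set:
            return {vs.node.hostname for vs in self.vservers}

    n1, n2 = Node("node1.example.edu"), Node("node2.example.edu")
    experiment = Slice([n1.create_vserver("exp-a"), n2.create_vserver("exp-a")])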

  5. PlanetLab (continued)
• Every node has one “node manager” (itself a Vserver) dedicated to creating other Vservers for its node and controlling their resources, e.g., disk space, file descriptors, network bandwidth.
• Resources are allocated to processes in strict time intervals by an “rspec” specification. Every resource has an “rcap” list of capabilities that the node manager can look up in a table (sketched below).
• Each slice belongs to an (end-user) “service provider,” who has an account on PlanetLab. To create a new slice of nodes, the provider asks a “slice creation service” (SCS) to have node managers create a Vserver for the slice and allocate its resources.
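
A hedged Python sketch of the node manager’s rspec/rcap bookkeeping, assuming a simple in-memory capability table; the names and fields are hypothetical, not PlanetLab’s real API.

    import secrets
    from dataclasses import dataclass

    @dataclass
    class Rspec:
        """Resource specification for one strict allocation interval."""
        disk_mb: int
        file_descriptors: int
        bandwidth_kbps: int
        interval_s: int

    class NodeManager:
        def __init__(self):
            self._rcap_table = {}          # rcap -> Rspec, consulted on each use

        def allocate(self, spec: Rspec) -> str:
            rcap = secrets.token_hex(8)    # opaque capability handle
            self._rcap_table[rcap] = spec
            return rcap

        def lookup(self, rcap: str) -> Rspec:
            return self._rcap_table[rcap]  # KeyError means "no such capability"

    nm = NodeManager()
    cap = nm.allocate(Rspec(disk_mb=512, file_descriptors=128,
                            bandwidth_kbps=1000, interval_s=60))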

  6. PlanetLab Management
• Only a software “slice authority” can issue the SCS call to create a slice, prompted by a (web-connected, human) certified user.
• Node owners and their “management authority” software enforce PlanetLab rules in the bottom two architectural layers.
• Conclusions:
  • Large server clusters must be managed by intermediaries with clearly delineated authority.
  • End-user service providers request slices, but organizations’ resource providers manage them.

  7. R U O K ?
1. What are some common approaches to managing server clusters?
• Log in to each node to monitor its operations, to install new software or to swap components.
• Use IBM’s Cluster System Management tool to administer up to 50 servers.
• Maintain many thousands of servers in ad hoc fashion.
• All of the above.
• None of the above.

  8. R U O K ?
2. Which of the following accurately characterize the PlanetLab experimental server cluster?
• It has only one tier.
• Each organization donated one or more of its ~200 nodes.
• A virtual machine monitor (VMM) enforces a security/reliability shield between the hardware and every independent Vserver above it.
• Each (virtual) Vserver runtime environment supports a family of similar-vintage legacy processes, which share files with each other, but not outside their Vserver.
• All of the above.

  9. R U O K ?
3. What problems did PlanetLab’s managers discover?
• Each organization should be able to control who uses its node(s).
• Each of the organizations’ various monitoring tools assumes very specific hardware and software configurations.
• Programs in different slices that share a common node must not interfere with each other.
• All of the above.
• None of the above.

  10. R U O K ?
4. Which of the following describe the administrative organization that evolved in PlanetLab for addressing its management problems?
• Every node has one “node manager” (i.e., a Vserver) dedicated to creating other Vservers for its node and controlling their resources, e.g., disk space, file descriptors, network bandwidth.
• Resources are allocated to processes in strict time intervals by an “rspec” specification, and every resource has an “rcap” list of capabilities that the node manager can look up in a table.
• Each slice belongs to an (end-user) “service provider” (a PlanetLab account holder), who creates the new slice of nodes by asking “slice creation services” (SCS) to have node managers create a Vserver for the slice and allocate its resources.
• All of the above.
• None of the above.

  11. R U O K ?
5. What did PlanetLab’s managers conclude from their experiences with its administrative organization?
• Large server clusters must be managed by intermediaries with clearly delineated authority.
• End-user service providers request slices, but organizations’ resource providers must manage them.
• Only a software “slice authority” can issue the SCS to create a slice, prompted by a (web-connected, human) certified user.
• Node owners and their “management authority” software enforce PlanetLab rules in the bottom two architectural layers.
• All of the above.

  12. PlanetLab Management (continued)
• PlanetLab’s evolved relationship-building sequence:
  • A management authority takes over control of an owner’s node.
  • The management authority provides software that enables the node to join PlanetLab.
  • In their simple “wedding,” a service provider registers itself with the management authority, trusting it to provide well-behaved nodes.
  • The certified service provider directs a slice authority to create a slice on its bound nodes.
  • The slice authority contacts the node owner to authenticate the service provider.
  • The node owner provides a “slice creation service” enabling the slice authority to create slices, essentially delegating resource management to that slice authority.
  • The management authority also delegates the slice creation task to the slice authority.

  13. PlanetLab Management (concluded)
• Recall PlanetLab’s management problems:
  • Each organization should be able to control who uses its node(s).
  • Each of the organizations’ various monitoring tools assumes very specific hardware and software configurations.
  • Programs in different slices that share a common node must not interfere with each other.
• PlanetLab’s solutions to those problems:
  • Delegate nodes in a controlled way, so that a node owner can rely upon decent and secure node management.
  • A unified approach lets each slice user see that her programs are behaving:
    • Every node is equipped with web-connected sensors of disk activity, CPU usage, etc.
    • Eventually an Astrolabe-like service could aggregate sensor readings and high-level inferences across multiple nodes.
  • Programs are protected from each other:
    • Vservers and the VMM isolate slices.
    • Vserver administrators also issue UNIX chroot commands to change each application’s file-system root (sketched below).
    • Vservers also should separate programs’ normally shared information, e.g., their processes, network addresses, memory usage.
    • The physical machine should be partitioned into many separate Linux environments.
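
The chroot step above can be sketched in a few lines of Python, assuming the process runs with root privileges; real Vservers layer further process, network-address and memory isolation on top of this.

    import os

    def confine_to_new_root(new_root: str) -> None:
        # After chroot, every absolute path this application opens resolves
        # inside new_root; files outside the jail become unreachable.
        os.chroot(new_root)
        os.chdir("/")  # drop any working directory left outside the new root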

  14. Reasons for Migrating Code
• Obviously distributed systems should migrate data, but code too…?
• Though difficult, moving a running process from a heavily loaded machine (e.g., high CPU utilization, long queues, busy communications) to a lightly loaded one improves system performance overall.
• If an LA client asks a NY servent for articles about the “United States,” and all 20 million of them are found in an LA database, the servent’s process should migrate to the LA server (see the sketch after this list).
• When gathering biographical data, a server should migrate its user dialog to the client, which returns a completed form.
• Migrate a server’s search query to many mobile agents, and they can travel from site to site gathering data and returning it.
• If a server has a proprietary new way of accessing a client’s data, it can migrate a limited-rights copy to the client.
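
The LA/NY example rests on a bandwidth argument: shipping a few hundred bytes of query code is far cheaper than shipping 20 million articles. A minimal Python sketch, with a stand-in list playing the role of the LA database:

    # The client ships this source text instead of fetching the articles.
    query_source = (
        "def matches(article):\n"
        "    return 'United States' in article['title']\n"
    )

    # --- executed on the LA server, next to the data ---
    namespace = {}
    exec(query_source, namespace)                       # reconstitute the migrated code
    la_database = [{"title": "United States history"}]  # stand-in for ~20M articles
    results = [a for a in la_database if namespace["matches"](a)]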

  15. R U O K ?
Arrange the following steps of PlanetLab’s relationship-building sequence in the proper order.
6. In their simple “wedding,” a service provider registers itself with the management authority, trusting it to provide well-behaved nodes. __
7. The node owner provides a “slice creation service” enabling the slice authority to create slices, essentially delegating resource management to that slice authority. (The owner’s management authority also delegates the slice creation task to the slice authority.) __
8. The certified service provider directs a slice authority to create a slice on its bound nodes. __
9. The slice authority contacts the node owner to authenticate the service provider. __
10. A management authority takes over control of an owner’s node and provides software that enables the node to join PlanetLab. __

  16. R U O K ?
11. Which of the following describe the solutions that PlanetLab’s managers found for their problems?
• Delegate nodes in a controlled way, so that a node owner can rely upon decent and secure node management.
• Equip every node with web-connected sensors of disk activity, CPU usage, etc.
• Protect programs from each other with Vservers and the VMM.
• All of the above.
• None of the above.

  17. R U O K ?
12. Which of the following are examples of valid reasons for migrating code?
• Moving a running process from a heavily loaded machine (e.g., excessive CPU utilization, queue length, communications) to a lightly loaded machine improves system performance overall.
• If an LA client asks a NY servent for articles about the “United States,” and all 20 million of them are found in an LA database, then migrating the servent’s process to the LA server improves system performance.
• When gathering biographical data, a server’s migrating its user dialog to the idle client could speed up completing the form.
• Migrating a server’s search query to many mobile agents could enable them to quickly travel from site to site gathering data and returning it.
• All of the above.

  18. Models for Code Migration
• Migrating code actually includes migrating its instructions and references to all of its required resources, as well as the execution state of its process, e.g., private data, stack, program counter.
• Code migration mobility (the weak/strong difference is sketched below):
  • Weak: a Java applet, starting from the beginning.
  • Strong: the process resumes from where it left off.
  • Sender-initiated: a search program sent to a database. (The database server authenticates the sender for security.)
  • Receiver-initiated: a browser downloads a Java applet. (The client must protect itself against malicious code.)
• Cloning code at the target ensures resource compatibility.
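
A hedged Python sketch of the weak/strong distinction, approximating execution state with an explicit checkpoint dict (true strong mobility would also capture the stack and program counter, which portable Python cannot do):

    import pickle

    def crawl(urls, state=None):
        # Weak mobility: state is None, so the work restarts from the beginning.
        # Strong mobility: state records where the previous run left off.
        state = state or {"next_index": 0, "visited": []}
        for i in range(state["next_index"], len(urls)):
            state["visited"].append(urls[i])  # stand-in for the real work
            state["next_index"] = i + 1
        return state

    checkpoint = pickle.dumps(crawl(["a", "b"]))                # run, then checkpoint
    resumed = crawl(["a", "b", "c"], pickle.loads(checkpoint))  # resumes at index 2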

  19. Migration and Local Resources
• Mobility of process-to-resource bindings (sketched below):
  • Binding by identifier (strong), e.g., referring to (some) local communication end points, to a file by its URL, or to an FTP server by its Internet address.
  • Binding by value (weaker), e.g., a standard C or Java library.
  • Binding by type (weakest), e.g., a local printer, monitor or temporary file store.
• Resource mobility:
  • Unattached resources, e.g., the migrating code’s data files.
  • Fastened resources, e.g., local databases, complete Web sites.
  • Fixed resources, e.g., local communication end points.
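
The three binding strengths can be read out of a few lines of Python; the resource names below are hypothetical. The weaker the binding, the easier it is to re-satisfy after migration:

    # By identifier (strong): exactly this resource must remain reachable.
    FTP_SERVER = "192.0.2.10"        # documentation address; must work after the move

    # By value (weaker): any bit-identical copy satisfies the binding.
    import json                      # a standard library every destination has

    # By type (weakest): any resource of the right kind at the destination works.
    import tempfile
    scratch = tempfile.TemporaryFile()   # rebound to the local temporary store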

  20. Migration and Local Resources (continued)
• When we can’t establish a global reference…
  • A compute-intensive image processor can’t also handle communications and image compression.
  • A process bound to a local communication end point is hard to move. (Forwarding messages to its new address, or asking all of its correspondents to change the address, is messy.)
• When we can’t copy a resource’s value…
  • Two processes bound by value to a fixed, shared local memory are inseparable.
  • Fastened resources bound by value are separable, e.g., word processors’ dictionaries.
• Unattached resources easily move with the code.
• Bindings by type are easily broken and rebound to another resource.

  21. Migration in Heterogeneous Systems
• What if a process must migrate to different hardware…?
• Pascal (1970) and Java (1995) were very portable programming languages, each with one top-level language and intermediate primitive p-codes to support a variety of machines.
• Today we migrate processes along with their (virtualized) operating systems, as well as all of their bindings to resources within a shared LAN.
• We can migrate the operating system in one of four ways:
  • Pre-copy memory pages to the new machine, and resend any that get modified just before the code migrates.
  • Stop the current virtual machine, migrate its entire memory and start the new virtual machine. (This much downtime would be unacceptable in continuous-service servers.)
  • Immediately start the new virtual machine, which pulls virtual memory pages as needed. (This could unacceptably prolong substandard performance.)
  • Pre-copy memory, then stop to copy only the pages currently in use, i.e., an optimal combination of alternatives 1 and 2 that reduces downtime to 0.2 second (sketched after this slide).
• When both machines share a LAN (i.e., a server cluster), clients can be asked to call back on another access point. Migrating bindings to files is simple when storage is provided as a separate tier (Fig. 3-12, p. 93).
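
A minimal Python simulation of the pre-copy strategy (alternative 4 above), assuming the hypervisor can report which pages the still-running VM dirtied in each round; the dirty-page sets here are supplied by hand:

    def precopy_migrate(read_page, page_ids, dirty_rounds):
        # Round 0: copy every page while the virtual machine keeps running.
        target = {pid: read_page(pid) for pid in page_ids}
        # Iterative rounds: resend only the pages dirtied during the last copy.
        for dirty in dirty_rounds:
            for pid in dirty:
                target[pid] = read_page(pid)
        # A real migration would now freeze the VM, send the final small dirty
        # set, and restart on the target -- the ~0.2 s of downtime cited above.
        return target

    memory = {1: "code", 2: "heap", 3: "stack"}
    image = precopy_migrate(memory.get, memory, [{2}, {3}])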

  22. Summary
• Multithreading enables processes to communicate faster between machines, without I/O blocking.
• Client processes…
  • Implement user interfaces, e.g., compound documents.
  • Hide the details of where a server is located, whether it is replicated, and how to recover from failures.
• Servers, more intricate than clients, may…
  • Be iterative or concurrent.
  • Implement one or more services.
  • Be stateless or stateful.
• Clustered servers…
  • Can be organized to shield apps from each other and from the hardware.
  • Have a single entry point that may hand off communications to individual servers or (eventually) be replaced by a URL.
• Migrating code between machines may mean…
  • Reducing communications bandwidth by letting clients preprocess data.
  • Letting clients download software that enhances server communications.
• Code migration problems…
  • Arise when bindings with resources also must migrate or change.
  • Arise when platforms are very different.
  • Are solved by moving memory and the operating system (virtual machine) along with the code.

  23. R U O K ?
Match the following terms with their definitions below.
13. Receiver-initiated code migration mobility __
14. Weak code migration mobility __
15. Code migration __
16. Strong code migration mobility __
17. Sender-initiated code migration mobility __
• Moving to another machine all of a program’s instructions and references to all of its required resources, as well as the execution state of its process (e.g., private data, stack, program counter).
• For example, a Java applet, which always starts from the beginning.
• A process that resumes execution from where it left off.
• For example, a search program that is sent to a database server.
• For example, a browser downloading a Java applet.

  24. R U O K ?
Match the following terms with their definitions below.
18. Binding a process to a resource by identifier __
19. Fastened resources __
20. Binding by type __
21. Unattached resources __
22. Binding by value __
• Strong (e.g., a process referring to (some) local communication end points, to a file by its URL, or to an FTP server by its Internet address).
• Weak (e.g., a standard C or Java library).
• Weakest (e.g., a local printer, monitor or temporary file store).
• For example, the migrating code’s data files.
• For example, local databases and complete Web sites.

  25. R U O K ?
23. How can an entire computing environment be migrated to heterogeneous hardware?
• Pre-copy memory pages to the new machine, and resend any that get modified just before the code migrates.
• Stop the current virtual machine, migrate its entire memory and start the new virtual machine. (This much downtime would be unacceptable in continuous-service servers.)
• Immediately start the new virtual machine, which pulls virtual memory pages as needed. (This could unacceptably prolong substandard performance.)
• Pre-copy memory, then stop to copy only the pages currently in use (i.e., an optimal combination of alternatives 1 and 2 that reduces downtime to 0.2 second).
• All of the above.

  26. R U O K ?
24. Which of the following are true?
• Multithreading enables processes to communicate faster between machines without I/O blocking.
• Server processes implement compound documents.
• Clients are classified as stateless or stateful.
• All servers should be organized to shield apps from each other and the hardware.
• All of the above.
