
Distributed Computing


Presentation Transcript


  1. Operating Systems: A Modern Perspective, Chapter 17

  2. Distributed Computing (Chapter 17) Operating Systems: A Modern Perspective, Chapter 17

  3. Distributed Computation Using Files. Part 1: f1 = open(toPart2, …); while(…){ write(f1, …); } close(f1); … f2 = open(toPart1, …); while(…){ read(f2, …); } close(f2); Part 2: f1 = open(toPart2, …); while(…){ read(f1, …); } close(f1); … f2 = open(toPart1, …); while(…){ write(f2, …); } close(f2); Operating Systems: A Modern Perspective, Chapter 17
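
To make the pattern on this slide concrete, here is a minimal sketch of Part 1's side in C. It assumes the two parts agree on the file names toPart2 and toPart1 from the slide; the message contents, and the assumption that Part 2 runs afterwards and produces toPart1, are illustrative.

    /* Part 1: send data to Part 2 through the file "toPart2",
     * then pick up Part 2's results from the file "toPart1".
     * Assumes Part 2 runs the mirror-image code (read toPart2, write toPart1). */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>

    int main(void) {
        /* Phase 1: write our portion of the shared data. */
        int f1 = open("toPart2", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (f1 < 0) { perror("open toPart2"); return 1; }
        const char *msgs[] = { "record 1\n", "record 2\n", "record 3\n" };
        for (int i = 0; i < 3; i++)
            write(f1, msgs[i], strlen(msgs[i]));
        close(f1);

        /* ... Part 2 is expected to run now and produce "toPart1" ... */

        /* Phase 2: read back whatever Part 2 produced. */
        int f2 = open("toPart1", O_RDONLY);
        if (f2 < 0) { perror("open toPart1"); return 1; }
        char buf[256];
        ssize_t n;
        while ((n = read(f2, buf, sizeof buf - 1)) > 0) {
            buf[n] = '\0';
            fputs(buf, stdout);
        }
        close(f2);
        return 0;
    }

The coarseness of this kind of whole-file sharing is what motivates the finer-grained mechanisms on the following slides.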

  4. Finer Grained Sharing • Distributed computing can be for: • Speedup (simultaneous execution) • Simplifying data management (consistency) • Last generation: programming languages (and programmers) do serial computation • Want to share global data • Speedup is a specialist’s domain • Procedure is basic unit of abstraction • Abstract data type behavior Operating Systems: A Modern Perspective, Chapter 17

  5. Finer Grained Sharing (2) • Newer computing model • Partition into processes/threads • Message passing communication • New OS & language support • Remote procedures • Remote objects with remote method invocation • Distributed process management • Shared memory • Distributed virtual memory • … but first, how to partition the computations? Operating Systems: A Modern Perspective, Chapter 17

  6. Distribute Data [Figure: a data partition splits the serial form of the code, while(…){…}, into multiple copies of that serial form, one per data stream; all data streams execute simultaneously.] Operating Systems: A Modern Perspective, Chapter 17
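
As a sketch of what a data partition means in code, the fragment below runs the same serial loop in several processes, each over its own slice of an array. The use of fork(), the slice arithmetic, and the summing work are illustrative assumptions, not part of the slide.

    /* Data partition: the same serial loop runs in NPARTS processes,
     * each over its own slice of the data. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define N 16
    #define NPARTS 4

    int main(void) {
        int data[N];
        for (int i = 0; i < N; i++) data[i] = i;

        for (int p = 0; p < NPARTS; p++) {
            if (fork() == 0) {
                /* Child p: serial form of the code, applied to slice p. */
                int lo = p * (N / NPARTS), hi = lo + N / NPARTS;
                long sum = 0;
                for (int i = lo; i < hi; i++)
                    sum += data[i];
                printf("partition %d: sum of [%d,%d) = %ld\n", p, lo, hi, sum);
                exit(0);
            }
        }
        for (int p = 0; p < NPARTS; p++)
            wait(NULL);  /* collect all partitions */
        return 0;
    }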

  7. The Parts [Figure: the serial form of the code can be divided either by a (data) partition or by a functional partition.] Operating Systems: A Modern Perspective, Chapter 17

  8. Functional Partition (2) • Software is composed from procedures • All programmers are familiar with procedural abstraction – exploit the procedure model • Allow each function to be a blob • Implement each blob as a thread/process • The OS provides a network IPC mechanism for serial use of distributed functions • TCP/IP • Messages • “Procedure call” protocol between client and server Operating Systems: A Modern Perspective, Chapter 17
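
A minimal sketch of the kind of network IPC the OS supplies here: a client sends one request message over TCP and waits for one reply. The server address 127.0.0.1:5000 and the message contents are illustrative assumptions.

    /* Client side of a simple request/reply message exchange over TCP. */
    #include <arpa/inet.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    int main(void) {
        int sock = socket(AF_INET, SOCK_STREAM, 0);
        if (sock < 0) { perror("socket"); return 1; }

        struct sockaddr_in srv = {0};
        srv.sin_family = AF_INET;
        srv.sin_port = htons(5000);                 /* illustrative port */
        inet_pton(AF_INET, "127.0.0.1", &srv.sin_addr);

        if (connect(sock, (struct sockaddr *)&srv, sizeof srv) < 0) {
            perror("connect"); return 1;
        }

        const char *request = "call: func(a1, a2)";   /* the "message" */
        write(sock, request, strlen(request));

        char reply[256];
        ssize_t n = read(sock, reply, sizeof reply - 1);
        if (n > 0) { reply[n] = '\0'; printf("reply: %s\n", reply); }

        close(sock);
        return 0;
    }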

  9. Record Sharing. Part 1: … while(…){ writeSharedRecord(…); readSharedRecord(…); } … Part 2: … while(…){ readSharedRecord(…); writeSharedRecord(…); } … Operating Systems: A Modern Perspective, Chapter 17
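
One plausible realization of readSharedRecord/writeSharedRecord is fixed-size records in a shared file protected by POSIX record locks. In the sketch below, the record layout, the file name shared.dat, and the helper lockRecord are illustrative assumptions.

    /* Shared-record access over a file, using fcntl() record locks
     * so Part 1 and Part 2 never see a half-written record. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    struct record { int key; int fields[4]; };   /* illustrative layout */

    static void lockRecord(int fd, int k, short type) {
        struct flock fl = {0};
        fl.l_type = type;                         /* F_RDLCK, F_WRLCK, F_UNLCK */
        fl.l_whence = SEEK_SET;
        fl.l_start = (off_t)k * sizeof(struct record);
        fl.l_len = sizeof(struct record);
        fcntl(fd, F_SETLKW, &fl);                 /* block until the lock is held */
    }

    void writeSharedRecord(int fd, int k, const struct record *r) {
        lockRecord(fd, k, F_WRLCK);
        pwrite(fd, r, sizeof *r, (off_t)k * sizeof *r);
        lockRecord(fd, k, F_UNLCK);
    }

    void readSharedRecord(int fd, int k, struct record *r) {
        lockRecord(fd, k, F_RDLCK);
        pread(fd, r, sizeof *r, (off_t)k * sizeof *r);
        lockRecord(fd, k, F_UNLCK);
    }

    int main(void) {
        int fd = open("shared.dat", O_RDWR | O_CREAT, 0644);
        struct record r = { .key = 7, .fields = {1, 2, 3, 4} };
        writeSharedRecord(fd, 0, &r);
        readSharedRecord(fd, 0, &r);
        printf("record %d: first field %d\n", r.key, r.fields[0]);
        close(fd);
        return 0;
    }

Across machines this only shares records if shared.dat lives on a remote file system, which is exactly the remote-file arrangement shown on the next slides.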

  10. Traditional Memory Interfaces [Figure: the process uses a primary memory interface, implemented by virtual memory over physical memory, and a secondary memory interface, implemented by file management and a device interface over the storage devices.] Operating Systems: A Modern Perspective, Chapter 17

  11. Exploiting Normal Interfaces to the Memory System [Figure: the process keeps its usual primary memory interface (virtual memory over physical memory) and file system interface (file management over storage devices, with the path into memory marked for privileged use only); remote storage can be attached at either level: a remote file client paired with a remote file server behind the file system interface, or a remote disk client paired with a remote disk server beneath file management.] Operating Systems: A Modern Perspective, Chapter 17

  12. Remote Memory • Static memory → New language • Dynamic memory → New OS interface • Low level interface • Binding across address spaces • Shared memory malloc • High level interface • Tuples • Objects [Figure: the process's remote memory interface, alongside file management, is backed by a remote memory client talking to a remote memory server.] Operating Systems: A Modern Perspective, Chapter 17

  13. Distributed Virtual Memory [Figure: the process's primary memory interface is implemented by virtual memory over physical memory and local storage devices; a remote paging client exchanges pages with a remote paging server, which has storage devices of its own.] Operating Systems: A Modern Perspective, Chapter 17

  14. Distributed Virtual Memory (2) [Figure: Process 1 and Process 2 each map a virtual address space into a primary memory space backed by local secondary memory; a server provides remote secondary memory that both mappings can share.] Operating Systems: A Modern Perspective, Chapter 17

  15. Remote Procedure Call [Figure, two panels with identical source code, labeled Conventional Procedure Call and Remote Procedure Call: int main(…) { … func(a1, a2, …, an); … } void func(p1, p2, …, pn) { … } The call site looks the same in both; only where func executes differs.] Operating Systems: A Modern Perspective, Chapter 17

  16. Remote Procedure Call.
  Original program: int main(…) { … func(a1, a2, …, an); … } void func(p1, p2, …, pn) { … }
  Client stub (replaces the local call): … pack(a1, msg); pack(a2, msg); … pack(an, msg); send(rpcServer, msg); // waiting … result = receive(rpcServer); …
  Server: // Initialize the server while(TRUE) { msg = receive(anyClient); unpack(msg, t1); unpack(msg, t2); … unpack(msg, tn); func(t1, t2, …, tn); pack(a1, rtnMsg); pack(a2, rtnMsg); … pack(an, rtnMsg); send(theClient, rtnMsg); }
  Operating Systems: A Modern Perspective, Chapter 17
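
A minimal sketch of what pack/unpack might amount to, assuming the arguments are fixed-size integers marshalled into a flat byte buffer in an agreed order. The struct msg layout and helper names are illustrative, and a real RPC layer would also tag the message with a procedure identifier and use an external data representation (see the XDR slides below).

    /* Marshalling two int arguments into a request message and back out. */
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    struct msg { size_t len; unsigned char buf[256]; };   /* illustrative */

    static void pack_int(int32_t v, struct msg *m) {
        memcpy(m->buf + m->len, &v, sizeof v);
        m->len += sizeof v;
    }

    static int32_t unpack_int(struct msg *m, size_t *off) {
        int32_t v;
        memcpy(&v, m->buf + *off, sizeof v);
        *off += sizeof v;
        return v;
    }

    /* The remote procedure itself, as it would run on the server. */
    static int32_t func(int32_t p1, int32_t p2) { return p1 + p2; }

    int main(void) {
        /* Client stub: pack(a1, msg); pack(a2, msg); then "send" it. */
        struct msg request = {0};
        pack_int(40, &request);
        pack_int(2, &request);

        /* Server loop body: unpack, call func, pack the result. */
        size_t off = 0;
        int32_t t1 = unpack_int(&request, &off);
        int32_t t2 = unpack_int(&request, &off);
        struct msg reply = {0};
        pack_int(func(t1, t2), &reply);

        /* Client: receive the reply and unpack the result. */
        off = 0;
        printf("result = %d\n", unpack_int(&reply, &off));
        return 0;
    }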

  17. Implementing RPC • Syntax of an RPC should look as much like a local procedure call as possible • Semantics are impossible to duplicate, but they should also be as close as possible • The remote procedure’s execution environment will not be the same as a local procedure’s environment • Global variables • Call-by-reference • Side effects • Environment variables Operating Systems: A Modern Perspective, Chapter 17

  18. Implementing RPC.
  theClient: int main(…) { … localF(…); … remoteF(…); … } void localF(…) { … return; } The call to remoteF(…) is redirected to the clientStub: lookup(remote); /* first time only */ pack(…); send(rpcServer, msg); receive(rpcServer); unpack(…); return;
  rpcServer: main: register(remoteF); /* first time only */ while(1) { receive(msg); unpack(msg); remoteF(…); pack(rtnMsg); send(theClient, rtnMsg); } void remoteF(…) { … return; }
  Name Server: void register(…) { … } void lookup(…) { … }
  Operating Systems: A Modern Perspective, Chapter 17

  19. Compiling an RPC • callRemote(remoteF, …); • remoteF(…); • Compile time • Link time • Dynamic binding Operating Systems: A Modern Perspective, Chapter 17

  20. Sun XDR [Figure: an XDR spec drives XDR conversion in the client and in the RPC server; data crosses the network in the XDR transmit format.] Operating Systems: A Modern Perspective, Chapter 17
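
A minimal sketch of XDR conversion on both sides of the wire, using the classic SunRPC memory-stream routines xdrmem_create and xdr_int. It assumes the SunRPC/XDR headers are available (on modern Linux typically via libtirpc), and the 64-byte buffer stands in for the network message.

    /* Encoding an int into XDR's machine-independent form and decoding it back. */
    #include <rpc/xdr.h>
    #include <stdio.h>

    int main(void) {
        char wire[64];                 /* stands in for the network message */
        XDR enc, dec;
        int value = 42, copy = 0;

        /* Client side: XDR conversion before transmit. */
        xdrmem_create(&enc, wire, sizeof wire, XDR_ENCODE);
        xdr_int(&enc, &value);

        /* Server side: XDR conversion after receipt. */
        xdrmem_create(&dec, wire, sizeof wire, XDR_DECODE);
        xdr_int(&dec, &copy);

        printf("decoded %d\n", copy);  /* prints 42 regardless of byte order */
        return 0;
    }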

  21. Sun rpcgen Files [Figure: rpcgen reads rproc.x and generates rproc.h, rproc_clnt.c, and rproc_svc.c; the C compiler builds the RPC client from main.c plus rproc_clnt.c, and the RPC server from rproc.c plus rproc_svc.c.] Operating Systems: A Modern Perspective, Chapter 17
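
For illustration, rproc.x might contain an interface description like the one below, written in rpcgen's RPC language (a C-like interface notation). The procedure name, argument type, and program/version numbers are assumptions, not taken from the slide.

    /* rproc.x -- hypothetical interface definition fed to rpcgen.
     * rpcgen generates rproc.h plus the client stub (rproc_clnt.c)
     * and server skeleton (rproc_svc.c) from this description. */
    program RPROC_PROG {
        version RPROC_VERS {
            int RPROC(int) = 1;        /* procedure number 1 */
        } = 1;                         /* version number */
    } = 0x20000001;                    /* program number (user-defined range) */

main.c would then invoke the generated client stub through a CLIENT handle (conventionally obtained with clnt_create), while rproc.c supplies the procedure body that rproc_svc.c dispatches to.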

  22. [Figure: remote object interfaces (e.g. CORBA, DCOM, SOAP, …), two panels. (a) Single Interface to Objects: the process uses one distributed object interface in front of its local objects and a remote object client, which talks to a remote object server. (b) Interface to Local and Remote Objects: the process uses a local object interface for local objects and a separate remote object interface, a distinction made for performance, in front of the remote object client/server pair.] Operating Systems: A Modern Perspective, Chapter 17

  23. The CORBA Approach [Figure: the client reaches the ORB core through the ORB interface, IDL stubs, and dynamic invocation stubs (supported by an interface repository); the object implementation is reached from the ORB core through the object adaptor, IDL skeletons, and dynamic skeletons, plus its own ORB interface.] Operating Systems: A Modern Perspective, Chapter 17

  24. Supporting the Computation • Each blob might be a process, thread, or object • Blobs should be able to run on distinct, interconnected machines • OS must provide mechanisms for: • Process Management • Control • Scheduling • Synchronization • IPC • Memory Management • Shared memory • File Management – remote files • Distributed OS or cooperating Network OSes? Operating Systems: A Modern Perspective, Chapter 17

  25. Control • Remote process/thread create/destroy • Managing descriptors • Deadlock Operating Systems: A Modern Perspective, Chapter 17

  26. Scheduling • Threads and processes • Explicit scheduling • Transparent scheduling • Migration & load balancing • Objects • Active vs passive • Address spaces Operating Systems: A Modern Perspective, Chapter 17

  27. Migration and Load Balancing [Figure: threads/processes (each marked 'p') spread across Machine A, Machine B, and Machine C; (a) before balancing the machines hold uneven numbers of them, (b) after balancing the load is spread evenly.] Operating Systems: A Modern Perspective, Chapter 17

  28. Synchronization • Distributed synchronization • No shared memory → no semaphores • New approaches use logical clocks & event ordering • Transactions • Became a mature technology in DBMS • Multiple operations with a commit or abort • Concurrency control • Two-phase locking Operating Systems: A Modern Perspective, Chapter 17
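
A minimal sketch of the two-phase locking discipline within a single address space, using pthread mutexes as the locks; a distributed transaction would instead obtain the same locks from a lock manager via messages. The two-record "transfer" transaction and the fixed lock order (which avoids deadlock) are illustrative assumptions.

    /* Two-phase locking: acquire every lock the transaction needs
     * (growing phase), do all updates, then release (shrinking phase). */
    #include <pthread.h>
    #include <stdio.h>

    struct record { pthread_mutex_t lock; int value; };

    struct record a = { PTHREAD_MUTEX_INITIALIZER, 0 };
    struct record b = { PTHREAD_MUTEX_INITIALIZER, 0 };

    static void *transfer(void *arg) {
        int amount = *(int *)arg;

        /* Growing phase: take locks in a fixed global order (a before b). */
        pthread_mutex_lock(&a.lock);
        pthread_mutex_lock(&b.lock);

        /* The transaction's updates, all made while both locks are held. */
        a.value -= amount;
        b.value += amount;

        /* Shrinking phase: "commit" by releasing everything. */
        pthread_mutex_unlock(&b.lock);
        pthread_mutex_unlock(&a.lock);
        return NULL;
    }

    int main(void) {
        pthread_t t1, t2;
        int x = 10, y = 25;
        pthread_create(&t1, NULL, transfer, &x);
        pthread_create(&t2, NULL, transfer, &y);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("a=%d b=%d\n", a.value, b.value);   /* a=-35 b=35 */
        return 0;
    }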

  29. Updating a Multiple Field Record [Figure, two columns: Process pi and Process pj each issue four field updates for the same record k (send(server, update, k, 3); send(server, update, k, 5); send(server, update, k, 6); send(server, update, k, 8); send(server, update, k, 2); send(server, update, k, 4); send(server, update, k, 8); send(server, update, k, 6);), so the server sees the two update streams in an unpredictable interleaving.] Operating Systems: A Modern Perspective, Chapter 17

  30. Explicit Event Ordering • Alternative technique of growing importance in network systems • Rely on knowing the relative order of occurrence of every event • (occurrence of y in pj) < (occurrence of x in pi) • Then can synchronize by explicitly specifying each relation (when it is important) advance(eventCount): Announces the occurrence of an event related to eventCount, causing it to be incremented by 1 await(eventCount, v): Causes process to block as long as eventCount < v. Operating Systems: A Modern Perspective, Chapter 17
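
A minimal sketch of advance and await for a single address space, built on a pthread mutex and condition variable; a distributed eventcount would propagate the count in messages instead of relying on shared memory. The eventcount struct, the ec_init helper, and the small demo in main are illustrative assumptions.

    /* Eventcount sketch: advance() announces one more occurrence of the
     * event, await() blocks until the count has reached v. */
    #include <pthread.h>
    #include <stdio.h>

    typedef struct {
        long count;
        pthread_mutex_t m;
        pthread_cond_t c;
    } eventcount;

    void ec_init(eventcount *ec) {
        ec->count = 0;
        pthread_mutex_init(&ec->m, NULL);
        pthread_cond_init(&ec->c, NULL);
    }

    void advance(eventcount *ec) {
        pthread_mutex_lock(&ec->m);
        ec->count++;                       /* the event has occurred once more */
        pthread_cond_broadcast(&ec->c);    /* wake anyone awaiting it */
        pthread_mutex_unlock(&ec->m);
    }

    void await(eventcount *ec, long v) {
        pthread_mutex_lock(&ec->m);
        while (ec->count < v)              /* block as long as eventCount < v */
            pthread_cond_wait(&ec->c, &ec->m);
        pthread_mutex_unlock(&ec->m);
    }

    static eventcount in;

    static void *worker(void *arg) {
        (void)arg;
        for (int i = 1; i <= 3; i++)
            advance(&in);                  /* announce three events */
        return NULL;
    }

    int main(void) {
        ec_init(&in);
        pthread_t t;
        pthread_create(&t, NULL, worker, NULL);
        await(&in, 3);                     /* proceed only after all three */
        printf("saw 3 events\n");
        pthread_join(t, NULL);
        return 0;
    }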

  31. Producer-Consumer Solution Using Precedence
  producer() { /* i establishes local order */ int i = 1; while(TRUE) { /* Stay N–1 ahead of consumer */ await(out, i–N); produce(buffer[(i–1) mod N]); /* Signal a full buffer */ advance(in); i = i+1; } }
  consumer() { /* i establishes local order */ int i = 1; while(TRUE) { /* Stay N–1 behind producer */ await(in, i); consume(buffer[(i–1) mod N]); /* Signal an empty buffer */ advance(out); i = i+1; } }
  eventcount in=0, out=0; struct buffer[N]; fork(producer, 0); fork(consumer, 0);
  Operating Systems: A Modern Perspective, Chapter 17

  32. More on EventCounts • Notice that advance and await need not be uninterruptible • There is no requirement for shared memory • For full use of this mechanism, actually need to extend it a bit with a sequencer • Underlying theory is also used to implement “virtual global clocks” in a network • Emerging as a preferred synchronization mechanism on networks Operating Systems: A Modern Perspective, Chapter 17
