
Client/Server Distributed Systems



Presentation Transcript


  1. Client/Server Distributed Systems 240-322, Semester 1, 2005-2006 • Objectives • explain the general meaning of distributed programming beyond client/server • look at the history of distributed programming 2. Distributed Programming Concepts

  2. Overview 1. Definition 2. From Parallel to Distributed 3. Forms of Communication 4. Data Distribution 5. Algorithmic Distribution 6. Granularity 7. Load Balancing 8. Brief History of Distributed Programming

  3. 1. Definition • Distributed programming is the spreading of a computational task across several programs, processes, or processors. • Includes parallel (concurrent) and networked programming. • Definition is a bit vague.

  4. 2. From Parallel to Distributed • Most parallel languages talk about processes: • these can be on different processors or on different computers • The implementor may choose to add language features to explicitly say where a process should run. • May also choose to address network issues (bandwidth, failure, etc.) at the language level. continued

  5. Often resources required by programs are distributed, which means that the programs must be distributed. continued

  6. (diagram slide) continued

  7. Network Transparency • Most users want networks to be as transparent (invisible) as possible: • users do not want to care which machine is used to store their files • they do not want to know where a process is running

  8. 3. Forms of Communication These can be supported on top of shared-memory or distributed-memory platforms. • 1-to-1 communication • 1-to-many communication continued

  9. • many-to-1 communication • many-to-many communication
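The communication forms above can be sketched on a shared-memory platform using threads and a queue. This is an illustrative example, not from the slides: many-to-1 is shown explicitly (several senders, one receiver); the other forms are the same pattern with the sender/receiver counts swapped.

```python
# Illustrative sketch: many-to-1 communication with threads and a shared
# queue (a shared-memory platform). Names here are assumptions.
import queue
import threading

inbox = queue.Queue()  # one receiver's mailbox, shared by many senders

def sender(worker_id):
    # each sender performs a 1-to-1 put into the single receiver's queue;
    # taken together the senders form many-to-1 communication
    inbox.put(f"message from worker {worker_id}")

threads = [threading.Thread(target=sender, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()

# the single receiver drains messages from all senders (many-to-1)
received = [inbox.get() for _ in range(3)]
print(len(received))  # 3 messages collected
```

1-to-many is the mirror image (one sender, several receiver queues), and many-to-many combines both.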

  10. 4. Data Distribution • Divide input data between identical separate processes. • Examples: • database search • edge detection in an image • builders making a room with bricks

  11. Boss-Workers • diagram: the boss sends part of the database to each worker • the workers (all database search engines) send their answers back to the boss

  12. Workers often need to talk to one another • diagram: the boss sends bricks to the workers (all builders) • workers reply "done" • workers talk among themselves

  13. Boss - Eager Workers • diagram: the workers (all builders) ask the boss for bricks • the boss sends bricks • workers talk among themselves

  14. Things to Note • The code is duplicated in every process. • The maximum number of processes depends on the size of the task and the difficulty of dividing the data. • Talking can be very hard to code. • Talking is usually called communication, synchronisation, or cooperation. continued

  15. Communication is almost always implemented using message passing. • How are processes assigned to processors?

  16. 5. Algorithmic Distribution • Divide the algorithm into parallel parts / processes • e.g. UNIX pipes • diagram: dirty plates on table → collector → dirty plates → washer → clean wet plates → drier (wipe) → dry plates → stacker → plates in cupboard
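The plate pipeline above can be sketched as chained generators: each stage "wakes up" when the previous stage passes it data, much like processes connected by UNIX pipes. Stage names follow the diagram; the plate strings are illustrative.

```python
# Sketch of the plate pipeline (collector | washer | drier | stacker)
# as chained generators, standing in for UNIX pipe processes.
def collector(table):
    for plate in table:                       # collect dirty plates
        yield plate

def washer(plates):
    for plate in plates:                      # dirty -> clean wet
        yield plate.replace("dirty", "clean wet")

def drier(plates):
    for plate in plates:                      # wipe: clean wet -> clean dry
        yield plate.replace("wet ", "")

def stacker(plates):
    return list(plates)                       # plates end up in the cupboard

table = ["dirty plate"] * 3
cupboard = stacker(drier(washer(collector(table))))
print(cupboard)  # -> ['clean plate', 'clean plate', 'clean plate']
```

Each stage only talks to its neighbour, which is why the next slide calls this kind of talking "simple".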

  17. Things to Note • Talking is simple: pass data to next process which ‘wakes up’ that process. • Talking becomes harder to code if there are loops. • How to assign processes to processors?

  18. Several Workers per Sub-task • Use both algorithmic and data distribution. • diagram: one collector and one washer feed several drier processes, which feed one stacker. • Problems: how to divide the data? how to combine the data?

  19. Parallelise Separate Sub-tasks • Build a house: bricklaying, then plumbing and electrical wiring (in parallel), then paint • as an expression: b | (pl & e) | pt
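The expression b | (pl & e) | pt from the slide can be sketched with a thread pool: bricklaying runs first, plumbing and electrical wiring run concurrently, and paint starts only after both finish. The task function is a stand-in for real work.

```python
# Sketch of "b | (pl & e) | pt": sequential stages, with the middle
# stage's two sub-tasks run in parallel. Task bodies are placeholders.
from concurrent.futures import ThreadPoolExecutor

done = []                                    # records completion order

def task(name):
    done.append(name)                        # stand-in for real work

task("bricklaying")                          # b: must finish first
with ThreadPoolExecutor() as pool:           # (pl & e): run concurrently
    futures = [pool.submit(task, n)
               for n in ("plumbing", "electrical wiring")]
    for f in futures:
        f.result()                           # wait for both sub-tasks
task("paint")                                # pt: only after both finish

print(done[0], "->", done[-1])  # bricklaying -> paint
```

The ordering guarantee comes from waiting on both futures before painting; the order of plumbing vs. wiring is deliberately unspecified.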

  20. 6. Granularity Amount of data handled by a process: • Coarse grained: lots of data per process • e.g. UNIX processes • Fine grained: small amounts of data per process • e.g. UNIX threads, Java threads

  21. 7. Load Balancing • How to assign processes to processors? • Want to 'even out' work so that each processor does about the same amount of work. • But: • different processors have different capabilities • must consider the cost of moving a process to a processor (e.g. network speed, load)
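The idea above can be sketched as a toy greedy scheduler: assign each process to the processor that would finish it soonest, weighting by processor capability. The costs and capability figures are made-up illustrations, and real schedulers must also account for the migration costs the slide mentions.

```python
# Toy greedy load balancing: place each process on the processor with the
# least (capability-weighted) load so far. Numbers are illustrative.
def assign(process_costs, capabilities):
    loads = [0.0] * len(capabilities)        # time-so-far on each processor
    placement = []
    for cost in process_costs:
        # effective finish time on each processor: current load + cost/speed
        best = min(range(len(loads)),
                   key=lambda p: loads[p] + cost / capabilities[p])
        loads[best] += cost / capabilities[best]
        placement.append(best)
    return placement, loads

# processor 1 is twice as fast as processor 0 (different capabilities)
placement, loads = assign([4, 3, 2, 2, 1], capabilities=[1.0, 2.0])
print(placement, loads)  # -> [1, 0, 1, 1, 0] [4.0, 4.0]
```

Here the greedy rule happens to even the work out exactly (both processors finish at time 4.0); in general greedy placement is only an approximation.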

  22. 8. Brief History of (UNIX) Distributed Programming • 1970’s: UNIX was a multi-user, time-sharing OS • &, pipes • interprocess communication (IPC) on a single processor • mid 1980’s: System V UNIX • added extra IPC mechanisms: shared memory, messages, queues, etc. continued

  23. late 1970's to mid 1980’s: ARPA • US Advanced Research Projects Agency • funded research that produced TCP/IP, sockets • added to BSD Unix 4.2 • mid-late 1980’s: utilities developed • telnet, ftp • r* utilities: rlogin, rcp, rsh • client-server model based on sockets continued

  24. 1986: System V UNIX • released TLI (Transport Layer Interface), a set of socket-like libraries that support OSI • not widely used • late 1980's: Sun Microsystems • NFS (Network File System) • RPC (Remote Procedure Call) • NIS (Network Information Services) continued

  25. early 1990’s • POSIX threads (light-weight processes) • Web client-server model based on TCP/IP • mid 1990's: Java • Java threads • Java Remote Method Invocation (RMI) • CORBA continued

  26. late 1990's / early 2000's • J2EE, .NET • peer-to-peer (P2P) • Napster, Gnutella, etc. • JXTA
