
Design and Implementation of a Server Cluster Backend for Thin Client Computing



  1. Design and Implementation of a Server Cluster Backend for Thin Client Computing MTP final stage presentation By Khurange Ashish Govind Under the guidance of Prof. Om P. Damani

  2. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  3. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  4. 1.1 Thin Client System • Consists of multiple heterogeneous workstations (diskless workstations, old PCs, new PCs) connected to a single server

  5. 1.1 Thin Client System • Hardware requirements of a Thin Client: a keyboard, monitor, mouse, and some computation power

  6. 1.1 Thin Client System • The server performs all application processing and stores all of the user’s data

  7. 1.1 Thin Client System • The Thin Client sends keyboard and mouse input to the server; the server sends back screen updates • Thin Client and server communicate using a display protocol such as X or RDP

  8. 1.2 Advantages of Thin Client System • Terminals are less expensive and maintenance free • Reduced cost of security • Reduced cost of data backup • Reduced cost of software installation

  9. 1.3 Limitations of Thin Client System • Server is a single point of failure • System is not scalable • Adding more independent servers helps with scalability but not with high availability

  10. 1.4 Windows Solution • Terminals cost $500 (without monitor) • Needs third party load balancing solution • Needs a separate file server

  11. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  12. 2.1 LTSP • Linux Terminal Server Project • Thin Clients need the following services to run: • DHCP • TFTP • NFS • X protocol and a display manager (XDM, GDM, KDM)

  13. Boot sequence: the Thin Client sends a DHCP request and receives a reply, downloads the OS over TFTP, mounts its root filesystem over NFS, and then sends keyboard and mouse input to the XDM server and receives graphical output
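The boot sequence on this slide can be sketched as a short Python walk-through. The service names and step order come from the slide; the function and MAC string are illustrative only:

```python
# Hypothetical sketch of the LTSP thin-client boot sequence.
BOOT_STEPS = [
    ("DHCP", "request IP address and boot parameters"),
    ("TFTP", "download the OS image"),
    ("NFS",  "mount the root filesystem"),
    ("XDM",  "start X session: send input, receive screen updates"),
]

def boot(client_mac):
    """Walk a thin client through each boot stage in order."""
    log = []
    for service, action in BOOT_STEPS:
        log.append(f"{client_mac}: {service} -> {action}")
    return log
```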

  14. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  15. 3. Design Issues • Provide highly available DHCP, TFTP, NFS and XDM services • Thin Client’s binding with XDM server must be dynamic • Software for finding cluster’s status, load balancing and managing cluster • Highly available filesystem

  16. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  17. 4 High Availability of Various Services • In this section we discuss the HA of the following services: • DHCP • TFTP • NFS • XDM

  18. 4.1 DHCP • More than one DHCP server can exist on the same subnet • IP addresses are offered to clients on a ‘lease’ basis • The DHCP protocol has three phases: • INIT • RENEW • REBIND

  19. 4.1.1 HA of DHCP • Multiple independent DHCP servers cannot provide HA • For HA, DHCP servers need to be arranged using either: • the Failover protocol • Static binding

  20. 4.1.2 Failover Protocol • A pair of DHCP servers (primary & secondary) • The lease database is synchronized between them • In normal mode only the primary server works, updating the secondary with its lease information • When the primary goes down, the secondary takes over: • Assigning IP addresses to new Thin Clients • Renewing existing leases
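The failover idea can be illustrated with a minimal Python sketch, assuming the primary mirrors every lease to its partner. Class and method names are hypothetical, not taken from the DHCP failover draft:

```python
# Hypothetical sketch of DHCP failover: the primary serves leases and
# mirrors each one to the secondary, which takes over if the primary fails.
class DhcpServer:
    def __init__(self, name):
        self.name = name
        self.leases = {}   # MAC address -> IP address
        self.peer = None   # failover partner

    def grant_lease(self, mac, ip):
        self.leases[mac] = ip
        if self.peer is not None:   # synchronize lease to the partner
            self.peer.leases[mac] = ip

primary = DhcpServer("primary")
secondary = DhcpServer("secondary")
primary.peer = secondary

primary.grant_lease("00:11:22:33:44:55", "10.0.0.10")
# If the primary fails, the secondary already knows the lease and can renew it.
```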

  21. 4.1.3 Static Binding • The DHCP server’s static IP allocation method is used • All servers have the same binding of Thin Client MAC addresses to IP addresses • A Thin Client can continue as long as at least one DHCP server is running

  22. 4.2 TFTP • The TFTP service has no state or bindings like DHCP • Running multiple independent TFTP servers provides HA • A TFTP server runs on every node on which a DHCP server runs • Each DHCP server advertises its own address as the TFTP server

  23. 4.3 XDM • An XDM server runs on all nodes in the cluster • Each node provides XDM service to users who have a home directory on that node • Static binding of a Thin Client to an XDM server will not work • Dynamic binding is based on the username
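A minimal sketch of this username-based binding, assuming a table of which servers hold each user's home directory (all user and server names below are hypothetical):

```python
# Hypothetical mapping of usernames to the servers hosting their home dirs.
HOME_SERVERS = {
    "alice": ["server1", "server4"],
    "bob":   ["server2", "server5"],
}

def pick_xdm_server(username, alive):
    """Bind the session to a live server that holds the user's home."""
    for server in HOME_SERVERS.get(username, []):
        if server in alive:
            return server
    return None   # no server with this user's home is up

assert pick_xdm_server("alice", {"server4", "server5"}) == "server4"
assert pick_xdm_server("bob", {"server1"}) is None
```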

  24. 4.4 NFS • Running multiple independent NFS servers provides HA • To support this, an NFS server runs on all the nodes in the cluster • Combining NFS and the log-in server on one node leaves the Thin Client with a single dependency, while separating them creates two separate dependencies

  25. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  26. 5 Software Components of The System • Main software components of the system • Health Status Service • Load Balancer • Cluster Manager

  27. 5.1 Health Status Service • Consists of two parts: • Health Status Server • Health Status Client • The Health Status Server runs on all servers and provides information on server health • The Health Status Client is used by the Load Balancer to find the load on servers

  28. 5.2 Load Balancer • Accepts a username from the Thin Client • Maintains a database mapping usernames to groups of servers • Replies with the IP of the least loaded server hosting the user’s home directory • The cluster has two independent dependencies: DHCP and the Load Balancer
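The Load Balancer's selection step can be sketched as follows, assuming load readings come from the Health Status Service and `None` marks a server that is down. The data and names are illustrative:

```python
# Hypothetical username -> home-server group table kept by the Load Balancer.
USER_GROUPS = {"alice": ["server1", "server4"]}

def least_loaded(username, loads):
    """Pick the least loaded live server in the user's group.
    `loads`: server -> reading from the Health Status Service (None = down)."""
    candidates = [(loads[s], s) for s in USER_GROUPS[username]
                  if loads.get(s) is not None]
    if not candidates:
        return None
    return min(candidates)[1]

assert least_loaded("alice", {"server1": 0.8, "server4": 0.2}) == "server4"
assert least_loaded("alice", {"server1": 0.8, "server4": None}) == "server1"
```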

  29. 5.3 Cluster Manager • A tool provided to the system administrator to manage the cluster • Makes sure that all the nodes in the cluster have the latest information about the system • Provides services to: • Add / remove a node • Add / remove a user

  30. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  31. 6.1 Arrangement of Components in The Cluster • Every node runs an XDM server, an NFS server, and a Health Status server • A subset of the nodes (two in the slide’s figure) additionally runs DHCP, TFTP, and the Load Balancer

  32. 6.2 Working of The System • [Figure: six servers and a Thin Client; numbered arrows (1–5) trace user 'A''s login sequence. Marked nodes show which servers hold user 'A''s home directory and which run the Load Balancer.]

  33. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  34. 7.1 Filesystem Issues • Keep hardware cost as low as possible • Use the servers’ storage to store users’ data • Replicate each user’s filesystem on multiple servers • Use open source tools to build the filesystem

  35. 7.2 DRBD • A kernel module that maintains a real-time mirror of a block device on a remote machine • Nodes are paired together; one acts as primary and the other as secondary • Read/write access to the replicated block device is allowed on the primary only

  36. 7.3 Heartbeat • Detects the liveness of a node • Runs between a pair of nodes • The primary node runs all services and holds the cluster’s virtual IP • When the primary goes down, the secondary takes over the virtual IP and starts all services
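The takeover rule can be sketched in Python, assuming the secondary declares the primary dead after `deadtime` seconds without a heartbeat (the value and names are illustrative, not Heartbeat's defaults):

```python
# Hypothetical sketch of Heartbeat-style liveness detection and VIP takeover.
DEADTIME = 10  # seconds of silence before declaring the primary dead

def vip_owner(last_heartbeat, now, deadtime=DEADTIME):
    """Decide which node should hold the cluster's virtual IP:
    the secondary claims it once the primary has been silent too long."""
    if now - last_heartbeat > deadtime:
        return "secondary"   # primary presumed dead: claim VIP, start services
    return "primary"

assert vip_owner(last_heartbeat=100, now=105) == "primary"
assert vip_owner(last_heartbeat=100, now=120) == "secondary"
```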

  37. 7.4 HA-NFS • [Figure: host-a (10.129.22.12) and host-b (10.129.22.13) exchange DRBD messages and Heartbeat; together they present a virtual NFS server at 10.129.22.14, with the DRBD primary role held by one node at a time.]

  38. 7.5 Cluster Filesystem • Users are divided into mutually exclusive groups • Each user group’s files are replicated between a separate pair of servers • Each user group can tolerate the failure of one server
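The group-to-pair layout can be sketched as follows, with hypothetical group and server names, to show why each group survives exactly one server failure:

```python
# Hypothetical assignment of user groups to dedicated replication pairs.
GROUP_PAIRS = {
    "group1": ("server1", "server2"),
    "group2": ("server3", "server4"),
}

def available(group, failed):
    """A group's data stays reachable while at least one of its pair is up."""
    pair = GROUP_PAIRS[group]
    return any(server not in failed for server in pair)

assert available("group1", failed={"server1"})                 # one failure: OK
assert not available("group1", failed={"server1", "server2"})  # both down
assert available("group2", failed={"server1", "server2"})      # other pair unaffected
```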

  39. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  40. 8 Future Work • Scalable filesystem • DRBD solution • Heartbeat Version 2 • Changes to DRBD • Coda filesystem • Filesystem performance measurement • Server sizing

  41. Talk outline • Introduction • Working of Thin Clients • Design issues • High availability of various services • Software components of the system • Working of the system • Filesystem for the cluster • Future work • Conclusion

  42. 9.1 Contribution • A scalable and highly available cluster design for Thin Client computing • HA solutions for the DHCP, TFTP, and NFS services • A filesystem built using open source tools: DRBD and Heartbeat • Health Status Service, Load Balancer, and Cluster Manager developed from scratch

  43. 9.2 Cluster Characteristics • Generic design • No special hardware required • DHCP can tolerate ‘n-1’ failures • The Load Balancer also tolerates ‘n-1’ failures • Each user group tolerates one server failure

  44. References • DHCP Failover Protocol, IETF Internet Draft • Linux Terminal Server Project, http://www.ltsp.org • DRBD, http://www.drbd.org • Heartbeat project, http://linuxha.org/Heartbeat

  45. Thank You !!!
