
Server Selection and Topology Control for Multi-party Video Conferences

Shuopeng Zhang, University of Waterloo, Canada; Di Niu, Yaochen Hu, University of Alberta, Canada; Fangming Liu, Huazhong University of Science and Technology, China.


Presentation Transcript


  1. Server Selection and Topology Control for Multi-party Video Conferences. Shuopeng Zhang, University of Waterloo, Canada; Di Niu, Yaochen Hu, University of Alberta, Canada; Fangming Liu, Huazhong University of Science and Technology, China

  2. Popular multi-party conferencing applications

  3. Characteristics of multi-party conferencing applications • Every terminal needs to transfer its own content to all other terminals • Requirement for high throughput capacity • Stringent requirement for small end-to-end delays

  4. Max Throughput Server-Based Solution [Figure: terminals (T) connected directly vs. through a server (S); per-stream upload rates of U/3 without a server vs. U with a server]

  5. Min Delay, Max Throughput Server-Based Solution • Network topology and server location • Optimize the grouping and server locations in the Multi-Server Topology

  6. A Multi-Server Topology • Each client is only connected to one server and no other host. • Each client only sends one copy of its data stream to its own server.

  7. A Multi-Server Topology • Servers form a full mesh • If server S receives data from a client C, the data is forwarded to all other clients and servers connected to S • If server S receives data from other servers, the data is forwarded only to the clients directly connected to S

  8. Example of Multi-Server Topology [Figure: three example topologies with mean end-to-end delays of 5.39, 4.83, and 4.13]

  9. What’s the Problem? • Given the measured pairwise pings of the clients and a utilizable server pool (a large network of CDN nodes) • Find the network topology that minimizes the sum of end-to-end delays between clients • Determine the grouping and server locations in the Multi-Server Topology
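As a reference for the objective stated above, here is a minimal sketch of computing the mean end-to-end delay for a given assignment of clients to servers. The function and variable names, and the sender-server-server-receiver delay model, are assumptions inferred from the topology description on the earlier slides, not code from the paper:

```python
def mean_e2e_delay(d_cs, d_ss, server_of):
    """Mean end-to-end delay over all ordered client pairs.

    d_cs[c][s]: one-way delay from client c to server s
    d_ss[s][t]: one-way delay between servers s and t (0 on the diagonal)
    server_of[c]: index of the server that client c is attached to
    """
    n = len(server_of)
    total, pairs = 0.0, 0
    for a in range(n):
        for b in range(n):
            if a == b:
                continue
            sa, sb = server_of[a], server_of[b]
            # path: sender -> its server -> receiver's server -> receiver
            total += d_cs[a][sa] + d_ss[sa][sb] + d_cs[b][sb]
            pairs += 1
    return total / pairs
```

When both clients share a server, the server-to-server leg is the zero diagonal entry, so the same formula covers intra-group and inter-group pairs.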

  10. Outline • Algorithms based on the Multi-Server Topology • Simulation • Prototype Implementation

  11. Algorithms • 1. D-Grouping: partition clients into groups • 2. Server Location Search: find the ideal server location (ISL) for each group • 3. Server Search: find a real physical server for each ideal server location

  12. D-Grouping • Initial Grouping 1. Set the pair of clients with the largest RTT as two polars 2. If more than 2 groups are expected, choose the non-polar client that is furthest away from the existing polars as the next polar 3. Repeat step 2 until # polars = # groups 4. Put each client into the group of the polar to which it has the lowest RTT
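The initial-grouping steps above can be sketched from an RTT matrix. The slide does not define "furthest away from the existing polars"; taking it as the maximum of the minimum RTT to any polar (a k-center-style choice) is an assumption, as are the names:

```python
def initial_grouping(rtt, k):
    """Pick k polar clients from a symmetric RTT matrix, then assign by lowest RTT."""
    n = len(rtt)
    # step 1: the pair of clients with the largest RTT become the first two polars
    i, j = max(((a, b) for a in range(n) for b in range(a + 1, n)),
               key=lambda pair: rtt[pair[0]][pair[1]])
    polars = [i, j]
    # steps 2-3: grow by adding the non-polar client furthest from existing polars
    while len(polars) < k:
        rest = [c for c in range(n) if c not in polars]
        polars.append(max(rest, key=lambda c: min(rtt[c][p] for p in polars)))
    # step 4: assign each client to the polar it has the lowest RTT to
    groups = {p: [] for p in polars}
    for c in range(n):
        groups[min(polars, key=lambda p: rtt[c][p])].append(c)
    return groups
```

Each polar lands in its own group automatically, since its RTT to itself is zero.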

  13. D-Grouping • Iterative Grouping
  While the termination condition is false do
    For each client, move it into the group in which it has the minimum average delay to the clients of that group
  End while
  Termination condition: met when a set number of iterations T is reached, or when the grouping result no longer changes.
  In simulation: partitions 12 clients in only 5 iterations for 92% of trials.
  No need for network coordinate embedding such as Vivaldi.
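The iterative refinement might be sketched as follows. Tie-breaking and the handling of groups emptied by moves are not specified on the slide; never moving a client into an empty group is a simplification of mine:

```python
def iterative_grouping(rtt, groups, max_iters=5):
    """Refine a partition: repeatedly move each client to the group whose
    members it has the lowest average RTT to, until stable or max_iters."""
    member = {c: g for g, members in enumerate(groups) for c in members}
    for _ in range(max_iters):
        changed = False
        for c in sorted(member):
            def avg_rtt(g):
                others = [x for x in groups[g] if x != c]
                # simplification: an empty candidate group is never joined
                return (sum(rtt[c][x] for x in others) / len(others)
                        if others else float("inf"))
            best = min(range(len(groups)), key=avg_rtt)
            if best != member[c]:
                groups[member[c]].remove(c)
                groups[best].append(c)
                member[c] = best
                changed = True
        if not changed:  # termination: grouping no longer changes
            break
    return groups
```

Starting from a deliberately bad split of the four-client example, one pass already swaps the mismatched clients and the second pass confirms convergence.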

  15. Server Location Search • Geo-Center: choose the geographic center of each client group as the ideal server location for that group • Local Convex Optimization: in each group i, the ideal server location s_i is chosen to minimize the sum of geographic distances to all the clients in that group, i.e.,
  s_i = argmin_s Σ_{j ∈ G_i} D(s, c_j)
  where s_i is the ideal server location of group i, c_1, …, c_n are the geo-locations of the n clients, G_i is the set of clients that belong to group i, and D(·, ·) is the geographic distance between two locations on the earth.
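The Geo-Center variant can be sketched with standard spherical geometry: great-circle distance for D(·, ·), and the geographic center computed by averaging 3-D unit vectors and projecting back onto the sphere. Function names and the Earth-radius constant are illustrative; the convex-optimization variants would refine this center further:

```python
from math import radians, degrees, sin, cos, asin, atan2, sqrt

EARTH_RADIUS_KM = 6371.0  # mean Earth radius, an illustrative constant

def geo_distance(p, q):
    """Great-circle distance D(p, q) in km between (lat, lon) points in degrees."""
    la1, lo1, la2, lo2 = map(radians, (*p, *q))
    a = sin((la2 - la1) / 2) ** 2 + cos(la1) * cos(la2) * sin((lo2 - lo1) / 2) ** 2
    return 2 * EARTH_RADIUS_KM * asin(sqrt(a))

def geo_center(points):
    """Geographic center of (lat, lon) points: average 3-D unit vectors, project back."""
    x = y = z = 0.0
    for lat, lon in points:
        la, lo = radians(lat), radians(lon)
        x, y, z = x + cos(la) * cos(lo), y + cos(la) * sin(lo), z + sin(la)
    n = len(points)
    x, y, z = x / n, y / n, z / n
    return degrees(atan2(z, sqrt(x * x + y * y))), degrees(atan2(y, x))
```

For two clients on the equator at longitudes 0 and 90, the center lands at (0, 45), a quarter of the way around the globe from neither.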

  16. Server Location Search • Global Convex Optimization: find all m ideal server locations s_1, …, s_m that jointly minimize the total geographic length of all end-to-end paths between clients, i.e.,
  min_{s_1, …, s_m} Σ_i Σ_j Σ_{a ∈ G_i} Σ_{b ∈ G_j, b ≠ a} [ D(c_a, s_i) + D(s_i, s_j) + D(s_j, c_b) ]

  17. Find Real Servers from Ideal Server Locations • Naive Server Search: choose the server geographically closest to the ideal server location s_i as the server of group i • Local Server Search: for each group i, choose the p servers geographically closest to s_i as candidate servers, then choose the candidate that has the smallest sum of RTTs to all the clients within group i as the server for group i
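Local Server Search could be sketched as below. The data-structure names are assumptions, and `geo_dist` is a stand-in for the great-circle distance (Euclidean on coordinates here, only to keep the sketch short):

```python
def geo_dist(p, q):
    # stand-in for great-circle distance; Euclidean on (lat, lon) for brevity
    return ((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2) ** 0.5

def local_server_search(isl, servers, pos, rtt, group, p=3):
    """Pick the p servers nearest the ideal server location (isl), then return
    the candidate with the smallest total RTT to the group's clients.

    pos[s]: (lat, lon) of server s; rtt[s][c]: measured RTT server s -> client c
    """
    candidates = sorted(servers, key=lambda s: geo_dist(pos[s], isl))[:p]
    return min(candidates, key=lambda s: sum(rtt[s][c] for c in group))
```

Note that restricting to geographic candidates matters: a server with excellent RTTs can still be excluded if it sits far from the ideal location.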

  18. Server Search • Global Server Search: for each group i, choose the p servers geographically closest to s_i as candidate servers. Assuming there are m groups, choose, from all combinations of candidate servers, the set of m servers that minimizes the mean end-to-end delay.
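Global Server Search is an exhaustive search over the p^m candidate combinations; a sketch, where the `mean_delay` callback (an assumed interface) scores a full assignment of one server per group:

```python
from itertools import product

def global_server_search(candidates, mean_delay):
    """candidates[i]: the p candidate servers for group i.
    Returns the combination (one server per group) minimizing mean_delay."""
    return min(product(*candidates), key=mean_delay)

# toy usage: two groups, two candidates each, precomputed mean delays
delays = {("a", "c"): 5, ("a", "d"): 3, ("b", "c"): 4, ("b", "d"): 6}
best = global_server_search([["a", "b"], ["c", "d"]], lambda ch: delays[ch])
```

The cost grows as p^m, which is why the simulation keeps p small (3 candidates per group).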

  19. Simulation Geographic Distribution of 518 PlanetLab nodes

  20.–22. Simulation results. # Clients: 12; # Candidate servers per group: 3
  Legend: D-Grp: D-Grouping; Geo Ctr: Geographic Center; LclCvx: Local Convex; GlbCvx: Global Convex; NaiSS: Naive Server Search; LclSS: Local Server Search; GlbSS: Global Server Search
  Performance Ratio: the mean end-to-end delay, normalized by the full-mesh mean delay

  23.–26. Comparisons with other methods (Clients: 12; candidates for real server mapping: 3; k-means (Geo Coord): partitioning based on geo-locations)
  • Always better than one server (if # servers > 2)
  • D-Grp is always better than k-means (Geo Coord)
  • D-Grp + GlbCvx + GlbSS has comparable performance to Vivaldi (which requires network embedding)

  27.–29. Simulation Time Consumption: time consumption breakdown of the different algorithms for 12 clients
  • [Chart annotation: one stage consumes most of the time]
  • Only 3 seconds for 12 clients and 6 servers; large scale for today

  30. Prototype Implementation • Implemented on PlanetLab • Asynchronous, multi-threaded • Uses the Apache Thrift framework + Boost library • ~7k lines of C++ code • Frequency: fixed to 300 packets/second from each client • Source rate is controlled by changing the packet size: Source rate = Packet size × Frequency • Implementation source rate range: 1–500 kbps
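The rate relation above fixes the packet size once a target rate is chosen; a sketch of the arithmetic (the constant and function names are illustrative, and kbps is assumed to mean kilobits per second):

```python
PACKET_FREQUENCY = 300  # packets per second, fixed in the prototype

def packet_size_bytes(source_rate_kbps):
    # Source rate = Packet size * Frequency  =>  Packet size = rate / frequency
    bits_per_packet = source_rate_kbps * 1000 / PACKET_FREQUENCY
    return bits_per_packet / 8
```

At the top of the stated range, 500 kbps, each of the 300 packets per second carries roughly 208 bytes of payload.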

  31. End-to-End Delay Measurement • End-to-End Delay = T_circle − RTT_AB / 2
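One plausible reading of this formula, inferred from the slide rather than stated on it: A's packet travels through the server topology to B, B echoes it directly back to A, and subtracting the direct B-to-A return leg (half the measured A–B RTT) leaves the one-way delay through the servers:

```python
def end_to_end_delay(t_circle, rtt_ab):
    """One-way A -> B delay through the server topology.

    t_circle: measured time for A -> server(s) -> B -> direct echo back to A
    rtt_ab:   measured direct round-trip time between A and B
    """
    return t_circle - rtt_ab / 2
```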

  32.–33. Implementation @ different source rates
  • Delay of the multi-server topology is not affected by the source rate

  34. Conclusion • Multi-server placement and topology control • D-grouping and server placement optimization • Collected ping traces from 518 PlanetLab nodes • Similar performance to full network coordinate embedding in a delay space, yet with lower overhead • Implementation demonstrates support for higher throughput under reasonable end-to-end delay • Lower overhead, as there is no need for network coordinate embedding such as Vivaldi

  35. Thank you • Questions?

  36. Implementation vs. Simulation • Source rate = 1 kbps • # Clients: 6

  37. # Clients in the figure: 4–20 • # Servers used: 1–5 • # Candidates for real server mapping: 3 • Method: D-Grouping + Global Convex Optimization + Global Server Search
