
TCP-XM “Multicast Transport for Grid Computing”




Presentation Transcript


  1. TCP-XM “Multicast Transport for Grid Computing” Karl Jeacle karl.jeacle@cl.cam.ac.uk

  2. Rationale
  • Would like to achieve high-speed bulk data delivery to multiple sites
  • Multicasting would make sense
  • Existing multicast research has focused on sending to a large number of receivers
  • But Grid is an applied example where sending to a moderate number of receivers would be extremely beneficial

  3. Reliability
  • Reliability of data transfer is essential
  • But multicast packets are typically UDP
  • Could adopt or invent a reliable multicast protocol (e.g. PGM)
  • But what about taking an existing reliable unicast protocol (e.g. TCP) and turning it into a reliable multicast protocol?

  4. Multicast TCP?
  • TCP provides a single reliable byte stream between two end-points
  • Modifying this to support multiple end-points may seem a little unnatural
  • But there is some precedent…
  • SCE – Talpade & Ammar
  • TCP-SMO – Liang & Cheriton
  • M/TCP – Visoottiviseth et al.

  5. Obvious problems
  • ACK implosion
  • Sender swamped by ACKs from receivers!
  • Yes, but 40-byte ACKs on 1500-byte data packets mean ACK traffic only becomes symmetric with the data traffic at ~40 receivers. And this still beats 40x unicast!
  • End-point asymmetry
  • Unicast TCP: pipe has two end-points
  • Multicast TCP: pipe has n end-points
  • Sender needs application framing to demultiplex received data

  6. Implementation issues
  • Protocol change == kernel change
  • Widespread test deployment difficult
  • Two-prong approach:
  • Modify the TCP protocol…
  • But incorporate it into a userspace library
  • Library will implement modified TCP over UDP
  • Performance is sacrificed
  • But deployment is no longer an issue
  • Library can be used within Globus

  7. [Figure: protocol stacks] Native implementation: the sending and receiving applications run directly over TCP/IP. Prototype deployment: the modified TCP is carried inside UDP datagrams at both ends (TCP/IP/UDP/IP).

  8. TCP engine
  • Where does the initial TCP come from?
  • Could use BSD or Linux
  • Extracting from kernel could be problematic
  • More compact alternative:
  • lwIP = Lightweight IP
  • Small but fully RFC-compliant TCP/IP stack
  • lwIP + multicast extensions = TCP-XM

  9. [Figure: TCP exchange between Sender and Receiver] SYN → SYNACK → ACK (handshake), DATA → ACK (transfer), then FIN → ACK in each direction (close).

  10. [Figure: TCP-XM topology] A single Sender multicasting to Receiver 1, Receiver 2 and Receiver 3.

  11. API
  • Sender API changes:
  • New connection type
  • Connect to port on array of destinations
  • Single write sends data to all hosts
  • TCP-XM in use:
    conn = netconn_new(NETCONN_TCPXM);
    netconn_connectxm(conn, remotedest, numdests, group, port);
    netconn_write(conn, data, len, …);
  • lwIP uses the “netconn” API
  • But a socket API is also available

  12. PCB changes
  • Every TCP connection has an associated Protocol Control Block (PCB)
  • TCP-XM adds:
    struct tcp_pcb {
      …
      struct ip_addr group_ip;  // group addr
      enum tx_mode txmode;      // uni/multicast
      u8_t nrtxm;               // # retrans
      struct tcp_pcb *nextm;    // next “m” pcb
    };

  13. Linking PCBs
  [Figure: PCB lists] Unicast PCBs U1, U2 and TCP-XM PCBs M1, M2, M3 chained by *next; the M PCBs additionally chained by *nextm.
  • *next points to the next TCP session
  • *nextm points to the next TCP session that’s part of a particular TCP-XM connection
  • Minimal timer and state machine changes

  14. TCP Group Option
  kind=50 (1 byte) | len=6 (1 byte) | Multicast Group Address (4 bytes)
  • New group option sent in all TCP-XM SYN packets
  • Presence implies multicast capability
  • Non-TCP-XM hosts will ignore it (no option in SYNACK)
  • Sender will automatically revert to unicast

  15. TCP-XM connection
  • Connection:
  • User connects to multiple unicast destinations
  • Multiple TCP PCBs created
  • Independent 3-way handshakes take place
  • SSM or random ASM group address allocated (if not specified in advance by user/application)
  • Group address sent as a TCP option
  • Ability to multicast depends on the TCP option

  16. TCP-XM transmission
  • Data transfer:
  • Data replicated/enqueued on all send queues
  • PCB variables dictate transmission mode
  • Data packets are multicast
  • Retransmissions are unicast
  • Auto switch to unicast after failed multicast
  • Close:
  • Connections closed as per unicast TCP

  17. TCP-XM reception
  • Receiver:
  • No API change
  • Normal TCP listen
  • Auto IGMP join on TCP-XM connect
  • Accepts data on both unicast/multicast ports
  • tcp_input() modified to check for valid data in packets addressed to multicast groups

  18. Future
  • To do:
  • Protocol work:
  • Parallel unicast / multicast transmission
  • Fallback / fall forward
  • Multicast look ahead
  • Multiple groups
  • Globus integration
  • Experiments:
  • Local network
  • UK eScience centres
  • Intel PlanetLab

  19. All done! • Thanks for listening! • Questions?
