Extending Globus to support Multicast Transmission

Karl Jeacle
[email protected]
Rationale

  • Would like to achieve high-speed bulk data delivery to multiple sites

  • Multicasting would make sense

  • Existing multicast research has focused on sending to a large number of receivers

  • But the Grid is an applied setting where sending to a moderate number of receivers would be extremely beneficial


Multicast availability

  • Deployment is a problem!

    • Protocols have been defined and implemented

    • Valid concerns about scalability; much FUD

    • “chicken & egg” means limited coverage

  • Clouds of native multicast

    • But can’t reach all destinations via multicast

    • So applications abandon multicast in favour of unicast

    • What if we could multicast when possible…

    • …but fall back to unicast when necessary?


Multicast TCP?

    • TCP

      • “single reliable stream between two hosts”

    • Multicast TCP

      • “multiple reliable streams from one to n hosts”

      • May seem a little odd, but there is precedent…

        • TCP-XMO – Liang & Cheriton

        • M-TCP – Mysore & Varghese

        • M/TCP – Visoottiviseth et al

        • PRMP – Barcellos et al

        • SCE – Talpade & Ammar



Building Multicast TCP

    • Want to test multicast/unicast TCP approach

      • But new protocol == kernel change

      • Widespread test deployment difficult

    • Build new TCP-like engine

      • Encapsulate packets in UDP

      • Run in userspace

      • Performance is sacrificed…

      • …but widespread testing now possible


TCP/IP/UDP/IP

[Diagram: if natively implemented, the sending and receiving applications talk TCP directly over IP; for the test deployment, each side's TCP engine runs in userspace and its segments are carried as TCP over UDP over IP.]
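
The encapsulation shown above needs nothing from the kernel beyond an ordinary UDP socket: the userspace engine builds its TCP-like segment and hands it over as a plain UDP payload, and the receiving engine does the reverse. A minimal sketch of the sending side using standard sockets (the function and segment layout are illustrative, not part of TCP-XM itself):

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <sys/types.h>

    /* Hand one userspace-built "TCP" segment to the kernel as a plain
       UDP payload; no kernel protocol changes are required. */
    static ssize_t send_encapsulated(int udp_sock, const struct sockaddr_in *dst,
                                     const void *segment, size_t seg_len)
    {
        return sendto(udp_sock, segment, seg_len, 0,
                      (const struct sockaddr *)dst, sizeof *dst);
    }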


TCP engine

    • Where does initial TCP come from?

    • Could use BSD or Linux

      • Extracting from kernel could be problematic

    • More compact alternative

      • lwIP = Lightweight IP

      • Small but fully RFC-compliant TCP/IP stack

    • lwIP + multicast extensions = “TCP-XM”
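
For reference, the unmodified lwIP sequential ("netconn") API that TCP-XM starts from looks roughly like this; a minimal sketch, with argument types abbreviated since they vary between lwIP releases:

    #include "lwip/api.h"

    /* plain lwIP: one reliable stream to one host */
    void tcp_send_one(struct ip_addr *dst, u16_t port, void *data, u16_t len)
    {
        struct netconn *conn = netconn_new(NETCONN_TCP);  /* allocate a TCP connection */
        netconn_connect(conn, dst, port);                 /* normal three-way handshake */
        netconn_write(conn, data, len, NETCONN_COPY);     /* reliable, in-order delivery */
        netconn_close(conn);
        netconn_delete(conn);
    }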


TCP-XM overview

    • Primarily aimed at push applications

    • Sender initiated – advance knowledge of receivers

    • Opens sessions to n destination hosts simultaneously

    • Unicast is used when multicast not available

    • Options headers used to exchange multicast info

    • API changes

      • Sender incorporates multiple destination and group addresses

      • Receiver requires no changes

    • TCP friendly, by definition
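
A sketch of what the sender-side API change amounts to, in the style of the lwIP netconn calls above. The connection type NETCONN_TCPXM and the multi-destination connect call netconn_connectxm() are assumed names used for illustration; the slides only state that the sender supplies multiple destination and group addresses while the receiver side is unchanged:

    #include "lwip/api.h"

    /* TCP-XM sender: one write, n reliable deliveries */
    void tcpxm_send(struct ip_addr *dests, int ndests, struct ip_addr *group,
                    u16_t port, void *data, u16_t len)
    {
        struct netconn *conn = netconn_new(NETCONN_TCPXM);     /* assumed TCP-XM type */
        netconn_connectxm(conn, dests, ndests, group, port);   /* assumed n-way connect */
        netconn_write(conn, data, len, NETCONN_COPY);          /* unchanged write call */
        netconn_close(conn);                                   /* closes as per unicast TCP */
        netconn_delete(conn);
    }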


TCP

[Diagram: the standard TCP exchange between sender and receiver: SYN, SYN/ACK, ACK to establish the connection; DATA acknowledged with ACKs; FIN and ACK from each side to close.]


TCP-XM

[Diagram: one sender transmitting to Receiver 1, Receiver 2 and Receiver 3.]


TCP-XM connection

    • Connection

      • User connects to multiple unicast destinations

      • Multiple TCP PCBs created

      • Independent 3-way handshakes take place

      • SSM or random ASM group address allocated

        • (if not specified in advance by user/application)

      • Group address sent as TCP option

      • Ability to multicast depends on TCP option
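
The group address exchanged during connection setup could be carried in a TCP option shaped roughly like the struct below. The option kind and field layout are assumptions for illustration; the slides only state that the group address travels as a TCP option and that the ability to multicast is signalled the same way:

    #include <stdint.h>

    /* hypothetical layout of the TCP-XM group-address option */
    struct tcpxm_group_option {
        uint8_t  kind;       /* option kind (an experimental value is assumed) */
        uint8_t  length;     /* total option length in bytes: 8 */
        uint16_t reserved;   /* padding to a 32-bit boundary */
        uint32_t group;      /* IPv4 multicast group address, network byte order */
    };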


TCP-XM transmission

    • Data transfer

      • Data replicated/enqueued on all send queues

      • PCB variables dictate transmission mode

      • Data packets are multicast (if possible)

      • Retransmissions are unicast

      • Auto fall back/forward to unicast/multicast

    • Close

      • Connections closed as per unicast TCP
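
The transmit rule above can be sketched as follows: data is queued for every destination, members whose state allows multicast are covered by a single packet to the group, and the rest receive individual unicast copies. A compilable illustration only; the real engine operates on lwIP tcp_pcb structures and the names here are invented:

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdio.h>

    enum tx_mode { MODE_UNICAST, MODE_MULTICAST };

    struct dest {
        const char  *host;
        enum tx_mode mode;                 /* per-destination state from the option exchange */
    };

    static void transmit(struct dest *d, size_t n, const char *group, const char *seg)
    {
        bool group_sent = false;
        for (size_t i = 0; i < n; i++) {
            if (d[i].mode == MODE_MULTICAST) {
                if (!group_sent) {                            /* one packet covers all mcast members */
                    printf("multicast %s -> %s\n", seg, group);
                    group_sent = true;
                }
            } else {
                printf("unicast   %s -> %s\n", seg, d[i].host);
            }
        }
        /* retransmissions requested by an individual receiver always go unicast */
    }

    int main(void)
    {
        struct dest d[] = { { "host1", MODE_MULTICAST },
                            { "host2", MODE_MULTICAST },
                            { "host3", MODE_UNICAST } };      /* e.g. no multicast path */
        transmit(d, 3, "232.1.1.1", "segment#1");
        return 0;
    }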


Fall back / fall forward

    • TCP-XM principle

      • “Multicast if possible, unicast when necessary”

    • Initial transmission mode is group unicast

      • Ensures successful initial data transfer

    • Fall forward to multicast on positive feedback

      • Typically after ~75K unicast data

    • Fall back to unicast on repeated mcast failure
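
A sketch of the mode switch described above. The ~75K threshold is taken from the slide; the failure limit and field names are assumptions used only for illustration:

    #include <stddef.h>

    enum tx_mode { MODE_UNICAST, MODE_MULTICAST };

    struct dest_state {
        enum tx_mode mode;              /* starts as MODE_UNICAST (group unicast) */
        size_t       unicast_acked;     /* bytes successfully delivered via unicast */
        unsigned     mcast_failures;    /* consecutive multicast losses observed */
    };

    #define FALL_FORWARD_BYTES (75 * 1024)   /* ~75K of unicast data, per the slide */
    #define FALL_BACK_FAILURES 3             /* assumed limit */

    static void update_mode(struct dest_state *d)
    {
        if (d->mode == MODE_UNICAST && d->unicast_acked >= FALL_FORWARD_BYTES)
            d->mode = MODE_MULTICAST;        /* positive feedback: fall forward */
        else if (d->mode == MODE_MULTICAST && d->mcast_failures >= FALL_BACK_FAILURES)
            d->mode = MODE_UNICAST;          /* repeated multicast failure: fall back */
    }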


TCP-XM reception

    • Receiver

      • No API-level changes

      • Normal TCP listen

      • Auto-IGMP join on TCP-XM connect

      • Accepts data on both unicast/multicast ports

      • tcp_input() accepts:

        • packets addressed to existing unicast destination…

        • …but now also those addressed to multicast group

      • Tracks how the last n segments were received (unicast/multicast); see the sketch below
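
The widened acceptance test in tcp_input() amounts to the check below: a segment matches a connection if it arrives on the expected port and is addressed either to the bound unicast address or to the multicast group joined at connect time. Field names are illustrative, not lwIP's actual PCB fields:

    #include <stdbool.h>
    #include <stdint.h>

    struct rx_pcb {
        uint32_t local_ip;     /* unicast address the listener is bound to */
        uint32_t group_ip;     /* multicast group joined on connect (0 if none) */
        uint16_t local_port;
    };

    static bool segment_matches(const struct rx_pcb *pcb,
                                uint32_t dst_ip, uint16_t dst_port)
    {
        if (dst_port != pcb->local_port)
            return false;
        return dst_ip == pcb->local_ip ||
               (pcb->group_ip != 0 && dst_ip == pcb->group_ip);
    }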


Grid multicast?

    • How can multicast be used in Grid environment?

    • TCP-XM is new multicast-capable protocol

    • Globus is the de facto Grid middleware

    • Would like TCP-XM support in Globus…


Globus XIO

    • eXtensible Input Output library

      • Allows “i/o plugins” to Globus

    • API

      • Single POSIX-like API / set of semantics

      • Simple open/close/read/write API

    • Driver abstraction

      • Hides protocol details / Allows for extensibility

      • Stack of 1 transport & n transform drivers

      • Drivers can be selected at runtime




XIO/XM driver specifics

    • Two important XIO data structures

      • Handle

        • Returned to user when XIO framework ready

        • Used for all open/close/read/write calls

        • lwIP netconn connection structure used

      • Attribute

        • Used to set XIO driver-specific parameters…

        • … and TCP-XM protocol-specific options

        • List of destination addresses
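
As an illustration of that attribute, the driver-specific state could hold little more than the destination list installed by the globus_xio_attr_cntl() call in the code example below; this struct is an assumed shape, not the driver's actual definition:

    /* assumed shape of the TCP-XM driver attribute */
    typedef struct
    {
        char **remote_hosts;   /* unicast destinations to connect to */
        int    num_hosts;
        char  *group_addr;     /* optional pre-assigned multicast group (may be NULL) */
    } xio_tcpxm_attr_t;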


XIO code example

    // init stack
    globus_xio_stack_init(&stack, NULL);

    // load drivers onto stack
    globus_xio_driver_load("tcpxm", &txdriver);
    globus_xio_stack_push_driver(stack, txdriver);

    // init attributes
    globus_xio_attr_init(&attr);
    globus_xio_attr_cntl(attr, txdriver,
                         GLOBUS_XIO_TCPXM_SET_REMOTE_HOSTS,
                         hosts, numhosts);

    // create handle
    globus_xio_handle_create(&handle, stack);

    // send data
    globus_xio_open(&handle, NULL, target);
    globus_xio_write(handle, "hello\n", 6, 1, &nbytes, NULL);
    globus_xio_close(handle, NULL);


One-to-many issues

    • Stack assumes one-to-one connections

      • XIO user interface requires modification

      • Needs support for one-to-many protocols

      • Minimal user API changes

      • Framework changes more significant

    • GSI is one-to-one

      • Authentication with peer on connection setup

      • But cannot authenticate with n peers

      • Need some form of “GSI-M”


LAN/WAN tests

  • LAN: Computer Laboratory
    • Mix of CPU speeds
    • Linux
    • All multicast
    • Throughput to each host is typically at least 10 Mb/s, depending on CPU and network load of both sender and receiver

  • WAN: UK eScience Network
    • Mix of CPU speeds
    • FreeBSD/Linux/Solaris
    • Some multicast
    • Throughput varies (Mb/s): Imperial 25, Cardiff 13, Manchester 11, Southampton 10, Belfast 5





Driver availability

    • Multicast transport driver for Globus XIO

      • Requires Globus 3.2 or later

    • Source code online

      • Sample client

      • Sample server

      • Driver installation instructions

    • http://www.cl.cam.ac.uk/~kj234/xio/


mcp & mcpd

    • Multicast file transfer application using TCP-XM

    • ‘mcpd &’ on servers

    • ‘mcp file host1 host2… hostN’ on client

    • http://www.cl.cam.ac.uk/~kj234/mcp/

      • Full source code online

      • FreeBSD, Linux, Solaris


Future

    • Protocol work

      • Parallel unicast / multicast transmission

      • Multicast look ahead & multiple groups

    • Deliverables

      • Updates to mcp/mcpd

    • Experimentation

      • More detailed testing required

      • Currently limited to LAN and UK eScience

      • Will extend to global Intel PlanetLab


eScience volunteers…

  1. In place: Cambridge, Cardiff, Imperial, Manchester, Newcastle, Oxford, Southampton, UCL

  2. Firewall issues: Belfast, Daresbury, Glasgow, Rutherford

  3. Not possible: Edinburgh


All done!

    • Thanks for listening!

    • Questions?

