
High Data Rate Transfer for HEP and VLBI



Presentation Transcript


  1. High Data Rate Transfer for HEP and VLBI Ralph Spencer, Richard Hughes-Jones and Simon Casey, The University of Manchester. Networkshop33, March 2005.

  2. HEP: The LHC Detectors CMS, ATLAS, LHCb: ~6-8 PetaBytes / year, ~10^8 events/year.

  3. Radio Astronomy • The study of celestial objects at wavelengths from <1 mm to >1 m. • Sensitivity for continuum sources improves as 1/√(Bt), where B = bandwidth and t = integration time. • High resolution achieved by interferometers. Some radio-emitting X-ray binary stars in our own galaxy: GRS 1915+105 (MERLIN), Cygnus X-1 (VLBA), SS433 (MERLIN and European VLBI).
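The bandwidth/integration-time scaling on this slide is the standard radiometer equation. A minimal sketch, with system-temperature and efficiency constants omitted (so only ratios between calls are meaningful; the function name is ours, not from the slides):

```python
import math

def relative_rms_noise(bandwidth_hz, integration_s):
    """Relative rms noise for a continuum source, from the standard
    radiometer equation: noise ∝ 1 / sqrt(B * t).
    Constants of proportionality are omitted."""
    return 1.0 / math.sqrt(bandwidth_hz * integration_s)

# Doubling the recorded bandwidth reaches the same noise level
# in half the integration time:
a = relative_rms_noise(256e6, 3600)   # 256 MHz for one hour
b = relative_rms_noise(512e6, 1800)   # 512 MHz for half an hour
```

This is why eVLBI pushes for higher data rates: more recorded bandwidth B buys sensitivity directly, or the same sensitivity in less observing time t.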

  4. Earth-Rotation Synthesis • Telescope data are correlated in pairs: N(N-1)/2 baselines. [Figure: MERLIN u-v coverage.] • Need ~12 hours for full synthesis, though not necessarily collecting data for all that time. • NB trade-off between B and t for sensitivity.
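The pair count quoted above can be sketched in one line (the function name is ours, for illustration):

```python
def n_baselines(n_antennas):
    """Number of antenna pairs correlated simultaneously in an
    N-element interferometer: N(N-1)/2."""
    return n_antennas * (n_antennas - 1) // 2

# The correlation load grows quadratically with array size:
pairs_small = n_baselines(3)    # 3 baselines
pairs_large = n_baselines(10)   # 45 baselines
```

This quadratic growth is what makes the correlation step a natural candidate for the distributed GRID computation mentioned at the end of the talk.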

  5. The European VLBI Network (EVN) • Detailed radio imaging uses antenna networks over 100s-1000s of km. • At the faintest levels, the sky teems with galaxies being formed. • Radio penetrates cosmic dust - we see the process clearly. • Telescopes in place … • Disk recording at 512 Mb/s. • Real-time connection allows greater response, reliability and sensitivity.

  6. EVN-NREN [network map]: Onsala, Sweden (Gbit link via Chalmers University of Technology, Gothenburg); Torun, Poland (Gbit link); Jodrell Bank, UK (dedicated Gbit link, MERLIN); Westerbork, Netherlands; Cambridge, UK (DWDM link, MERLIN); Medicina, Italy; correlator at Dwingeloo.

  7. eVLBI Milestones • January 2004: disk-buffered eVLBI session; three telescopes at 128 Mb/s for first eVLBI image; On - Wb fringes at 256 Mb/s. • April 2004: three-telescope, real-time eVLBI session; fringes at 64 Mb/s; first real-time EVN image - 32 Mb/s. • September 2004: four-telescope real-time eVLBI; fringes to Torun and Arecibo; first EVN eVLBI science session. • January 2005: first “dedicated light-path” eVLBI; ??Gbyte of data from the Huygens descent transferred from Australia to JIVE at ~450 Mb/s.

  8. • 20 December 2004: connection of JBO to Manchester by 2 x 1 GE; eVLBI tests between Poland, Sweden, UK and the Netherlands at 256 Mb/s. • February 2005: TCP and UDP memory-to-memory tests at rates up to 450 Mb/s (TCP) and 650 Mb/s (UDP); tests showed inconsistencies between Red Hat kernels - rates of only 128 Mb/s obtained on 10 Feb; Haystack (US) - Onsala (Sweden) runs at 256 Mb/s. • 11 March: science demo; JBO telescope stowed because of high wind, short run on calibrator source done. Test needs to be repeated!

  9. Telescope connections [network map]: Westerbork, Netherlands; Onsala, Sweden; Effelsberg, Germany; Torun, Poland; Jodrell Bank, UK; Cambridge, UK (MERLIN); Medicina, Italy; JIVE. Marked link speeds include 1 Gb/s (several sites), 2.5 Gb/s and 155 Mb/s; others still undetermined (?).

  10. Tests on the Network, Manchester-Dwingeloo: investigation of packet loss • 4th-year MPhys project at The University of Manchester, Oct-Dec 2003, using the campus network and the SuperJANET 4 academic network in the UK. • UDPmon used to test throughput and packet loss. • NB tune up the end machines - see http://grid.ucl.ac.uk/NFNN.html • Effect on the local traffic: [plot]
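UDPmon itself is the tool used here (see the URL above). As a rough, hypothetical illustration of what such a tester does - send sequence-numbered UDP packets at a fixed spacing and infer loss from gaps in the sequence numbers - here is a self-contained Python sketch that runs over loopback; none of these names come from UDPmon:

```python
import socket
import struct
import threading
import time

def receiver(sock, n_expected, results):
    """Count received packets; infer loss from sequence-number gaps.
    (Trailing losses after the last received packet are not counted.)"""
    seen, last_seq, lost = 0, -1, 0
    sock.settimeout(2.0)
    try:
        while seen < n_expected:
            data, _ = sock.recvfrom(2048)
            (seq,) = struct.unpack("!I", data[:4])
            if seq > last_seq + 1:
                lost += seq - last_seq - 1  # gap => packets missing
            last_seq = seq
            seen += 1
    except socket.timeout:
        pass
    results["received"] = seen
    results["lost_in_gaps"] = lost

def run_test(n_packets=1000, payload=1400, spacing_s=0.0001):
    """Send n_packets UDP datagrams over loopback with a fixed
    inter-packet spacing; return (received, lost_in_gaps, user Mbit/s)."""
    rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    rx.bind(("127.0.0.1", 0))
    port = rx.getsockname()[1]
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    results = {}
    t = threading.Thread(target=receiver, args=(rx, n_packets, results))
    t.start()
    body = b"\x00" * (payload - 4)  # 4 bytes used by the sequence number
    start = time.time()
    for seq in range(n_packets):
        tx.sendto(struct.pack("!I", seq) + body, ("127.0.0.1", port))
        time.sleep(spacing_s)  # inter-packet spacing, the key test knob
    t.join()
    elapsed = time.time() - start
    mbps = results["received"] * payload * 8 / elapsed / 1e6
    tx.close()
    rx.close()
    return results["received"], results["lost_in_gaps"], mbps
```

Varying `spacing_s` while watching throughput and loss is exactly the sweep shown on the next slide.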

  11. UDP Throughput Manchester-Dwingeloo (Nov 2003) • Manchester: 2.0 GHz Xeon; Dwingeloo: 1.2 GHz PIII. • Near wire rate, 950 Mbps. • NB the record stands at 6.6 Gbps, SLAC-CERN. [Plots: throughput vs packet spacing; packet loss; CPU kernel load, sender; CPU kernel load, receiver.] • 4th-year project: Adam Mathews, Steve O'Toole.
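The "near wire rate" figure can be checked arithmetically. Assuming standard per-packet Ethernet/IP/UDP overheads (the function names and the 66-byte overhead breakdown are our illustration, not from the slides):

```python
def wire_limited_rate_mbps(payload_bytes, line_rate_gbps=1.0):
    """User-data rate for back-to-back UDP packets on Ethernet.
    Per-packet wire overhead assumed: 8 (UDP) + 20 (IP) + 14 (Ethernet
    header) + 4 (FCS) + 8 (preamble) + 12 (inter-frame gap) = 66 bytes."""
    on_wire = payload_bytes + 66
    return payload_bytes / on_wire * line_rate_gbps * 1000.0

def spacing_limited_rate_mbps(payload_bytes, spacing_us):
    """User-data rate when one packet is sent every spacing_us
    microseconds: bits per microsecond equals Mbit/s."""
    return payload_bytes * 8 / spacing_us

# 1472-byte payloads (the largest fitting a 1500-byte MTU) sent
# back-to-back on gigabit Ethernet carry ~957 Mbit/s of user data,
# consistent with the ~950 Mbps measured.
gige_limit = wire_limited_rate_mbps(1472)
```

Plotting `spacing_limited_rate_mbps` against spacing, capped by `wire_limited_rate_mbps`, reproduces the shape of the throughput-vs-packet-spacing curve from the slide.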

  12. Packet loss distribution [plot: cumulative distribution of packet loss compared with a Poisson model; each bin is 12 msec wide]. Long-range effects in the data?
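The comparison with a Poisson model asks whether losses are independent or bursty. A common test, sketched here under our own naming (not from the slides), is the variance-to-mean ratio of losses per time bin, which is ~1 for a Poisson process and >1 for bursty or long-range-correlated loss:

```python
import math

def poisson_pmf(k, lam):
    """P(K = k) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def index_of_dispersion(losses_per_bin):
    """Variance-to-mean ratio of loss counts per time bin
    (e.g. 12 msec bins). ~1 for independent (Poisson) loss;
    significantly >1 suggests bursty, correlated loss."""
    n = len(losses_per_bin)
    mean = sum(losses_per_bin) / n
    if mean == 0:
        return float("nan")
    var = sum((c - mean) ** 2 for c in losses_per_bin) / n
    return var / mean
```

Applied to the 12 msec binned loss counts, a dispersion index well above 1 would support the "long-range effects" hypothesis raised on the slide.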

  13. 26th January 2005 UDP Tests, Simon Casey (PhD project) • Between JBO and JIVE in Dwingeloo, using the production network. • Period of high packet loss (3%): [plot]

  14. The Future: • eVLBI tests in the EVN at ~1 per month. • Network tests at up to 950 Mb/s - statistics of packet loss. • Investigate alternatives to TCP/UDP - Tsunami, VSI-E etc. • ESLEA tests on MB-NG and UKLight. • EXPReS EU eVLBI proposal, including distributed processing and a JBO-Onsala test link at 4 Gbps, submitted March 19th 2005. • eVLBI will be routine in 2006!!

  15. VLBI Correlation: a GRID computation task [diagram: Controller / Data Concentrator feeding Processing Nodes].

  16. Questions ?
