EXPReS FABRIC WP 2.2 Correlator Engine


Presentation Transcript


  1. EXPReS FABRIC WP 2.2 Correlator Engine
Meeting 25-09-2006, Poznan, Poland
JIVE, Ruud Oerlemans

  2. WP2.2 Correlator Engine
Develop a Correlator Engine that can run on standard workstations, deployable on clusters and grid nodes
• Correlator algorithm design (m5)
• Correlator computational core, single node (m14)
• Scaled-up version for clusters (m23)
• Distributed version, middleware (m33)
• Interactive visualization (m4)
• Output definition (m15)
• Output merge (m24)

  3. Current broadband Software Correlator (SFXC)
[Diagram: the data flow, with the software equivalents of the Mk4 hardware units in parentheses]
• Station 1 .. Station N (EVN): raw data, BW = 16 MHz, Mk4 format on Mk5 disk
• From Mk5 to Linux disk: raw data, 16 MHz, Mk4 format on Linux disk
• Channel extraction (DIM, TRM, CRM): extracted data
• Delay corrections (DCM, DMM, FR), driven by pre-calculated delay tables (SU): delay-corrected data
• Correlation (Correlator Chip): SFXC data product
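The correlation step at the end of this chain is, in a software correlator such as SFXC, an FX-style operation: Fourier-transform short segments of the delay-corrected station streams, then cross-multiply and integrate. The following is a minimal sketch of that idea only, not the actual SFXC code; the function and variable names are invented, FFTW3 is assumed as the FFT library, and a real implementation would create its FFT plans once rather than per segment.

    // Hypothetical sketch of one FX correlation step for a single baseline.
    #include <complex>
    #include <vector>
    #include <fftw3.h>

    // Cross-correlate one segment of delay-corrected samples from two
    // stations: FFT each stream (the "F"), then multiply one spectrum by
    // the conjugate of the other and accumulate (the "X").
    // 'accum' must hold n/2 + 1 bins, where n is the segment length.
    void correlate_segment(const std::vector<double>& segment_a,
                           const std::vector<double>& segment_b,
                           std::vector<std::complex<double>>& accum)
    {
        const int n = static_cast<int>(segment_a.size());
        const int n_spec = n / 2 + 1;   // real-to-complex spectrum size

        std::vector<std::complex<double>> spec_a(n_spec), spec_b(n_spec);

        fftw_plan plan_a = fftw_plan_dft_r2c_1d(
            n, const_cast<double*>(segment_a.data()),
            reinterpret_cast<fftw_complex*>(spec_a.data()), FFTW_ESTIMATE);
        fftw_plan plan_b = fftw_plan_dft_r2c_1d(
            n, const_cast<double*>(segment_b.data()),
            reinterpret_cast<fftw_complex*>(spec_b.data()), FFTW_ESTIMATE);

        fftw_execute(plan_a);
        fftw_execute(plan_b);

        // Integrate the cross-power spectrum over many segments; the
        // accumulated result is the visibility spectrum for this baseline.
        for (int i = 0; i < n_spec; ++i)
            accum[i] += spec_a[i] * std::conj(spec_b[i]);

        fftw_destroy_plan(plan_a);
        fftw_destroy_plan(plan_b);
    }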

  4. High level design: distributed correlation
[Diagram: control and data flow across the system]
• Principal Investigator: SCHED produces the observing schedule as a VEX file
• Telescope operator: Field System and Mark5 System at each station, driven by the VEX file
• Central operator: Process VEX together with the EOP; CALC computes the delay tables; VEX and CCF steer the correlation
• Grid Nodes carry out the correlation; results go to the JIVE archive
• (Also in the diagram: WFM)

  5. Grid considerations/aspects
• Why use grid processing power?
  • It is available, no hardware investment required
  • It will be upgraded regularly
• Degree of distribution is a trade-off between
  • Processing power at the grid nodes
  • Data transport capacity to the grid nodes
• Data logistics and coordination
  • More complicated when more distributed
• Processing at telescope and grid nodes
  • Station-related processing at the telescope site, correlation elsewhere
  • All processing at grid nodes

  6. Data distribution over grid sites (1): Baseline slicing
• Pros
  • Small nodes
  • Simple implementation at the node
• Cons
  • Multiplication of large data rates, especially when the number of baselines is large
  • Complex data logistics
  • Complex scalability

  7. Data distribution over grid sites (2)
1. All data to one site
• Pros
  • Simple data logistics
  • Central processing
  • Live processing easy
  • Slicing at the grid site
  • Dealing with only one site
• Cons
  • Powerful central processing site required
2. All data to different sites
• Pros
  • Smaller nodes
  • Live processing possible
  • Data slicing at nodes
• Cons
  • Multiplication of large data rates
  • Simultaneous availability of sites when processing live
3. Time slicing (see the sketch after this slide)
• Pros
  • Smaller nodes
  • Smaller data rates
  • Simple implementation
  • Easily scalable
  • No data multiplication
• Cons
  • Complex data logistics after correlation
  • Live correlation complex
4. Channel slicing
• Pros
  • Smaller nodes
  • Live processing per channel
  • Simple implementation
  • Easily scalable
• Cons
  • Channel extraction at the telescope increases the data rate
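To make the time-slicing option concrete, here is a minimal, hypothetical sketch (all names invented; C++ per slide 8) of cutting an observation into contiguous time slices, one per grid node:

    // Hypothetical sketch of time slicing (option 3): divide an
    // observation of total_samples into contiguous slices, one per node.
    #include <cstdint>
    #include <vector>

    struct TimeSlice {
        std::int64_t first_sample;  // inclusive
        std::int64_t num_samples;
    };

    std::vector<TimeSlice> make_time_slices(std::int64_t total_samples,
                                            int num_nodes)
    {
        std::vector<TimeSlice> slices;
        const std::int64_t base = total_samples / num_nodes;
        const std::int64_t rem  = total_samples % num_nodes;
        std::int64_t start = 0;
        for (int node = 0; node < num_nodes; ++node) {
            // Spread the remainder over the first 'rem' nodes.
            const std::int64_t len = base + (node < rem ? 1 : 0);
            slices.push_back({start, len});
            start += len;
        }
        return slices;
    }

Each node then correlates all baselines for its own slice; afterwards the per-slice outputs have to be merged in time order, which is exactly the "complex data logistics after correlation" listed above.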

  8. Correlator architecture for file input (offline processing)
• Processes data from one channel
• Easily scalable, because one application has all the functionality
• Can exploit multiple processors using MPI (see the skeleton below)
• Code reuse through OO and C++
[Diagram: for each time slice 1, 2, 3, station inputs SA..SD feed a control process CP1..CP3 and a correlator core Core1..Core3]
This software architecture can work for data distributions 1, 2 and 3.
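The MPI point can be illustrated with a bare skeleton (not the project's code; the slice assignment is invented): rank 0 plays the role of the control process (CP) and hands one time slice to each core rank.

    // Illustrative MPI skeleton: rank 0 is the control process (CP),
    // the other ranks are correlator cores, one time slice each.
    #include <mpi.h>
    #include <cstdio>

    int main(int argc, char** argv)
    {
        MPI_Init(&argc, &argv);

        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);

        const int num_slices = size - 1;   // one slice per core rank

        if (rank == 0) {
            // Control process: hand each core the index of its time slice.
            for (int core = 1; core < size; ++core) {
                int slice = core - 1;
                MPI_Send(&slice, 1, MPI_INT, core, 0, MPI_COMM_WORLD);
            }
        } else {
            int slice;
            MPI_Recv(&slice, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            // Here a core would read stations SA..SD for its slice,
            // apply the delay corrections and correlate (see slide 3).
            std::printf("core %d correlating time slice %d of %d\n",
                        rank, slice, num_slices);
        }

        MPI_Finalize();
        return 0;
    }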

  9. Correlator architecture for data streams (real-time processing)
[Diagram: station streams SA..SD, each a file on disk, are read into memory buffers holding short time slices (1.1, 1.2, 1.3, 2.1, ...); correlator cores Core1..Core4 consume the buffered slices under control of the CP]
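A rough sketch of the buffering idea in this diagram, assuming nothing beyond standard C++ threading primitives (the class and its interface are invented for illustration): a bounded buffer of short time slices lets a reader thread stay ahead of the correlator cores without unbounded memory growth.

    // Hypothetical bounded buffer of short time slices: a reader thread
    // fills it while core threads drain it, blocking on full/empty.
    #include <condition_variable>
    #include <cstddef>
    #include <deque>
    #include <mutex>
    #include <utility>
    #include <vector>

    struct Slice { std::vector<char> samples; };

    class SliceBuffer {
    public:
        explicit SliceBuffer(std::size_t capacity) : capacity_(capacity) {}

        void push(Slice s) {                       // called by the reader
            std::unique_lock<std::mutex> lock(m_);
            not_full_.wait(lock, [&] { return q_.size() < capacity_; });
            q_.push_back(std::move(s));
            not_empty_.notify_one();
        }

        Slice pop() {                              // called by a core
            std::unique_lock<std::mutex> lock(m_);
            not_empty_.wait(lock, [&] { return !q_.empty(); });
            Slice s = std::move(q_.front());
            q_.pop_front();
            not_full_.notify_one();
            return s;
        }

    private:
        std::size_t capacity_;
        std::deque<Slice> q_;
        std::mutex m_;
        std::condition_variable not_empty_, not_full_;
    };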

  10. Other issues
• Swinburne University, Adam Deller
  • Last summer: exchange of expertise on their software correlator
• New EXPReS employee: Yurii Pidopryhora
  • Astronomy background
  • Data analysis and testing
• New SCARIe employee: Nico Kruithof
  • Computer science background
  • SCARIe: NWO-funded project aimed at a software correlator on the Dutch Grid

  11. WP 2.2 Status

Work Package                     M   Status
Correlator algorithm design      5   almost finished
Correlator computational core    14  active
Scaled-up version for clusters   23  active
Distributed version              33  pending
Interactive visualization        4   pending
Output definition                15  designing
Output merge                     24  designing
