
“L1/L2 farm: some thought”



Presentation Transcript


  1. “L1/L2 farm: some thought” G. Lamanna, R. Fantechi & D. Di Filippo (CERN), Computing WG – 28.9.2011

  2. The “Old” online farm
  • Two separate farms: an L1 farm and an L2 farm (L1 = M PCs, L2 = N PCs)
  • Blue switch → (M + 40 + 1) mono-directional ports
  • Green switch → (N + M + 40 + 1) bi-directional ports
  • Diagram labels: <50 Gb/s; 40×10 Gb/s (detector links); 40×10 Gb/s; 180 Gb/s; 2.5 Gb/s to CDR; 1×10 Gb/s; CREAM; services (DCS, boot, …)

  3. The “new” online farm
  • A single combined L1/2 farm; TCP or UDP?
  • Blue switch → (N + M + 40 + 40 + 1 + 1 + 1) bi-directional ports
  • Number of ports available for computing = Ntot(128) – 83 = 35
  • Diagram labels: <50 Gb/s; 40×10 Gb/s; 40×10 Gb/s; 180 Gb/s; 2.5 Gb/s; storage farm; CDR; CREAM; 1×10 Gb/s; services (DCS, boot, …)
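The port budget on this slide can be spelled out as a short calculation. This is a sketch with our own constant names (not from the presentation); note that with Ntot = 128 the subtraction 128 − 83 gives 45 rather than the 35 quoted, so one of the slide's figures presumably reflects a different Ntot or an extra reservation not listed.

```python
# Hypothetical port bookkeeping for the "new" farm's blue switch (slide 3).
# Constant names are ours, not from the presentation.
DETECTOR_LINKS = 40   # 40 x 10 Gb/s links from the TEL62 readout
LKR_LINKS = 40        # 40 x 10 Gb/s links from the LKr (CREAM) readout
CDR_LINK = 1          # 1 x 10 Gb/s link towards CDR
STORAGE_LINK = 1      # storage-farm uplink
SERVICES_LINK = 1     # services (DCS, boot, ...)

reserved = DETECTOR_LINKS + LKR_LINKS + CDR_LINK + STORAGE_LINK + SERVICES_LINK
N_TOT = 128           # total switch ports quoted on the slide

print(reserved)          # 83, matching the figure subtracted on the slide
print(N_TOT - reserved)  # ports left over for the N + M farm PCs
```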

  4. Advantages: more efficient reuse of PCs, smaller number of PCs
  • Disadvantages: bottleneck, transmission protocol, switch cost and support, flexibility and limited upgradability (up to the switch backplane capability)

  5. • Blue switch as before
  • Each PC processes one whole event (both L1 and L2) instead of a fraction of an event
  • No need for routing among the PCs (no protocol problems, no network-induced latency, …)
  • Only the final results leave the L1/2 farm
  • Diagram labels: <50 Gb/s; 40×10 Gb/s; 40×10 Gb/s; 180 Gb/s; 2.5 Gb/s; CREAM; CDR; 1×10 Gb/s; 2.5 Gb/s to the storage farm; services (DCS, boot, …)
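The "max rate per PC" formula on the next slide (180 Gb/s divided by the number of PCs) can be sketched as a small helper. This is our own illustration, not code from the presentation; it assumes the 180 Gb/s aggregate is shared evenly across the farm.

```python
# Sketch: average input bandwidth each farm PC must sustain if the
# 180 Gb/s aggregate from the slides is spread evenly over the farm.
AGGREGATE_GBPS = 180.0

def rate_per_pc(n_pcs: int) -> float:
    """Average input rate per PC in Gb/s for a farm of n_pcs machines."""
    return AGGREGATE_GBPS / n_pcs

# Example farm sizes (illustrative only):
for n in (30, 45, 90):
    print(f"{n} PCs -> {rate_per_pc(n):.1f} Gb/s per PC")
```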

  6. [Diagram: events #123–#126 arrive through the switches; each PC runs L1 – CHOD, L1 – RICH, L1 – MUV, …, merges the LKr data and runs L2; the CHOD, RICH and MUV fragments of event #123 are merged into one full event at L2]
  • From the PCs to the switch = L1 trigger requests for the LKr (peanuts) and final events for CDR (2.5 Gb/s)
  • Max rate per PC: 180 Gb/s / (number of PCs)
  • Multiprocessor PCs: the parallelism is exploited at the PC level instead of at the network level
  • Routing strategies at the TEL62 level similar to the Jonas farm (round robin with a “per burst routing table update” is a little safer)
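The round-robin routing with a per-burst routing table can be sketched as follows. This is a hypothetical illustration (not NA62 or TEL62 code): every data source applies the same frozen event-number → PC map, so all fragments of a given event reach the same PC, and the table is only refreshed between bursts.

```python
# Hypothetical sketch of round-robin event routing with a per-burst
# routing table. Each source (e.g. a TEL62 board) uses the identical
# table for a whole burst, so fragments of one event converge on one PC.
def build_routing_table(pcs, n_events):
    """Freeze a round-robin event -> PC assignment for one burst."""
    return {ev: pcs[ev % len(pcs)] for ev in range(n_events)}

pcs = ["pc%02d" % i for i in range(4)]
table = build_routing_table(pcs, 8)

# Two independent sources looking up event #5 route to the same PC:
print(table[5])  # pc01
```

Freezing the table per burst (rather than updating it mid-burst) avoids the race where two sources briefly disagree on the event → PC mapping and split an event across machines.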

  7. Switch
  • Since the routing is univocal/injective (the PCs don’t talk to each other), a tree structure can be foreseen: upgradability and smaller (cheaper) switches
  • The possibility of using 1 Gb instead of 10 Gb links is still open
