
First group planning meeting: Introduction


Presentation Transcript


  1. First group planning meeting: Introduction - Ted Liu, Jan. 30th, 2003 • Introduce new people • A brief history of time • Goals of this meeting • Project overview • Roadmap • Guidelines for the project planning • …real talks by others… • Some starting points for the discussion session (see web) • Initial thoughts (or targets for you to shoot at): near-term hardware needs/production plan; manpower/organization/near-term goals …; future meetings & L2 upgrade e-log

  2. This project is attracting the best people/groups @ CDF … New people who joined the project recently: • Burkard Reisert (FNAL, new postdoc) • Cheng-Ju Lin (FNAL postdoc) • Paul Keener (Upenn, DAQ expert) • Joe Kroll & his $$ (Upenn) • Kristian Hahn (Upenn, Nigel’s student) • Shawn Kwang (UC, Mel’s student) • Bob Blair (ANL) • John Dawson (ANL) • Jimmy Proudfoot (ANL) • Franco Spinella (INFN) • “Old” people who are eager to come back: • Sakari Pitkanen (still waiting for a visa in Finland) => youngest collaborator

  3. A brief history of time … • Oct. 2001: FNAL decided to build the Pulsar board; UofC decided to provide engineering support; soon after, Upenn purchased obsolete interface components • Mar. 2002: INFN built the SVT-TSI-PCI daughter card • Aug. 2002: ANL → possibility with the ATLAS RoIB • Oct. 2002: Pulsar works with the ANL Gigabit Ethernet mezzanine card (essentially plug & play) • ANL/FNAL/INFN/UofC/Upenn: we have a strong group, now we need a good plan

  4. Goals of this meeting • Focus on planning, not technical details • Roadmap • Near-term goals and long-term plan • Manpower availability (current/future) • Who is interested in working on what & when • Get a clear picture of the issues/tasks involved • → will have a few technical talks

  5. Project Overview • May be roughly divided into four “parallel” efforts: • E1: Pulsar hardware/firmware/VME software • E2: Design spec for data format/algorithm/system interface • E3: RoIB option effort (see Bob’s talk) • E4: PCI/CPU architecture/OS/infrastructure software/L2D software • (diagram: efforts E1-E4 spanning Phases A/B/C, feeding both CDF Run IIb L2 and ATLAS L2)

  6. Roadmap • May be divided into three phases/stages: • Phase A: Teststand, core firmware, HW production/testing; algorithm firmware specification; PCI/CPU/OS/infrastructure software → will show some details • Phase B: Rx algorithm firmware implementation/testing, system integration in standalone mode • Phase C: System integration in parasitic mode, test runs with cosmic rays and beam

  7. Pulsar design • 3 Altera APEX 20K400_652 (BGA) FPGAs, 502 user I/O pins each • 9U VME (VME and CDF control signals are visible to all three FPGAs) • 3 APEX 20K400 FPGAs on board = 3 million system gates / 80 KB RAM per board • 2 128K x 36 pipelined SRAMs with no bus latency: 1 MB SRAM (~5 ns access time) • (board block diagram: two Data IO FPGAs with mezzanine card connectors on P1, a Control/Merger FPGA, the two 128K x 36 SRAMs, SLINK signal lines, TS/TRK and L1/TRK lines on the P2/P3 connectors, plus spare lines)
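The memory numbers on this slide can be cross-checked with a few lines of arithmetic. The C sketch below uses the 128K x 36 SRAM geometry quoted above; the ~213 kbit embedded-RAM figure per APEX 20K400 is an assumption chosen to be consistent with the 80 KB-per-board number on the slide.

/* Back-of-envelope check of the memory figures quoted on this slide.
 * The 128K x 36 SRAM geometry is from the slide; the ~213 kbit
 * embedded-RAM figure per APEX 20K400 is an assumption consistent
 * with the "80 KB RAM per board" number quoted above. */
#include <stdio.h>

int main(void)
{
    /* Two pipelined SRAMs, each 128K words x 36 bits */
    const double sram_bits  = 2.0 * 128.0 * 1024.0 * 36.0;
    const double sram_bytes = sram_bits / 8.0;

    /* Three APEX 20K400 FPGAs, assumed ~213 kbits embedded RAM each */
    const double fpga_ram_bits  = 3.0 * 212992.0;
    const double fpga_ram_bytes = fpga_ram_bits / 8.0;

    printf("external SRAM : %.2f MB (slide quotes ~1 MB)\n",
           sram_bytes / (1024.0 * 1024.0));
    printf("FPGA block RAM: %.1f KB (slide quotes ~80 KB)\n",
           fpga_ram_bytes / 1024.0);
    return 0;
}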

  8. Pulsar is designed to be fully self-testable, at the board level as well as the system level: for every input/output there can be an output/input, and all interfaces are bi-directional (except VME): TSI, SVT/XTRP, SRAMs, Gigabit Ethernet, RF clock. One for all and all for one. It is this feature which allows us to develop & tune an upgrade system in standalone mode. Overall planning should be based on this feature.
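As an illustration of the self-testing idea, here is a minimal C sketch of a board-level loopback check: a Tx Pulsar drives a known pattern into an Rx Pulsar (or the same board looped back on itself) and the spy buffers are read back and compared. All of the pulsar_* routines are hypothetical placeholders, stubbed here to simulate a perfect loopback; they are not the actual VME library calls.

/* Board-level loopback sketch: drive a known pattern out of a Tx Pulsar
 * channel and read it back from an Rx Pulsar spy buffer over VME.
 * The pulsar_* routines are hypothetical placeholders, stubbed to
 * simulate a perfect loopback so this compiles and runs. */
#include <stdio.h>
#include <string.h>

#define N_WORDS 256

static unsigned link_buf[N_WORDS];   /* stands in for the physical link */

static int pulsar_write_tx_pattern(int slot, int ch, const unsigned *b, int n)
{ (void)slot; (void)ch; memcpy(link_buf, b, n * sizeof *b); return 0; }

static int pulsar_trigger(int slot) { (void)slot; return 0; }

static int pulsar_read_rx_spy(int slot, int ch, unsigned *b, int n)
{ (void)slot; (void)ch; memcpy(b, link_buf, n * sizeof *b); return 0; }

/* Send a walking pattern from tx_slot to rx_slot on one channel and
 * compare what comes back against what was sent. */
static int loopback_test(int tx_slot, int rx_slot, int channel)
{
    unsigned sent[N_WORDS], recv[N_WORDS];

    for (int i = 0; i < N_WORDS; i++)
        sent[i] = (unsigned)(i | (i << 16));

    if (pulsar_write_tx_pattern(tx_slot, channel, sent, N_WORDS)) return -1;
    if (pulsar_trigger(tx_slot))                                  return -1;
    if (pulsar_read_rx_spy(rx_slot, channel, recv, N_WORDS))      return -1;

    return memcmp(sent, recv, sizeof sent) ? -1 : 0;
}

int main(void)
{
    int rc = loopback_test(5, 9, 0);   /* slot numbers are arbitrary here */
    printf("channel 0 loopback: %s\n", rc == 0 ? "ok" : "FAIL");
    return rc ? 1 : 0;
}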

  9. An old slide from an old talk on Dec. 7th, 2001 (original motivation to build Pulsar: reduce the impact on the running experiment). The way we have been debugging the Level 2 decision crate very often requires using CDF + the Tevatron; a HUGE amount of work has been done this way by a few hardworking experts. Why diagnostic tools? The idea is to build test-stand tools to “replace” CDF and the Tevatron, to make life MUCH easier. (diagrams: the Fermilab accelerator complex (Booster, Main Injector, antiproton source, Tevatron, CDF, DØ) and the L2 decision crate with its inputs (L1, α, SVT, XTRP, CLIST, ISO, Reces, TS), probed with HP scopes and a logic analyzer)

  10. A slightly modified version for the funding agency: Pulsar can be used as “weapons against mass distraction” to reduce the impact on the running experiment, while CDF can be used as “weapons of top mass construction”. Pulsar can be used to emulate both CDF/Tevatron and ATLAS/LHC for the L2 trigger (real-data emulation). (diagram: the same accelerator-complex cartoon, with Pulsar emulating the L2T inputs)

  11. Hardware setup during Phase A: • (1) Pulsar Tx => Rx => PCI => PC, with 4 hotlink mezzanine cards (muon/L1/SVT/XTRP/TSI paths): fully develop teststand capability (should test with Run IIa boards), develop core firmware for all cases, robustness/production test setup (see Burkard’s talk) • (2) Pulsar Tx => Rx => PCI => PC, with 4 Taxi mezzanine cards (Reces path) • (3) Pulsar Tx => Rx => PCI => PC, with 2 hotlink (Cluster) & 2 Taxi (Isolation) cards • (diagram: Tx board feeding an Rx board feeding a PC)

  12. Core firmware example: DataIO FPGA (Rx case) => the Control FPGA is similar. (block diagram: VME responses, DAQ buffers, SRAM interface, CDF Ctrl interface, mezzanine card interface, filter algorithm, SLINK formatter, play/record blocks on the upstream and downstream sides, RAM/spy buffers) • The firmware was designed this way from day one • Blocks in green are common to all FPGA types; blocks in white are data-path specific (the core firmware only deals with the simplest case: pack ALL data into SLINK format)
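To make the “simplest case” concrete, here is a small host-side C sketch of packing a data path’s raw words into an SLINK-style frame. The header/trailer words and the frame layout are simplified assumptions for illustration only, not the actual SLINK control-word definition used by the firmware.

/* Host-side sketch of the "simplest case" above: take whatever words
 * arrive on a data path and pack them into an SLINK-style frame.
 * The header/trailer layout here is an illustrative assumption, not
 * the real SLINK control-word definition. */
#include <stdint.h>
#include <stdio.h>

#define FRAME_MAX 1024

/* Pack raw input words into out[]: [header][payload...][trailer with count].
 * Returns the number of 32-bit words written, or -1 on overflow. */
static int pack_slink_frame(const uint32_t *in, int n_in,
                            uint32_t bunch_id, uint32_t *out, int out_max)
{
    if (n_in + 2 > out_max)
        return -1;

    int k = 0;
    out[k++] = 0xB0F00000u | (bunch_id & 0xFFFF);        /* assumed begin-of-frame */
    for (int i = 0; i < n_in; i++)
        out[k++] = in[i];                                /* payload passed through */
    out[k++] = 0xE0F00000u | ((uint32_t)n_in & 0xFFFF);  /* assumed end-of-frame   */
    return k;
}

int main(void)
{
    uint32_t muon_words[4] = { 0x1111, 0x2222, 0x3333, 0x4444 };
    uint32_t frame[FRAME_MAX];
    int n = pack_slink_frame(muon_words, 4, 0x2A, frame, FRAME_MAX);
    for (int i = 0; i < n; i++)
        printf("%08X\n", frame[i]);
    return 0;
}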

  13. A well organized firmware effort is one of the keys to having a system that is clean/easy-to-understand/robust, with minimal maintenance effort. (diagram: the common core firmware blocks (VME responses, SLINK formatter, DAQ buffers, CDF Ctrl responses, SRAM ctrl, mezzanine interfaces, diagnostic RAMs, spy buffers) shared across all data paths: muon, SVT, XTRP, L1, TSI, cluster, Isolation and Reces, over hotlink, Taxi, or LVDS + external FIFO links) • Knowing one data path Pulsar → knowing all Pulsars: the data-path-specific algorithm differences are just minor details • Need dedicated people responsible for developing the core firmware
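The “one core, many paths” organization can be sketched in software terms: everything shared lives in one common processing routine, and each data path supplies only a small filter step. The structures and names in the C sketch below are illustrative assumptions, not the actual firmware entities.

/* Sketch of the firmware organization described above: one set of common
 * "core" stages shared by every data path, with the path-specific
 * algorithm reduced to a single pluggable filter step. */
#include <stdint.h>
#include <stdio.h>

typedef int (*path_filter_fn)(const uint32_t *in, int n_in,
                              uint32_t *out, int out_max);

struct data_path {
    const char    *name;    /* muon, Reces, cluster, ... */
    path_filter_fn filter;  /* the only path-specific piece */
};

/* Path-specific part for the "simplest case": pass everything through. */
static int passthrough_filter(const uint32_t *in, int n_in,
                              uint32_t *out, int out_max)
{
    int n = (n_in < out_max) ? n_in : out_max;
    for (int i = 0; i < n; i++)
        out[i] = in[i];
    return n;
}

/* Common core: identical for every path (spy buffers, SLINK formatting,
 * DAQ buffering would all live here in the real firmware). */
static void process_event(const struct data_path *p,
                          const uint32_t *raw, int n_raw)
{
    uint32_t filtered[256];
    int n = p->filter(raw, n_raw, filtered, 256);
    printf("%s path: %d of %d words kept\n", p->name, n, n_raw);
}

int main(void)
{
    struct data_path muon = { "muon", passthrough_filter };
    uint32_t raw[3] = { 0xA, 0xB, 0xC };
    process_event(&muon, raw, 3);
    return 0;
}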

  14. Summary of what I have said, or guidelines for the project planning: project planning should follow the Pulsar design philosophy: (1) In God we trust, everything else we test; (2) One for all and all for one. Anyone who has a problem with this is in the wrong project.

  15. Goals of this meeting • Focus on planning, not technical details • Roadmap • Near-term goals and long-term plan • Manpower availability (current/future) • Who is interested in working on what & when • Get a clear picture of the issues/tasks involved • → will have a few technical talks… By now, hopefully you are all warmed up…

  16. Agenda • Introduction (TL, 20 mins) • Pulsar hardware status (Mircea, 5 mins) • Automating Pulsar test procedures (Burkard, 10 mins) • L2 data format/real data patterns (Cheng-Ju, 10 mins) • Pulsar algorithm firmware spec for Iso & Reces (Bob, 20 mins) • Status/plan of the S32PCI64/CPU/OS study (Paul, 20 mins) • ATLAS RoIB option (Bob, 20 mins) • Discussions (ALL, 60 mins) • Summary (5 mins) • Dinner at 6 pm

  17. Material (some initial thoughts) for the discussion session • Some details on hardware needs during Phases A/B/C • Manpower/organization • Near-term goals for each effort • Future meetings & L2 upgrade e-log • More material can be found on the meeting web page: http://hep.uchicago.edu/~thliu/projects/Pulsar/L2_upgrade_meeting.html

  18. Some details: hardware setup during Phase A1 • (1) Current setup (since Nov. 02): Pulsar Tx => Rx => PCI => PC with 4 hotlink mezzanine cards (for the muon/L1/SVT/TSI paths); uses the CERN AUX card, an old SLINK-to-PCI card, the ANL Gigabit Ethernet card, and an old PC • Will need to: test with our AUX card (next month); obtain 4 more hotlink mezz cards as reference cards to test new Pulsars

  19. Hardware setup during Phase A2 • (2) Next setup (March 03 --): Pulsar Tx => Rx => PCI => PC with 4 Taxi mezzanine cards (for the Reces path) • 2 more Pulsars from the prototype run (ready for testing next week) • 4 Taxi mezz cards (next month) • Will need to: test the two new Pulsars with the 4 hotlink mezz reference cards first

  20. Hardware setup during Phase A3 • (3) Next setup (~April 03): Pulsar Tx => Rx => PCI => PC with 2 hotlink/Taxi cards (for the Cluster/Isolation paths) • 2 more Pulsars from pre-production • 2 hotlink/Taxi mezz cards • Will need to: test the two new Pulsars with the 4 hotlink mezz reference cards first; a new PC with a new PCI card • Once this setup is fully developed & tested: time for production

  21. Pulsar hardware needs during Phase A (summary) • (1) Current setup: Pulsar Tx => Rx => PCI => PC with 4 hotlink mezzanine cards (for the muon/L1/SVT/TSI paths); needs 4 more hotlink mezzanine cards as reference cards for new-motherboard testing, and one AUX card (5 V) • (2) Next setup: Pulsar Tx => Rx => PCI => PC with 4 Taxi mezzanine cards (Reces path); needs 2 more Pulsar boards from the prototype batch (next week), 4 Taxi mezzanine cards from the prototype run, and one more AUX card (3.3 V) • (3) Next setup: Pulsar Tx => Rx => PCI => PC with 2 hotlink (Cluster) & 2 Taxi (Isolation) cards; needs 2 more Pulsar boards from the pre-production run and 2 hotlink/Taxi mezzanine cards • Production starts after (1)-(3) are fully tested

  22. Hardware setup for Phase B • Does the RoIB need a separate crate? • Initial system-level integration and performance studies • Finalize data-path-specific firmware; reduce the data size as necessary • Online monitoring software • … • (diagram: Tx and Rx crates in the teststand area, each holding MUON, 4x RECES, TRACER, CLU/ISO and ROC Pulsars, plus the TSI)

  23. Hardware setup for Phase C • In addition to the teststand-area setup in standalone mode, set up the full system (Rx) in the L2 central crate next to the current L2 decision crate • Initial system-level integration with the real system • Fine tune FW/SF/performance • Parasitic running if possible • Special test runs with cosmic rays and beam • … • (diagram: the Rx crate with MUON, 4x RECES, TRACER, CLU/ISO and ROC Pulsars in the L2 central crate in the trigger room)

  24. Project Overview • May be roughly divided into four “parallel” efforts: • E1: Pulsar hardware/firmware/VME software • E2: Design spec for data format/algorithm/system interface • E3: RoIB option effort (see Bob’s talk) • E4: PCI/CPU architecture/OS/infrastructure software/L2D software • (diagram: efforts E1-E4 spanning Phases A/B/C)

  25. (E1) Pulsar hardware/firmware/VME software • Pulsar/mezz/AUX production: UC shop • Production testing: Burkard/Sakari + Kristian + … • Technical support: UC (Harold/Mircea) / FNAL (Bob Demaat) • Infrastructure support (crates/TSI/Testclk/Tracer): CJ Lin / P. Wittich • Core firmware spec/implementation: Burkard/Sakari + others • Core VME software (testing, online): Burkard/Kristian/Sakari • Subsystem-specific algorithm firmware/VME software: muon path: Shawn, with support from Sakari/Burkard; Reces path: ?; Cluster path: Kristian; Isolation: Kristian; L1/XTRP/SVT: part of the core firmware • Overall coordinator: Burkard Reisert • Consultants: Mel Shochet/Bob Blair/John Dawson/Peter Wilson/Jonathan Lewis... + volunteers

  26. E1: possible near-term goals • (1) Feb: have the automated testing procedure fully working for 16 hotlink channels (muon path): Tx->Rx->PC (a sketch of such a test loop follows below) • (2) March: finish testing 2 more Pulsar boards with hotlink/Taxi mezzanine cards and AUX cards • (3) April: initial proof-of-principle test with the S32PCI64 in the test setup • (4) May: core firmware (Tx/Rx) fully developed and tested with hotlink/Taxi mezzanine cards • (5) June: production/testing for Pulsar/mezz cards/AUX cards • (6) July: have Tx->Rx->PC fully developed for Reces (Taxi) and Cluster/Isolation (hotlink/Taxi) • (7) Aug: document everything above -> CDF note
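For item (1), what the automated hotlink test loop might look like is sketched below in C: run many iterations of a loopback check on each of the 16 channels and summarize the error counts. test_channel_once() is a hypothetical placeholder for the real VME test routine, stubbed out so the sketch compiles and runs.

/* Sketch of an automated per-channel test loop: many iterations of a
 * loopback check per channel, with a pass/fail summary suitable for
 * unattended production testing. */
#include <stdio.h>
#include <stdlib.h>

#define N_CHANNELS   16
#define N_ITERATIONS 1000

/* Hypothetical stand-in: returns 0 on success, nonzero on data mismatch. */
static int test_channel_once(int channel, unsigned seed)
{
    (void)channel; (void)seed;
    return 0;  /* simulated perfect hardware */
}

int main(void)
{
    int failed = 0;

    for (int ch = 0; ch < N_CHANNELS; ch++) {
        int errors = 0;
        for (int it = 0; it < N_ITERATIONS; it++)
            errors += (test_channel_once(ch, (unsigned)rand()) != 0);
        printf("channel %2d: %s (%d/%d errors)\n",
               ch, errors ? "FAIL" : "ok", errors, N_ITERATIONS);
        failed += (errors != 0);
    }
    return failed ? EXIT_FAILURE : EXIT_SUCCESS;
}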

  27. (E2) Firmware spec / SLINK data format / data patterns • System level (TS handshake, DAQ etc.): Peter/Mel/Bob Blair/John + others • Subsystem specific: muon: Mel/Cheng-Ju/Shawn Kwang; Reces: Bob Blair/Cheng-Ju/?; Cluster: Cheng-Ju/Mel/Kristian; Isolation: Bob/Cheng-Ju/Kristian; L1/XTRP/SVT: Cheng-Ju/Burkard/Mel • Overall coordinator: Cheng-Ju Lin • Consultants: Jonathan Lewis/Peter Wilson/Henry Frisch/Jimmy Proudfoot/... + volunteers

  28. E2: possible near-term goals • (1) Feb: document ALL L2 data formats • (2) March: software ready to derive all real data patterns • (3) April: initial algorithm specification for each path • (4) May: software modeling based on the algorithm specifications → online monitoring software • (5) June: finalize the algorithm/system firmware specification • (6) July: document everything above -> CDF note ready

  29. (E4) PCI/CPU/OS/infrastructure software/L2D software • Need more thinking here on the organization… need a postdoc in this area? • R&D setup at Upenn: Paul Keener (PCI/CPU/OS/infrastructure software development; Upenn has funding from the university for this R&D effort) • Testing/integration setup at ANL: Bob Blair/John Dawson + ? • Testing/integration setup at FNAL: Kristian + ? • L2 algorithm software: Peter Wittich/Bob/Cheng-Ju/Kristian/Burkard... • Overall coordinator: Peter Wittich • Consultants: Jim Patrick/Rick van Berg/Joe Kroll/Franco Spinella/Bill Ashmanskas... +

  30. E4: possible near-term goals • (1) Feb: at the very least, repeat CERN's results on S32PCI64 performance (a throughput-measurement sketch follows below) • (2) March: pick one PC platform/OS with reasonable performance (doesn't have to be the best) • (3) April: basic infrastructure software ready early April and transferred to the FNAL teststand, along with a PC/OS; this will be used for the proof-of-principle test at FNAL in April and for teststand needs • (4) May-June: more studies to see if there is any better choice of PC platform/OS; study how to send the L2 decision back to Pulsar • (5) July: final decision on PC platform/OS... and infrastructure software... • (6) Aug: final decision on how to send the L2 decision back to Pulsar • (7) Sept: document everything above -> CDF note ready
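For item (1), the kind of measurement involved is sketched below in C: time a large number of fragment reads from the SLINK-to-PCI interface and report the throughput in MB/s. read_event_fragment() is a hypothetical wrapper standing in for whatever call the S32PCI64 driver library actually provides; it is stubbed here so the sketch compiles and runs.

/* Rough throughput-measurement sketch: time N_FRAGS fragment reads and
 * report MB/s. The driver call is replaced by a stub that just fills
 * the buffer, so the numbers printed here are meaningless; only the
 * measurement structure is illustrated. */
#include <stdio.h>
#include <string.h>
#include <sys/time.h>

#define FRAG_BYTES 4096
#define N_FRAGS    100000

/* Hypothetical stand-in for the real driver call; fills buf and returns
 * the number of bytes "transferred". */
static int read_event_fragment(void *buf, int max_bytes)
{
    memset(buf, 0xAB, (size_t)max_bytes);
    return max_bytes;
}

int main(void)
{
    static char buf[FRAG_BYTES];
    struct timeval t0, t1;
    long long total = 0;

    gettimeofday(&t0, NULL);
    for (int i = 0; i < N_FRAGS; i++)
        total += read_event_fragment(buf, FRAG_BYTES);
    gettimeofday(&t1, NULL);

    double sec = (t1.tv_sec - t0.tv_sec) + 1e-6 * (t1.tv_usec - t0.tv_usec);
    printf("%lld bytes in %.3f s -> %.1f MB/s\n",
           total, sec, total / (1024.0 * 1024.0) / sec);
    return 0;
}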

  31. Future meetings • Group meetings: once per month is reasonable; I would suggest that each group (ANL (Bob) / FNAL / INFN (Luciano) / UC (Mel) / Upenn (Joe)) take turns organizing/chairing the meetings • Subgroup meetings during Phase A: bi-weekly regular meetings, or let each coordinator decide when to call the meetings? • Create an L2 upgrade e-log? Too many meetings → too much BS. We have good people who want to spend more time doing real work. Based on years of observation: there is much less BS on an e-log; it is an effective way to improve communication

  32. RunIIA and RunIIB 2B or not 2B?
